Q12 – Function Representation and Network Capacity

Contributed by Pulkit Khandelwal.

Suppose we are given two types of activation functions, linear and hard threshold, defined as follows:

  • Linear:  y = w_{0} + \sum_{i} w_{i} x_{i}
  • Hard threshold:  y = \begin{cases} 1, & \text{if } w_{0} + \sum_{i} w_{i} x_{i} \geq 0 \\ 0, & \text{otherwise} \end{cases}

Which of the following functions can be represented exactly by a neural network with one hidden layer? You may use linear and/or hard threshold activation functions. Justify your answer with a brief explanation.

  1. polynomials of degree 2
  2. polynomials of degree 1
  3. hinge loss
  4. piecewise constant functions
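As a concrete illustration of one of the options above, here is a minimal sketch (the breakpoints, jump heights, and function names are hypothetical, chosen only for this example) of a one-hidden-layer network with hard threshold hidden units and a linear output unit that computes a particular piecewise constant function: f(x) = 2 on [0, 1), 5 on [1, 3), and 0 elsewhere.

```python
import numpy as np

def hard_threshold(z):
    # Hard threshold activation: 1 if z >= 0, else 0 (elementwise)
    return (z >= 0).astype(float)

def piecewise_constant_net(x):
    """One hidden layer of hard threshold units, linear output.

    Hidden unit i computes the indicator 1[x >= b_i], i.e. it uses
    weight w = 1 and bias w0 = -b_i. The linear output unit weights
    each indicator by the jump the target function makes at b_i.
    Breakpoints and jumps here are a hypothetical example.
    """
    breakpoints = np.array([0.0, 1.0, 3.0])   # hidden-unit thresholds b_i
    h = hard_threshold(x[:, None] - breakpoints[None, :])
    jumps = np.array([2.0, 3.0, -5.0])        # 0 -> 2 -> 5 -> 0
    return h @ jumps

xs = np.array([-1.0, 0.5, 2.0, 4.0])
print(piecewise_constant_net(xs))  # -> [0. 2. 5. 0.]
```

The same construction generalizes: each step of any piecewise constant function with finitely many pieces contributes one threshold hidden unit whose output weight equals the jump at that breakpoint.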
