Maths behind training a neural network using Gradient Descent

Well, recently I tried to contribute to a neural network project; I won’t lie, I sucked miserably. But it is never too late to learn something, and I will be pretty upset if I get another chance to contribute to something as sexy as a neural network and I suck once again. So here I go, learning neural networks.

So what does a neural network model look like? It is a composition of a bunch of layers, and every layer contains a bunch of neurons. The topmost layer is called the output layer and the bottommost layer is called the input layer; depending on the input, data propagates towards the output. Many hidden layers reside in between these two. How many hidden layers you would love to have is a modeling problem; we need to consider many things to model our neural network. Every neuron is connected with every neuron of the next layer. Every node/neuron has its own activation function, every edge has its own weight, and every layer has its own bias. So at the end of the day, one way or the other, every neuron contributes to the output. Now when we talk about training a neural network, we are basically saying that we want to set the values of these weight and bias parameters in such a way that for every input we get the correct result. How do we do that? That’s what this blog is all about.
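To make that flow concrete, here is a minimal NumPy sketch of one forward pass. The layer sizes, the tanh activation, and the random parameter values are all assumptions I am making for illustration, not anything prescribed above.

```python
import numpy as np

# A minimal forward pass through a fully connected network: every neuron of one
# layer feeds every neuron of the next. Sizes and tanh are illustrative choices.
rng = np.random.default_rng(0)
sizes = [4, 5, 3]    # input layer, one hidden layer, output layer
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights, biases):
        a = W @ h + b      # pre-activation: weighted inputs plus the layer's bias
        h = np.tanh(a)     # each neuron applies its activation function
    return h

print(forward(rng.standard_normal(4)))    # 3 output values
```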

 

So what can we do? We can go for empirical risk minimization! Basically, we transform the training problem into an optimization problem where we need to minimize a loss function of the output and the desired output. To save paper, or to impress academics, we put it this way:

\arg\min_{\theta}\ \frac{1}{T} \sum_{t} l(f(X^{t}; \theta), Y^{t}) + \lambda\,\Omega(\theta)

where f() is some function that predicts the output, l() is some loss function of the predicted and real outputs, and Ω() is a regularization function that takes Θ, the weights we are learning; it penalizes parameter values we don’t want to take. We will also need a smooth loss function, because it is hard to optimize a non-smooth one.
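As a sketch in code, the objective could look like this; the `predict` and `loss` callables and the squared-L2 choice of Ω are placeholder assumptions of mine:

```python
import numpy as np

# Empirical risk: (1/T) * sum_t loss(predict(X_t), Y_t) + lam * Omega(theta).
def empirical_risk(predict, loss, theta, X, Y, lam):
    data_term = np.mean([loss(predict(x, theta), y) for x, y in zip(X, Y)])
    omega = sum(np.sum(p ** 2) for p in theta)    # Omega(theta): an L2 penalty
    return data_term + lam * omega
```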

From the optimization literature, we can use stochastic gradient descent (SGD) to solve this, with Θ = [W^{(1)}, b^{(1)}, …, W^{(L+1)}, b^{(L+1)}]. So what do we do? For N iterations we will find

\Delta = -\nabla_{\Theta}\, l(f(X^{t}; \Theta), Y^{t}) - \lambda\, \nabla_{\Theta}\, \Omega(\Theta)

\Theta \leftarrow \Theta + \alpha \Delta
Each full pass over the training set is known as an epoch. So in every iteration we are finding a gradient; now we need to define those functions.
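A minimal sketch of that loop, assuming a `grad` helper that returns the loss gradient for each parameter block, and assuming Ω is the L2 regularizer so that λ∇Ω(Θ) becomes 2λΘ:

```python
# Stochastic gradient descent over the training data. `grad`, the step size
# alpha, and the L2 regularizer are all illustrative assumptions.
def sgd(theta, data, grad, alpha=0.01, lam=1e-4, n_epochs=10):
    for _ in range(n_epochs):            # an epoch = one full pass over the data
        for x, y in data:                # stochastic: one example at a time
            grads = grad(theta, x, y)    # list of d(loss)/d(theta_i)
            for i in range(len(theta)):
                delta = -grads[i] - lam * 2 * theta[i]
                theta[i] = theta[i] + alpha * delta   # Theta <- Theta + alpha * Delta
    return theta
```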
Let’s discuss the **cost function** first. As we can see, our cost function relies on another function f(). f() is basically a prediction function for the probability P(y=c|x): it gives the probability of y being in class c when x is given. We would like to maximize this probability; since we are framing this as a minimization problem, we can define l() as the negative log-likelihood:

l(f(x), y) = -\log f(x)_y
l() is also known as the cross-entropy in information theory.
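For example, with made-up probabilities for three classes, the loss is small when the model puts high probability on the right class and large when it does not:

```python
import numpy as np

# l(f(x), y) = -log f(x)_y, the negative log-likelihood of the correct class.
def nll(probs, y):
    return -np.log(probs[y])

probs = np.array([0.7, 0.2, 0.1])   # f(x): predicted P(y=c|x) for 3 classes
print(nll(probs, 0))                # right and confident: ~0.357
print(nll(probs, 2))                # wrong and confident: ~2.303
```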

Now let’s discuss the gradients. The partial derivative of our negative log function is

\frac{\partial}{\partial f(x)_{c}} \left(-\log f(x)_y\right) = \frac{-1_{(y=c)}}{f(x)_{y}}

As we can see, the indicator 1_{(y=c)} is not strictly a necessary part of the derivative; we add it as a filter: when y is not c it is 0, so it zeroes out every component except the one where c = y.

So the gradient with respect to the probability vector f(x) is

\nabla_{f(x)} \left(-\log f(x)_y\right) = \frac{-1}{f(x)_{y}} \begin{bmatrix} 1_{(y=0)} \\ \vdots \\ 1_{(y=C-1)} \end{bmatrix} = \frac{-e(y)}{f(x)_y}

where e(y) is the one-hot vector for class y.
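For instance, in a toy case with C = 3 classes and correct class y = 1, the indicator vector is exactly the one-hot vector e(1):

\nabla_{f(x)} \left(-\log f(x)_1\right) = \frac{-1}{f(x)_1} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ -1/f(x)_1 \\ 0 \end{bmatrix}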
Now we are interested in the partial derivative with respect to the output **pre-activation**:

\frac{\partial}{\partial a^{(L+1)}(x)_c} \left(-\log f(x)_y\right)

Now we expand f() as a softmax function, which basically normalizes the exponential of each pre-activation over the sum of all the exponentials.
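A common way to implement it (the max-subtraction is a standard numerical-stability trick, not part of the derivation here):

```python
import numpy as np

# softmax(a)_c = exp(a_c) / sum_{c'} exp(a_{c'}); shifting by max(a) avoids overflow.
def softmax(a):
    e = np.exp(a - np.max(a))
    return e / np.sum(e)

print(softmax(np.array([2.0, 1.0, 0.1])))   # sums to 1: ~[0.659 0.242 0.099]
```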

\frac{-1}{f(x)_y} \cdot \frac{\partial}{\partial a^{(L+1)}(x)_c}\, \mathrm{softmax}(a^{(L+1)}(x))_y = \frac{-1}{f(x)_y} \cdot \frac{\partial}{\partial a^{(L+1)}(x)_c}\, \frac{\exp(a^{(L+1)}(x)_y)}{\sum_{c'} \exp(a^{(L+1)}(x)_{c'})}
We have this formula for the partial derivative of a ratio: \left(\frac{g}{h}\right)' = \frac{g'h - gh'}{h^{2}}. If we apply it to our previous equation, we get this:

\frac{\partial}{\partial a^{(L+1)}(x)_c} \left(-\log f(x)_y\right) = \frac{-1}{f(x)_y} \left(1_{(y=c)}\, \mathrm{softmax}(a^{(L+1)}(x))_y - \mathrm{softmax}(a^{(L+1)}(x))_y\, \mathrm{softmax}(a^{(L+1)}(x))_c\right)

Substituting f(x) back in, we get:

\frac{\partial}{\partial a^{(L+1)}(x)_c} \left(-\log f(x)_y\right) = \frac{-1}{f(x)_y} \left(1_{(y=c)}\, f(x)_y - f(x)_y\, f(x)_c\right)

\frac{\partial}{\partial a^{(L+1)}(x)_{c}} \left(-\log f(x)_y\right) = -\left(1_{(y=c)} - f(x)_{c}\right)

So the gradient,

\nabla_{a^{(L+1)}(x)} \left(-\log f(x)_y\right) = -(e(y) - f(x)) \qquad (1)
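As a sanity check, and this is a toy example of mine rather than part of the derivation, we can compare equation (1) against finite differences:

```python
import numpy as np

# Verify d(-log softmax(a)_y)/da = f(x) - e(y) numerically.
def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

def loss(a, y):
    return -np.log(softmax(a)[y])

a, y = np.array([0.5, -1.2, 0.3]), 0
analytic = softmax(a) - np.eye(3)[y]            # f(x) - e(y)
numeric = np.array([(loss(a + eps, y) - loss(a - eps, y)) / (2e-6)
                    for eps in 1e-6 * np.eye(3)])   # central differences
print(np.allclose(analytic, numeric, atol=1e-5))    # True
```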

 

We will also need to find the gradients of the hidden layers of the neural network; if we calculated the gradient for each neuron one at a time, we would grow old solving this. So we use the chain rule, where a is the activation, p is the loss function, and q is the pre-activation of the layer above:

\frac{\partial}{\partial a}\, p(a) = \sum_i \frac{\partial p(a)}{\partial q_i(a)} \cdot \frac{\partial q_i(a)}{\partial a}

If we are at the k-th layer, we are interested in learning its gradients.

The partial derivative with respect to the activation of the j-th hidden unit of the k-th hidden layer:

\frac{\partial}{\partial h^{(k)}(x)_j} \left(-\log f(x)_y\right) = \sum_i \frac{\partial \left(-\log f(x)_y\right)}{\partial a^{(k+1)}(x)_i} \cdot \frac{\partial a^{(k+1)}(x)_i}{\partial h^{(k)}(x)_j}
Here a^{(k+1)}(x) = b^{(k+1)} + W^{(k+1)} h^{(k)}(x), so the differentiation of a with respect to the hidden activation will be W, no surprise:

\frac{\partial a^{(k+1)}(x)_i}{\partial h^{(k)}(x)_j} = W^{(k+1)}_{i,j}

So the gradient:

\nabla_{h^{(k)}(x)} \left(-\log f(x)_y\right) = (W^{(k+1)})^T \left(\nabla_{a^{(k+1)}(x)} \left(-\log f(x)_y\right)\right) \qquad (2)
The gradient with respect to the weights follows the same pattern:

\nabla_{W^{(k)}} \left(-\log f(x)_y\right) = \left(\nabla_{a^{(k)}(x)} \left(-\log f(x)_y\right)\right) h^{(k-1)}(x)^T

which is basically because a^{(k)}(x) = b^{(k)} + W^{(k)} h^{(k-1)}(x).

Partial derivatives of the biases:

\frac{\partial}{\partial b^{(k)}_{i}} \left(-\log f(x)_y\right) = \frac{\partial \left(-\log f(x)_y\right)}{\partial a^{(k)}(x)_i} \cdot \frac{\partial a^{(k)}(x)_i}{\partial b^{(k)}_{i}} = \frac{\partial \left(-\log f(x)_y\right)}{\partial a^{(k)}(x)_i} \cdot 1
We see that most of them have something in common: the gradient of the pre-activation.
In backward propagation, we assume that f(x) has already been computed by a forward pass. Now, for all k, we find the gradients layer by layer. Θ in a 2-hidden-layer neural network is defined as [W1, b1, W2, b2], so, from my understanding so far, we plug these gradients into the gradient descent algorithm: in each iteration we get new values of the Ws and bs. The full backward pass is the loop below (a NumPy sketch follows the list).
• Compute the gradient of the pre-activation at the output layer:

  \nabla_{a^{(L+1)}(x)} \left(-\log f(x)_y\right) = -(e(y) - f(x))

• For k = L+1 down to 1:
  • Compute the gradients of the weights and biases:

    \nabla_{W^{(k)}} \left(-\log f(x)_y\right) = \left(\nabla_{a^{(k)}(x)} \left(-\log f(x)_y\right)\right) h^{(k-1)}(x)^T
    \nabla_{b^{(k)}} \left(-\log f(x)_y\right) = \nabla_{a^{(k)}(x)} \left(-\log f(x)_y\right)

  • Compute the gradient of the hidden layer below:

    \nabla_{h^{(k-1)}(x)} \left(-\log f(x)_y\right) = (W^{(k)})^T \left(\nabla_{a^{(k)}(x)} \left(-\log f(x)_y\right)\right)

  • Compute the gradient of the pre-activation below, element-wise through the activation function g:

    \nabla_{a^{(k-1)}(x)} \left(-\log f(x)_y\right) = \nabla_{h^{(k-1)}(x)} \left(-\log f(x)_y\right) \odot g'(a^{(k-1)}(x))
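Here is a minimal NumPy sketch of that whole backward pass; the tanh hidden activation (so g'(a) = 1 - tanh(a)^2) and the caching scheme are my assumptions:

```python
import numpy as np

# One forward + backward pass following the algorithm above.
def backprop(x, y, weights, biases):
    # forward pass, caching activations h^0..h^L (hs) for the backward pass
    hs = [x]
    for W, b in zip(weights[:-1], biases[:-1]):
        hs.append(np.tanh(W @ hs[-1] + b))            # hidden activation g = tanh
    a_out = weights[-1] @ hs[-1] + biases[-1]
    e = np.exp(a_out - a_out.max())
    f = e / e.sum()                                   # f(x) = softmax(a^{L+1})

    # backward pass, from the output layer down
    grads_W = [None] * len(weights)
    grads_b = [None] * len(weights)
    grad_a = f - np.eye(len(f))[y]                    # eq. (1): -(e(y) - f(x))
    for k in reversed(range(len(weights))):           # k = L+1 down to 1
        grads_W[k] = np.outer(grad_a, hs[k])          # (grad wrt a^k) h^{k-1}(x)^T
        grads_b[k] = grad_a                           # bias grad = pre-activation grad
        if k > 0:
            grad_h = weights[k].T @ grad_a            # eq. (2)
            grad_a = grad_h * (1 - hs[k] ** 2)        # elementwise g'(a) = 1 - tanh^2
    return grads_W, grads_b
```

Note that `1 - hs[k] ** 2` is g'(a^{(k)}(x)) because `hs[k]` already stores the tanh of that pre-activation.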

How should we initialize our parameters? Well, it looks like it requires a lot of insight to write about that in this blog, but I can say that there is a paper by Glorot and Bengio published in 2010 that suggests sampling each weight from W_{i,j} \sim \mathrm{Uniform}(-b, b), with b = \sqrt{6} / \sqrt{H_k + H_{k-1}}, where H_k and H_{k-1} are the sizes of the two layers the weight connects.
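A quick sketch of that rule (the layer sizes below are made up):

```python
import numpy as np

# W_ij ~ Uniform(-b, b) with b = sqrt(6) / sqrt(H_k + H_{k-1}).
def glorot_uniform(fan_out, fan_in, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    b = np.sqrt(6.0) / np.sqrt(fan_out + fan_in)
    return rng.uniform(-b, b, size=(fan_out, fan_in))

W1 = glorot_uniform(5, 4)    # weights for a layer mapping 4 units to 5
```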

     

Thanks:
1) Martin Thoma, for the inspiration
2) Hugo Larochelle, for the tutorial