This is my blog.

## Lesson 9: Cost function and backpropagation

### Backpropagation algorithm

To verify backpropagation, use gradient checking: numerically approximate each partial derivative of the cost function as (J(Θ + ε) − J(Θ − ε)) / (2ε) for a small ε (such as ε = 10⁻⁴), and you will find gradApprox ≈ deltaVector, the gradient that backpropagation computed.
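This check can be sketched in Octave as follows. The loop assumes a cost function `J(theta)` and an unrolled parameter vector `theta` with `n` elements; the variable names follow the course slides:

```octave
epsilon = 1e-4;
for i = 1:n,
  thetaPlus = theta;
  thetaPlus(i) = thetaPlus(i) + epsilon;
  thetaMinus = theta;
  thetaMinus(i) = thetaMinus(i) - epsilon;
  % two-sided numerical approximation of the i-th partial derivative
  gradApprox(i) = (J(thetaPlus) - J(thetaMinus)) / (2 * epsilon);
end;
```

Each element of `gradApprox` should agree with the corresponding element of `deltaVector` to within a few decimal places.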

Once you have verified that your backpropagation implementation is correct, you don't need to compute gradApprox again: the numerical approximation is far too slow to run on every training iteration, since it requires two full cost evaluations per parameter.

### In Octave

In order to use optimization functions such as `fminunc()`, we will want to "unroll" all the elements of our weight matrices and put them into one long vector:
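With three weight matrices (and their gradient matrices `D1`, `D2`, `D3` from backpropagation), the unrolling looks like this; `(:)` flattens a matrix column by column:

```octave
thetaVector = [ Theta1(:); Theta2(:); Theta3(:) ];
deltaVector = [ D1(:); D2(:); D3(:) ];
```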

If the dimensions of Theta1 are 10x11, Theta2 is 10x11, and Theta3 is 1x11, then we can get back our original matrices from the "unrolled" versions as follows:
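Using `reshape`, the index ranges follow from the matrix sizes: Theta1 takes the first 10×11 = 110 elements, Theta2 the next 110, and Theta3 the remaining 11:

```octave
Theta1 = reshape(thetaVector(1:110), 10, 11);
Theta2 = reshape(thetaVector(111:220), 10, 11);
Theta3 = reshape(thetaVector(221:231), 1, 11);
```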

and then we can pass the unrolled vector to `fminunc`:
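A sketch of the call, assuming a `costFunction` that takes the unrolled vector and returns both the cost and the unrolled gradient (inside it you would reshape, run forward and backpropagation, and unroll the gradients again):

```octave
% costFunction must return [jVal, gradientVector]
options = optimset('GradObj', 'on', 'MaxIter', 50);
[optTheta, cost] = fminunc(@costFunction, initialTheta, options);
```

`'GradObj', 'on'` tells `fminunc` that our cost function supplies its own gradient, so it will use the backpropagation gradients rather than approximating them numerically.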

### Random initialization

Initializing all theta weights to zero does not work with neural networks (although it does work for linear regression). When we backpropagate, all nodes will update to the same value repeatedly: every hidden unit receives identical gradients, so they all keep computing the same function and the symmetry is never broken. Instead, initialize each weight to a random value in [−ε, ε].
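In Octave this can be done with `rand`, which draws uniformly from [0, 1); scaling and shifting maps the values into [−ε, ε]. The matrix dimensions below reuse the 10x11 / 1x11 example from above, and ε = 0.12 is just a common small choice:

```octave
INIT_EPSILON = 0.12;   % any small value works
Theta1 = rand(10, 11) * (2 * INIT_EPSILON) - INIT_EPSILON;
Theta2 = rand(10, 11) * (2 * INIT_EPSILON) - INIT_EPSILON;
Theta3 = rand(1, 11) * (2 * INIT_EPSILON) - INIT_EPSILON;
```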