Mercurial > hg > machine-learning-hw4

annotate sigmoidGradient.m @ 9:8dd249e99b5b default tip
"Optimisations for backprop"

author    Jordi Gutiérrez Hermoso <jordigh@octave.org>
date      Fri, 11 Nov 2011 20:36:02 -0500
parents   55430128adcd
children

line source (all lines introduced in changeset 2:55430128adcd, "Implement sigmoidGradient."):
function g = sigmoidGradient(z)

## SIGMOIDGRADIENT returns the gradient of the sigmoid function
## evaluated at z
##   g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
##   evaluated at z. This should work regardless of whether z is a matrix
##   or a vector. In particular, if z is a vector or matrix, you should
##   return the gradient for each element.

  s = @(z) 1./(1 + exp(-z));
  g = s(z).*(1 - s(z));

endfunction
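
The last two lines use the identity that differentiating s(z) = 1/(1 + e^-z) gives s'(z) = e^-z / (1 + e^-z)^2 = s(z)(1 - s(z)), applied element-wise. As a minimal, hypothetical sanity check (not part of the repository; it assumes sigmoidGradient.m is on the Octave load path), one can compare the analytic gradient against a central finite difference of the sigmoid:

  # Hypothetical check -- assumes sigmoidGradient.m is on the Octave path.
  z = [-5 -1 0 1 5];                  # works element-wise on vectors and matrices
  s = @(x) 1./(1 + exp(-x));          # the same sigmoid used inside the function
  h = 1e-6;
  numeric  = (s(z + h) - s(z - h)) / (2*h);   # central finite difference
  analytic = sigmoidGradient(z);
  max(abs(numeric - analytic))        # expected to be on the order of 1e-10 or smaller

At z = 0 the gradient attains its maximum value of 0.25, which is a quick spot check for the implementation.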