machine-learning-hw5: annotate learningCurve.m @ 5:eddd33e57f6a (default, tip)

summary:  Justify loop in trainLinearReg
author:   Jordi Gutiérrez Hermoso <jordigh@octave.org>
date:     Sun, 27 Nov 2011 15:58:14 -0500
parents:  882ffde0ce47
function [error_train, error_val] = learningCurve(X, y, Xval, yval, lambda)
## LEARNINGCURVE Generates the train and cross validation set errors needed
## to plot a learning curve
##   [error_train, error_val] = ...
##       LEARNINGCURVE(X, y, Xval, yval, lambda) returns the train and
## cross validation set errors for a learning curve. In particular,
## it returns two vectors of the same length - error_train and
## error_val. Then, error_train(i) contains the training error for
## i examples (and similarly for error_val(i)).
##
## In this function, you will compute the train and cross validation
## errors for dataset sizes from 1 up to m. In practice, when working
## with larger datasets, you might want to do this in larger intervals.
##

## Number of training examples
m = rows (X);

## Initialise outputs
error_train = zeros(m, 1);
error_val = zeros(m, 1);

## It is not worth getting rid of this loop because the complexity is
## inside trainLinearReg, which in turn calls fmincg. While you
## *could* do some matrix gymnastics to perform a single minimisation
## in a much higher dimensional space instead of m minimisations, the
## effort is unlikely to produce faster code.
for i = 1:m
  ## Train on the first i examples only
  Xtrain = X(1:i, :);
  ytrain = y(1:i);
  theta = trainLinearReg (Xtrain, ytrain, lambda);

  ## Report the unregularised squared error; lambda is only used for training
  error_train(i) = sumsq (Xtrain*theta - ytrain)/(2*i);
  error_val(i) = sumsq (Xval*theta - yval)/(2*length (yval));
endfor

endfunction
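
For context, a minimal driver sketch showing how learningCurve might be called and its output plotted. The data file name ex5data1.mat and its variable names are assumptions drawn from the usual exercise setup, not part of this repository; the sketch also assumes trainLinearReg.m is on the path and that the caller prepends the intercept column of ones, since learningCurve multiplies X*theta directly.

## Hypothetical usage sketch -- the data file and its variable names are
## assumptions, not part of learningCurve.m.
load ("ex5data1.mat");            # assumed to provide X, y, Xval, yval
m = rows (X);
lambda = 0;

[error_train, error_val] = ...
  learningCurve ([ones(m, 1), X], y, [ones(rows (Xval), 1), Xval], yval, lambda);

plot (1:m, error_train, 1:m, error_val);
legend ("Train", "Cross validation");
xlabel ("Number of training examples");
ylabel ("Error");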