:<math> a_i (t+1) = a_i(t) + \nu \big [ x(t+1) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {\rho \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{j=1}^N \rho^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_j \right \Vert \big )} </math>
where the [[learning rate]] <math> \nu </math> is taken to be 0.3. The training is performed with one pass through the 100 training points. The [[Root-mean-square deviation|rms error]] is 0.15.
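As a concrete illustration, the update rule above can be implemented in a few lines of Python. The specific choices below (Gaussian basis functions, ten centres spread over [0, 1], width parameter <math>\beta = 50</math>, and the fully chaotic logistic map <math>x(t+1) = 4x(t)(1-x(t))</math> as the target series) are illustrative assumptions, not values prescribed by the update rule itself:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# 100 training transitions from the logistic map (assumed form x_{t+1} = 4 x_t (1 - x_t)).
T = 101
x = np.empty(T)
x[0] = rng.uniform(0.1, 0.9)
for t in range(T - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Unnormalized Gaussian basis functions rho(||x - c_i||) = exp(-beta (x - c_i)^2).
N = 10                        # number of centres (illustrative choice)
c = np.linspace(0.0, 1.0, N)  # centres spread over [0, 1] (illustrative choice)
beta = 50.0                   # width parameter (illustrative choice)
a = np.zeros(N)               # linear weights a_i
nu = 0.3                      # learning rate from the text

def rho(xt):
    return np.exp(-beta * (xt - c) ** 2)

# One pass through the 100 training points, applying the update rule for a_i.
for t in range(T - 1):
    r = rho(x[t])
    phi = np.dot(a, r)                               # network output phi(x(t), w)
    a += nu * (x[t + 1] - phi) * r / np.sum(r ** 2)  # gradient-descent step
</syntaxhighlight>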
[[Image:Normalized basis functions.png|thumb|350px|right|Figure 8: Normalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set. Note the improvement over the unnormalized case.]]
:<math> a_i (t+1) = a_i(t) + \nu \big [ x(t+1) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {u \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{j=1}^N u^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_j \right \Vert \big )} </math>
where the [[learning rate]] <math> \nu </math> is again taken to be 0.3. The training is performed with one pass through the 100 training points. The [[Root-mean-square deviation|rms error]] on a test set of 100 exemplars is 0.084, smaller than the unnormalized error, so normalization yields an improvement in accuracy. Typically, the accuracy advantage of normalized over unnormalized basis functions grows as the dimensionality of the input space increases.
[[File:Chaotic Time Series Prediction.svg|thumb|350px|right|Figure 9: Normalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) as a function of time. Note that the approximation is good for only a few time steps. This is a general characteristic of chaotic time series.]]
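The normalized case replaces <math>\rho</math> with the normalized basis functions <math>u_i = \rho_i / \textstyle\sum_j \rho_j</math> in both the network output and the update rule. A sketch under the same illustrative assumptions as above (Gaussian basis functions, hand-picked centres and width), including the evaluation of the rms error on a held-out test series, is:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(length, x0):
    """Iterate the logistic map x_{t+1} = 4 x_t (1 - x_t) (assumed form)."""
    x = np.empty(length)
    x[0] = x0
    for t in range(length - 1):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x

train = logistic_series(101, rng.uniform(0.1, 0.9))  # 100 training transitions
test = logistic_series(101, rng.uniform(0.1, 0.9))   # 100 test transitions

N = 10                        # number of centres (illustrative choice)
c = np.linspace(0.0, 1.0, N)  # centres (illustrative choice)
beta = 50.0                   # width parameter (illustrative choice)
a = np.zeros(N)               # linear weights a_i
nu = 0.3                      # learning rate from the text

def u(xt):
    """Normalized basis functions u_i = rho_i / sum_j rho_j."""
    r = np.exp(-beta * (xt - c) ** 2)
    return r / np.sum(r)

# One pass through the 100 training points with the normalized update rule.
for t in range(100):
    ut = u(train[t])
    phi = np.dot(a, ut)                                    # network output phi(x(t), w)
    a += nu * (train[t + 1] - phi) * ut / np.sum(ut ** 2)  # gradient-descent step

# rms error of one-step-ahead prediction on the held-out series.
pred = np.array([np.dot(a, u(test[t])) for t in range(100)])
rms = np.sqrt(np.mean((test[1:] - pred) ** 2))
print(f"rms error on the test set: {rms:.3f}")
</syntaxhighlight>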