====Pseudoinverse solution for the linear weights====
After the centers <math>c_i</math> have been fixed, the weights that minimize the error at the output are computed with a linear pseudoinverse solution:
:<math>\mathbf{w} = \mathbf{G}^+ \mathbf{b}</math>,
where the entries of ''G'' are the values of the radial basis functions evaluated at the points <math>x_i</math>: <math>g_{ji} = \rho(||x_j-c_i||)</math>.
The existence of this linear solution means that, unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer of the training error in the linear weights (when the centers are fixed). If ''G'' is rank-deficient the minimizer is not unique (there is an affine space of equivalent least-squares solutions), and the pseudoinverse selects the solution of minimum norm.
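A minimal sketch of this pseudoinverse fit in Python with NumPy; the Gaussian basis <math>\rho(r) = e^{-(r/\beta)^2}</math>, the width <math>\beta</math>, the choice of centers, and the toy data are illustrative assumptions, not part of the article:

```python
import numpy as np

def rbf_design_matrix(X, centers, beta=0.5):
    """Build G with entries g_ji = rho(||x_j - c_i||), here a Gaussian basis."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(dists / beta) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))    # training inputs x_j (toy data)
b = np.sin(3.0 * X[:, 0])                   # target outputs
centers = np.linspace(-1.0, 1.0, 10)[:, None]  # fixed centers c_i (assumed)

G = rbf_design_matrix(X, centers)
w = np.linalg.pinv(G) @ b                   # w = G^+ b, minimum-norm least squares
```

Among all weight vectors that achieve the minimal residual <math>||\mathbf{G}\mathbf{w} - \mathbf{b}||</math>, `np.linalg.pinv` returns the one of smallest norm, which is what makes the solution well defined even when ''G'' is rank-deficient.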
====Gradient descent training of the linear weights====