Radial basis function network

There is theoretical justification for this architecture in the case of stochastic data flow. Assume a [[stochastic kernel]] approximation for the joint probability density
 
:<math> P\left ( \mathbf{x} \land y \right ) = {1 \over N} \sum_{i=1}^N \, \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) \, \sigma \big ( \left \vert y - e_i \right \vert \big )</math>
 
where the weights <math> \mathbf{c}_i </math> and <math> e_i </math> are exemplars from the data, and we require the kernels to be normalized:

:<math> \int \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) \, d^n \mathbf{x} = 1 </math>

and

:<math> \int \sigma \big ( \left \vert y - e_i \right \vert \big ) \, dy = 1. </math>
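As a minimal numerical sketch of this kernel density (not from the article: Gaussian choices for <math>\rho</math> and <math>\sigma</math>, one input dimension, and made-up exemplars <math>\mathbf{c}_i</math>, <math>e_i</math>):

```python
import numpy as np

# Hypothetical 1-D setup: Gaussian kernels and made-up exemplar data.
c = np.array([0.0, 1.0, 2.0])   # input exemplars c_i
e = np.array([0.5, 1.5, 2.5])   # output exemplars e_i
h = 0.3                          # assumed kernel width

def rho(r):
    # normalized Gaussian kernel on the input space (n = 1)
    return np.exp(-r**2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)

def sigma(r):
    # normalized Gaussian kernel on the output space
    return np.exp(-r**2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)

def joint(x, y):
    # P(x ∧ y) = (1/N) * sum_i rho(|x - c_i|) * sigma(|y - e_i|)
    return np.mean(rho(np.abs(x - c)) * sigma(np.abs(y - e)))
```

Because each kernel integrates to one, the joint density integrates to one as well, which a grid integration confirms numerically.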
The probability densities in the input and output spaces are
 
:<math> P \left ( \mathbf{x} \right ) = \int P \left ( \mathbf{x} \land y \right ) \, dy = {1 \over N} \sum_{i=1}^N \, \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big )</math>
 
and
 
:<math> P \left ( y \right ) = \int P \left ( \mathbf{x} \land y \right ) \, d^n \mathbf{x} = {1 \over N} \sum_{i=1}^N \, \sigma \big ( \left \vert y - e_i \right \vert \big ) </math>
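Continuing the same assumed Gaussian 1-D sketch, the marginals are mixtures of the input-space and output-space kernels, and integrating the joint density numerically over either variable recovers them:

```python
import numpy as np

# Same hypothetical setup as before: Gaussian kernels, made-up exemplars.
c = np.array([0.0, 1.0, 2.0])   # input exemplars c_i
e = np.array([0.5, 1.5, 2.5])   # output exemplars e_i
h = 0.3                          # assumed kernel width

def rho(r):
    return np.exp(-r**2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)

def sigma(r):
    return np.exp(-r**2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)

def joint(x, y):
    return np.mean(rho(np.abs(x - c)) * sigma(np.abs(y - e)))

def P_x(x):
    # P(x) = (1/N) sum_i rho(|x - c_i|), i.e. the joint integrated over y
    return np.mean(rho(np.abs(x - c)))

def P_y(y):
    # P(y) = (1/N) sum_i sigma(|y - e_i|), i.e. the joint integrated over x
    return np.mean(sigma(np.abs(y - e)))
```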
 
The expectation of y given an input <math> \mathbf{x} </math> is
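Integrating <math>y</math> against the joint density and dividing by <math>P(\mathbf{x})</math> gives this expectation; for the symmetric, normalized kernels in the hypothetical Gaussian sketch above, it collapses to a weighted average of the exemplar outputs <math>e_i</math>:

```python
import numpy as np

# Same hypothetical setup: Gaussian rho, made-up exemplars.
c = np.array([0.0, 1.0, 2.0])   # input exemplars c_i
e = np.array([0.5, 1.5, 2.5])   # output exemplars e_i
h = 0.3                          # assumed kernel width

def rho(r):
    return np.exp(-r**2 / (2 * h**2)) / np.sqrt(2 * np.pi * h**2)

def conditional_mean(x):
    # With symmetric, normalized sigma, integrating y * sigma(|y - e_i|)
    # over y yields e_i, so E[y | x] reduces to a normalized weighted
    # average of the e_i with weights rho(|x - c_i|).
    w = rho(np.abs(x - c))
    return np.dot(w, e) / np.sum(w)
```

This weighted-average form is exactly the shape of a normalized RBF network's output, which is the point of the theoretical justification.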