Radial basis function network: Difference between revisions

See also: correction to previous edit - see also "instance-based learning" - RBFs are an example of this (along with k-nearest neighbour)
Impinball (talk | contribs)
Network architecture: Add sources for a claim and narrow it down, add a tag to call some attention to it.
Line 31:
:<math>\varphi(\mathbf{x}) = \sum_{i=1}^N a_i \rho(||\mathbf{x}-\mathbf{c}_i||)</math>
 
where <math>N</math> is the number of neurons in the hidden layer, <math>\mathbf c_i</math> is the center vector for neuron <math>i</math>, and <math>a_i</math> is the weight of neuron <math>i</math> in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The [[Norm (mathematics)|norm]] is typically taken to be the [[Euclidean distance]] (although the [[Mahalanobis distance]] appears to perform better with pattern recognition<ref>{{cite citeseerx
|last1=Beheim|first1=Larbi
|last2=Zitouni|first2=Adel
|last3=Belloir|first3=Fabien
|date=January 2004
|title=New RBF neural network classifier with optimized hidden neurons number
|citeseerx=10.1.1.497.5646
}}</ref><ref>{{cite conference
|publisher=IEEE
|conference=Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society
|conference-url=https://ieeexplore.ieee.org/servlet/opac?punumber=8844528
|___location=Houston, TX, USA
|last1=Ibrikci|first1=Turgay
|last2=Brandt|first2=M.E.
|last3=Wang|first3=Guanyu
|last4=Acikkar|first4=Mustafa
|date=23–26 October 2002
|publication-date=6 January 2003
|volume=3
|pages=2184–2185
|doi=10.1109/IEMBS.2002.1053230
|url=https://ieeexplore.ieee.org/document/1053230/?arnumber=1053230
|url-access=subscription
|access-date=25 May 2020
|title=Mahalanobis distance with radial basis function network on protein secondary structures
|isbn=0-7803-7612-9
|issn=1094-687X
}}</ref>{{Editorializing|date=May 2020}}<!-- Was previously marked with a "citation needed" asking in what sense using Mahalanobis distance is better and why the Euclidean distance is still normally used, but I found sources to support the first part, so it's likely salvageable. -->) and the radial basis function is commonly taken to be [[Normal distribution|Gaussian]]
 
:<math> \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) = \exp \left[ -\beta \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert ^2 \right] </math>.
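A minimal Python sketch of evaluating this output is shown below. It is illustrative only: the names <code>rbf_output</code>, <code>centers</code>, <code>weights</code> and <code>beta</code> are assumptions made for the example, not part of any particular library.

<syntaxhighlight lang="python">
import numpy as np

def rbf_output(x, centers, weights, beta):
    """Evaluate phi(x) = sum_i a_i * exp(-beta * ||x - c_i||^2).

    x       : input vector, shape (d,)
    centers : center vectors c_i of the hidden neurons, shape (N, d)
    weights : output weights a_i, shape (N,)
    beta    : width parameter of the Gaussian basis function
    """
    # Squared Euclidean distances ||x - c_i||^2 for every hidden neuron
    sq_dists = np.sum((centers - x) ** 2, axis=1)
    # Gaussian activations of the hidden layer
    activations = np.exp(-beta * sq_dists)
    # Linear output neuron: weighted sum of the hidden activations
    return np.dot(weights, activations)

# Example with N = 3 hidden neurons in a 2-dimensional input space
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
weights = np.array([0.5, -1.0, 2.0])
print(rbf_output(np.array([0.5, 0.5]), centers, weights, beta=1.0))
</syntaxhighlight>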