Radial basis function network: Difference between revisions

(9 intermediate revisions by 9 users not shown)
Line 24:
|pages = 321–355
|url = https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf
|access-date = 2019-01-29
|archive-date = 2020-12-01
|archive-url = https://web.archive.org/web/20201201121028/https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf
|url-status = live
}}</ref><ref name="schwenker"/>
 
Line 41 ⟶ 45:
}}</ref><ref>{{cite conference
|conference=Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society
|conference-url=https://ieeexplore.ieee.org/xpl/conhome/8844528/proceeding
|___location=Houston, TX, USA
|last1=Ibrikci|first1=Turgay
Line 53 ⟶ 57:
|doi=10.1109/IEMBS.2002.1053230
|title=Mahalanobis distance with radial basis function network on protein secondary structures
|isbn=0-7803-7612-9
|journal = Engineering in Medicine and Biology Society, Proceedings of the Annual International Conference of the IEEE
|issn=1094-687X
}}</ref>{{Editorializing|date=May 2020}}<!-- Was previously marked with a missing-citation tag asking in what sense using Mahalanobis distance is better and why the Euclidean distance is still normally used, but I found sources to support the first part, so it's likely salvageable. -->) and the radial basis function is commonly taken to be [[Normal distribution|Gaussian]]
Line 65 ⟶ 69:
i.e. changing the parameters of one neuron has only a small effect on input values that are far from the center of that neuron.
 
Given certain mild conditions on the shape of the activation function, RBF networks are [[universal approximator]]s on a [[Compact space|compact]] subset of <math>\mathbb{R}^n</math>.<ref name="Park">{{cite journal|last=Park|first=J.|author2=I. W. Sandberg|s2cid=34868087|date=Summer 1991|title=Universal Approximation Using Radial-Basis-Function Networks|journal=Neural Computation|volume=3|issue=2|pages=246–257|doi=10.1162/neco.1991.3.2.246|pmid=31167308}}</ref> This means that an RBF network with enough hidden neurons can approximate any [[continuous function]] on a closed, bounded set with arbitrary precision.
 
The parameters <math> a_i </math>, <math> \mathbf{c}_i </math>, and <math> \beta_i </math> are determined in a manner that optimizes the fit between <math> \varphi </math> and the data.
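As a minimal sketch of this fitting step, assume the centers <math> \mathbf{c}_i </math> are fixed on a grid and all neurons share one width <math> \beta </math> (both illustrative choices, not prescribed by the article); only the weights <math> a_i </math> then remain, and they can be found by linear least squares:

```python
import numpy as np

# Sketch: fit the linear weights a_i of a Gaussian RBF network
#   phi(x) = sum_i a_i * exp(-beta * ||x - c_i||^2)
# with fixed, hand-chosen centers c_i and a shared width beta.
x = np.linspace(0.0, 1.0, 50)           # 1-D sample inputs
y = np.sin(2.0 * np.pi * x)             # example target values

centers = np.linspace(0.0, 1.0, 10)     # c_i on a grid (an assumption)
beta = 50.0                             # shared width parameter (an assumption)

# Design matrix: Phi[t, i] = exp(-beta * (x_t - c_i)^2)
Phi = np.exp(-beta * (x[:, None] - centers[None, :]) ** 2)

# Weights minimizing ||Phi a - y||^2
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
approx = Phi @ a                        # network output at the sample inputs
```

With the centers and widths held fixed, the optimization over <math> a_i </math> is an ordinary linear least-squares problem; the centers and widths themselves are typically chosen separately, for example by clustering or gradient descent.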
Line 72 ⟶ 76:
]]
 
===Normalization===
{{multiple images
| align = right
Line 119 ⟶ 123:
:<math> P\left ( y \mid \mathbf{x} \right ) </math>
is the conditional probability of y given <math> \mathbf{x} </math>.
The conditional probability is related to the joint probability through [[Bayes' theorem]]
 
:<math> P\left ( y \mid \mathbf{x} \right ) = \frac {P \left ( \mathbf{x} \land y \right )} {P \left ( \mathbf{x} \right )} </math>
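This relation can be checked numerically on a small discrete joint distribution (the numbers below are purely illustrative):

```python
import numpy as np

# Illustrative joint distribution P(x, y) over two binary variables
joint = np.array([[0.10, 0.20],    # P(x=0, y=0), P(x=0, y=1)
                  [0.30, 0.40]])   # P(x=1, y=0), P(x=1, y=1)

p_x = joint.sum(axis=1, keepdims=True)   # marginal P(x)
cond = joint / p_x                       # P(y | x) = P(x and y) / P(x)
```

Each row of `cond` sums to one, as a conditional distribution over <math> y </math> must.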
Line 155 ⟶ 159:
 
:<math> v_{ij}\big ( \mathbf{x} - \mathbf{c}_i \big ) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if } i \in [1,N] \\ \left ( x_{ij} - c_{ij} \right ) \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if }i \in [N+1,2N] \end{cases} </math>
in the unnormalized case and
 
:<math> v_{ij}\big ( \mathbf{x} - \mathbf{c}_i \big ) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if } i \in [1,N] \\ \left ( x_{ij} - c_{ij} \right ) u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if }i \in [N+1,2N] \end{cases} </math>
 
in the normalized case.
 
Here <math> \delta_{ij} </math> is the [[Kronecker delta]], defined as
 
:<math> \delta_{ij} = \begin{cases} 1, & \mbox{if }i = j \\ 0, & \mbox{if }i \ne j \end{cases} </math>.
 
Line 231 ⟶ 228:
\end{matrix} \right]</math>
 
It can be shown that the interpolation matrix in the above equation is non-singular if the points <math>\mathbf x_i</math> are distinct, so the weights <math>w</math> can be found by simple [[linear algebra]]:
:<math>\mathbf{w} = \mathbf{G}^{-1} \mathbf{b}</math>
where <math>G = (g_{ij})</math>.
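A short sketch of this interpolation step, assuming a Gaussian basis function and illustrative 1-D data (neither is prescribed by the equation above); in practice the product <math>\mathbf{G}^{-1} \mathbf{b}</math> is computed with a linear solver rather than by forming the inverse explicitly:

```python
import numpy as np

# Exact RBF interpolation: g_ij = rho(||x_i - x_j||), then solve G w = b.
x = np.linspace(0.0, 1.0, 8)        # distinct interpolation points x_i
b = np.cos(2.0 * np.pi * x)         # example target values b_i
beta = 20.0                         # Gaussian width (an assumption)

# Interpolation matrix with Gaussian basis rho(r) = exp(-beta * r^2)
G = np.exp(-beta * (x[:, None] - x[None, :]) ** 2)

# Solving G w = b directly is numerically preferable to computing G^{-1}
w = np.linalg.solve(G, b)
```

Because the points are distinct, <math>\mathbf{G}</math> is non-singular here, and the resulting interpolant reproduces the data exactly at the points <math>\mathbf x_i</math>.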
Line 299 ⟶ 296:
===Logistic map===
 
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the [[logistic map]], which maps the [[unit interval]] onto itself. It generates a convenient prototype data stream for exploring [[function approximation]], [[time series prediction]], and [[control theory]]. The map originated in the field of [[population dynamics]] and became the prototype for [[chaos theory|chaotic]] time series. In the fully chaotic regime, the map is given by
 
:<math> x(t+1)\ \stackrel{\mathrm{def}}{=}\ f\left [ x(t)\right ] = 4 x(t) \left [ 1-x(t) \right ] </math>
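Iterating this recurrence directly yields the prototype data stream; a minimal sketch (the starting value 0.2 is an arbitrary choice, any seed in the open unit interval works):

```python
# Logistic map in the fully chaotic regime: x(t+1) = 4 x(t) (1 - x(t))
def logistic_series(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

series = logistic_series(0.2, 100)  # 0.2 is an arbitrary seed value
```

The orbit stays within the unit interval while nearby starting values separate rapidly, which is what makes the series a useful chaotic test case for time-series prediction.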
Line 401 ⟶ 398:
* [[Cerebellar model articulation controller]]
* [[Instantaneously trained neural networks]]
* [[Support vector machine]]
 
==References==
Line 408 ⟶ 406:
* J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281-294 (1989). Also see [https://web.archive.org/web/20070302175857/http://www.ki.inf.tu-dresden.de/~fritzke/FuzzyPaper/node5.html Radial basis function networks according to Moody and Darken]
* T. Poggio and F. Girosi, "[http://courses.cs.tamu.edu/rgutier/cpsc636_s10/poggio1990rbf2.pdf Networks for approximation and learning]," Proc. IEEE 78(9), 1484-1487 (1990).
* [[Roger Jones (physicist and entrepreneur)|Roger D. Jones]], Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, [https://ieeexplore.ieee.org/Xplore/home.jsp;jsessionid=1376441BAA8854614AFC21D2C29CDB4FC7DBEB Function approximation and time series prediction with neural networks], Proceedings of the International Joint Conference on Neural Networks, June 17–21, p.&nbsp;I-649 (1990).
* {{cite book | author=Martin D. Buhmann | title=Radial Basis Functions: Theory and Implementations | publisher= Cambridge University| year=2003 | isbn=0-521-63338-9}}
* {{cite book |author1=Yee, Paul V. |author2=Haykin, Simon |name-list-style=amp | title=Regularized Radial Basis Function Networks: Theory and Applications | publisher= John Wiley| year=2001 | isbn=0-471-35349-3}}
Line 420 ⟶ 418:
[[Category:Machine learning algorithms]]
[[Category:Regression analysis]]
[[Category:1988 in artificial intelligence]]