{{short description|Type of artificial neural network that uses radial basis functions as activation functions}}
In the field of [[mathematical modeling]], a '''radial basis function network''' is an [[artificial neural network]] that uses [[radial basis function]]s as [[activation function]]s. The output of the network is a [[linear combination]] of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including [[function approximation]], [[time series prediction]], [[Statistical classification|classification]], and system [[Control theory|control]]. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the [[Royal Signals and Radar Establishment]].<ref>{{cite tech report
|last1 = Broomhead
|first1 = D. S.
|last2 = Lowe
|first2 = David
|year = 1988
|title = Radial basis functions, multi-variable functional interpolation and adaptive networks
|institution = [[Royal Signals and Radar Establishment|RSRE]]
|number = 4148
|url = http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA196234
|archive-url = https://web.archive.org/web/20130409223044/http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA196234
|url-status = dead
|archive-date = April 9, 2013
}}</ref><ref>{{cite journal
|last1 = Broomhead
|first1 = D. S.
|last2 = Lowe
|first2 = David
|year = 1988
|title = Multivariable functional interpolation and adaptive networks
|journal = Complex Systems
|volume = 2
|pages = 321–355
|url = https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf
|access-date = 2019-01-29
|archive-date = 2020-12-01
|archive-url = https://web.archive.org/web/20201201121028/https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf
|url-status = live
}}</ref><ref name="schwenker"/>
 
==Network architecture==
[[File:Rbf-network.svg|thumb|252x252px|Architecture of a radial basis function network. An input vector <math>x</math> is used as input to all radial basis functions, each with different parameters. The output of the network is a linear combination of the outputs from radial basis functions.]]
 
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers <math>\mathbf{x} \in \mathbb{R}^n</math>. The output of the network is then a scalar function of the input vector, <math> \varphi : \mathbb{R}^n \to \mathbb{R} </math>, and is given by
 
:<math>\varphi(\mathbf{x}) = \sum_{i=1}^N a_i \rho(||\mathbf{x}-\mathbf{c}_i||)</math>
 
where <math>N</math> is the number of neurons in the hidden layer, <math>\mathbf c_i</math> is the center vector for neuron <math>i</math>, and <math>a_i</math> is the weight of neuron <math>i</math> in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The [[Norm (mathematics)|norm]] is typically taken to be the [[Euclidean distance]] (although the [[Mahalanobis distance]] appears to perform better with pattern recognition<ref>{{cite web
|last1=Beheim|first1=Larbi
|last2=Zitouni|first2=Adel
|last3=Belloir|first3=Fabien
|date=January 2004
|title=New RBF neural network classifier with optimized hidden neurons number
|url=https://www.researchgate.net/publication/254467552
}}</ref><ref>{{cite conference
|conference=Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society
|conference-url=https://ieeexplore.ieee.org/xpl/conhome/8844528/proceeding
|___location=Houston, TX, USA
|last1=Ibrikci|first1=Turgay
|last2=Brandt|first2=M.E.
|last3=Wang|first3=Guanyu
|last4=Acikkar|first4=Mustafa
|date=23–26 October 2002
|publication-date=6 January 2003
|volume=3
|pages=2184–5
|doi=10.1109/IEMBS.2002.1053230
|title=Mahalanobis distance with radial basis function network on protein secondary structures
|isbn=0-7803-7612-9
|issn=1094-687X
}}</ref>{{Editorializing|date=May 2020}}<!-- Was previously marked with a missing-citation tag asking in what sense using Mahalanobis distance is better and why the Euclidean distance is still normally used, but I found sources to support the first part, so it's likely salvageable. -->) and the radial basis function is commonly taken to be [[Normal distribution|Gaussian]]
 
:<math> \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) = \exp \left[ -\beta_i \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert ^2 \right] </math>.
 
The Gaussian basis functions are local to the center vector in the sense that
:<math>\lim_{||x|| \to \infty}\rho(\left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert) = 0</math>
i.e., changing the parameters of one neuron has only a small effect on input values that are far away from the center of that neuron.
 
Given certain mild conditions on the shape of the activation function, RBF networks are [[universal approximator]]s on a [[Compact space|compact]] subset of <math>\mathbb{R}^n</math>.<ref name="Park">{{cite journal|last=Park|first=J.|author2=I. W. Sandberg|s2cid=34868087|date=Summer 1991|title=Universal Approximation Using Radial-Basis-Function Networks|journal=Neural Computation|volume=3|issue=2|pages=246–257|doi=10.1162/neco.1991.3.2.246|pmid=31167308}}</ref> This means that an RBF network with enough hidden neurons can approximate any [[continuous function]] on a closed, bounded set with arbitrary precision.
 
The parameters <math> a_i </math>, <math> \mathbf{c}_i </math>, and <math> \beta_i </math> are determined in a manner that optimizes the fit between <math> \varphi </math> and the data.
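For illustration, a minimal [[NumPy]] sketch of this forward evaluation (the function and variable names here are illustrative, not taken from the sources cited above):

<syntaxhighlight lang="python">
import numpy as np

def rbf_net(x, centers, betas, weights):
    """Evaluate an unnormalized Gaussian RBF network at an input vector x.

    centers: (N, n) array of center vectors c_i
    betas:   (N,)   array of width parameters beta_i
    weights: (N,)   array of output weights a_i
    """
    sq_dist = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 for each hidden neuron
    rho = np.exp(-betas * sq_dist)                 # Gaussian activations rho(||x - c_i||)
    return np.dot(weights, rho)                    # linear output layer: sum_i a_i rho_i

# Example: N = 3 hidden neurons on 2-dimensional inputs
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
betas = np.array([5.0, 5.0, 5.0])
weights = np.array([0.5, -1.0, 2.0])
print(rbf_net(np.array([0.5, 0.5]), centers, betas, weights))
</syntaxhighlight>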
 
[[Image:Unnormalized radial basis functions.svg|thumb|250px|right|Figure 2: Two unnormalized radial basis functions in one input dimension. The basis function centers are located at <math> c_1=0.75 </math> and <math> c_2=3.25 </math>.
]]
 
===Normalization===
{{multiple images
| align = right
| direction = vertical
| width = 250
| image1 = Normalized radial basis functions.svg
| caption1 = Two normalized radial basis functions in one input dimension ([[logistic function|sigmoids]]). The basis function centers are located at <math> c_1=0.75 </math> and <math> c_2=3.25 </math>.
| image2 = 3 Normalized radial basis functions.svg
| caption2 = Three normalized radial basis functions in one input dimension. The additional basis function has center at <math> c_3=2.75 </math>.
| image3 = 4 Normalized radial basis functions.svg
| caption3 = Four normalized radial basis functions in one input dimension. The fourth basis function has center at <math> c_4=0 </math>. Note that the first basis function (dark blue) has become localized.
}}
 
====Normalized architecture====
In addition to the above ''unnormalized'' architecture, RBF networks can be ''normalized''. In this case the mapping is
:<math> \varphi ( \mathbf{x} ) \ \stackrel{\mathrm{def}}{=}\ \frac { \sum_{i=1}^N a_i \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } { \sum_{i=1}^N \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } = \sum_{i=1}^N a_i u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
where
 
:<math> u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) \ \stackrel{\mathrm{def}}{=}\ \frac { \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } { \sum_{j=1}^N \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_j \right \Vert \big ) } </math>
 
is known as a "normalized radial basis function".
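A corresponding sketch of the normalized evaluation (illustrative only; the small constant in the denominator is an added numerical safeguard, not part of the definition):

<syntaxhighlight lang="python">
import numpy as np

def rbf_net_normalized(x, centers, betas, weights):
    """Evaluate a normalized Gaussian RBF network at an input vector x."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    rho = np.exp(-betas * sq_dist)
    u = rho / (np.sum(rho) + 1e-12)   # normalized basis functions u_i = rho_i / sum_j rho_j
    return np.dot(weights, u)
</syntaxhighlight>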
 
 
====Theoretical motivation for normalization====
There is theoretical justification for this architecture in the case of stochastic data flow. Assume a [[stochastic kernel]] approximation for the joint probability density

:<math> P\left ( \mathbf{x} \land y \right ) = {1 \over N} \sum_{i=1}^N \, \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) \, \sigma \big ( \left \vert y - e_i \right \vert \big ) </math>

where the weights <math> \mathbf{c}_i </math> and <math> e_i </math> are exemplars from the data and we require the kernels to be normalized
:<math> \int \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) \, d^n\mathbf{x} =1</math>
and
:<math> \int \sigma \big ( \left \vert y - e_i \right \vert \big ) \, dy =1</math>.
 
The probability densities in the input and output spaces are
:<math> P \left ( \mathbf{x} \right ) = \int P \left ( \mathbf{x} \land y \right ) \, dy = {1 \over N} \sum_{i=1}^N \, \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>

and

:<math> P \left ( y \right ) = \int P \left ( \mathbf{x} \land y \right ) \, d^n \mathbf{x} = {1 \over N} \sum_{i=1}^N \, \sigma \big ( \left \vert y - e_i \right \vert \big ) </math>.

The expected value of <math>y</math> given an input <math> \mathbf{x} </math> is

:<math> \varphi \left ( \mathbf{x} \right ) \ \stackrel{\mathrm{def}}{=}\ E\left ( y \mid \mathbf{x} \right ) = \int y \, P \left ( y \mid \mathbf{x} \right ) \, dy </math>

where
:<math> P\left ( y \mid \mathbf{x} \right ) </math>
is the conditional probability of y given <math> \mathbf{x} </math>.
The conditional probability is related to the joint probability through [[Bayes' theorem]]
 
:<math> P\left ( y \mid \mathbf{x} \right ) = \frac {P \left ( \mathbf{x} \land y \right )} {P \left ( \mathbf{x} \right )} </math>
which yields

:<math> \varphi \left ( \mathbf{x} \right ) = \int y \, \frac {P \left ( \mathbf{x} \land y \right )} {P \left ( \mathbf{x} \right )} \, dy </math>.

This becomes

:<math> \varphi \left ( \mathbf{x} \right ) = \sum_{i=1}^N e_i u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
 
when the integrations are performed.
 
===Local linear models===
It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,
:<math> \varphi \left ( \mathbf{x} \right ) = \sum_{i=1}^N \left ( a_i + \mathbf{b}_i \cdot \left ( \mathbf{x} - \mathbf{c}_i \right ) \right )\rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
 
and
 
:<math> \varphi \left ( \mathbf{x} \right ) = \sum_{i=1}^N \left ( a_i + \mathbf{b}_i \cdot \left ( \mathbf{x} - \mathbf{c}_i \right ) \right )u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
in the unnormalized and normalized cases, respectively. Here <math> \mathbf{b}_i </math> are weights to be determined. Higher-order linear terms are also possible.

This result can be written

:<math> \varphi \left ( \mathbf{x} \right ) = \sum_{i=1}^{2N} \sum_{j=1}^n e_{ij} v_{ij} \big ( \mathbf{x} - \mathbf{c}_i \big ) </math>

where

:<math> e_{ij} = \begin{cases} a_i, & \mbox{if } i \in [1,N] \\ b_{ij}, & \mbox{if }i \in [N+1,2N] \end{cases} </math>

and
 
:<math> v_{ij}\big ( \mathbf{x} - \mathbf{c}_i \big ) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if } i \in [1,N] \\ \left ( x_{ij} - c_{ij} \right ) \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if }i \in [N+1,2N] \end{cases} </math>
 
in the unnormalized case and
 
:<math> v_{ij}\big ( \mathbf{x} - \mathbf{c}_i \big ) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if } i \in [1,N] \\ \left ( x_{ij} - c_{ij} \right ) u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if }i \in [N+1,2N] \end{cases} </math>
 
in the normalized case.
 
Here <math> \delta_{ij} </math> is a [[Kronecker delta function]] defined as
 
:<math> \delta_{ij} = \begin{cases} 1, & \mbox{if }i = j \\ 0, & \mbox{if }i \ne j \end{cases} </math>.
 
==Training==
 
RBF networks are typically trained from pairs of input and target values <math>\mathbf{x}(t), y(t)</math>, <math>t = 1, \dots, T</math> by a two-step algorithm.
 
In the first step, the center vectors <math>\mathbf c_i</math> of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined using [[k-means clustering]]. Note that this step is [[unsupervised learning|unsupervised]].
 
The second step simply fits a linear model with coefficients <math>w_i</math> to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function:
 
:<math> K( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T K_t( \mathbf{w} ) </math>
where
:<math> K_t( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ]^2 </math>.
We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.
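A minimal sketch of the two-step procedure in NumPy, choosing the centers by random sampling from the training inputs and fitting the linear weights by least squares (illustrative code; <code>numpy.linalg.lstsq</code> performs the minimization of <math>K(\mathbf{w})</math>):

<syntaxhighlight lang="python">
import numpy as np

def design_matrix(X, centers, beta):
    """Hidden-layer outputs: G[t, i] = exp(-beta * ||x(t) - c_i||^2)."""
    sq_dist = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-beta * sq_dist)

def train_rbf(X, y, num_centers=10, beta=5.0, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1 (unsupervised): choose centers, here by random sampling from the inputs;
    # k-means clustering could be used instead.
    centers = X[rng.choice(len(X), size=num_centers, replace=False)]
    # Step 2 (supervised): least-squares fit of the linear output weights,
    # minimizing K(w) = sum_t [y(t) - phi(x(t), w)]^2.
    G = design_matrix(X, centers, beta)
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    return centers, w

# Toy usage: fit y = sin(x) on [0, 2*pi]
X = np.linspace(0.0, 2.0 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
centers, w = train_rbf(X, y, num_centers=10, beta=2.0)
pred = design_matrix(X, centers, 2.0) @ w
print(np.sqrt(np.mean((pred - y) ** 2)))   # rms training error
</syntaxhighlight>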
 
There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as
 
:<math> H( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ K( \mathbf{w} ) + \lambda S( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T H_t( \mathbf{w} ) </math>
 
where
 
:<math> S( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T S_t( \mathbf{w} ) </math>
 
and
:<math> H_t( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ K_t( \mathbf{w} ) + \lambda S_t( \mathbf{w} ) </math>
 
where optimization of S maximizes smoothness and <math> \lambda </math> is known as a [[regularization (machine learning)|regularization]] parameter.
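If the smoothness penalty is taken to be the common ridge (Tikhonov) form <math>S(\mathbf{w}) = \Vert \mathbf{w} \Vert^2</math> (an assumption made here purely for illustration; other penalties are possible), the regularized weights still have a closed form:

<syntaxhighlight lang="python">
import numpy as np

def fit_weights_ridge(G, y, lam):
    """Minimize K(w) + lam * ||w||^2 for the linear output weights.

    G: (T, N) matrix of hidden-layer outputs, y: (T,) array of targets.
    Closed form: w = (G^T G + lam * I)^{-1} G^T y.
    """
    return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ y)
</syntaxhighlight>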
 
A third optional [[backpropagation]] step can be performed to fine-tune all of the RBF net's parameters.<ref name="schwenker">{{cite journal
|last1 = Schwenker
|first1 = Friedhelm
|last2 = Kestler
|first2 = Hans A.
|last3 = Palm
|first3 = Günther
|title = Three learning phases for radial-basis-function networks
|journal = Neural Networks
|volume = 14
|issue = 4–5
|pages = 439–458
|year = 2001
|citeseerx = 10.1.1.109.312
|doi=10.1016/s0893-6080(01)00027-2
|pmid = 11411631
}}</ref>
 
===Interpolation===
RBF networks can be used to interpolate a function <math>y: \mathbb{R}^n \to \mathbb{R}</math> when the values of that function are known on a finite number of points: <math>y(\mathbf x_i) = b_i, i=1, \ldots, N</math>. Taking the known points <math>\mathbf x_i</math> to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points <math>g_{ij} = \rho(|| \mathbf x_j - \mathbf x_i ||)</math>, the weights can be solved from the equation

:<math>\left[ \begin{matrix}
g_{11} & g_{12} & \cdots & g_{1N} \\
g_{21} & g_{22} & \cdots & g_{2N} \\
\vdots & & \ddots & \vdots \\
g_{N1} & g_{N2} & \cdots & g_{NN}
\end{matrix}\right] \left[ \begin{matrix}
w_1 \\
w_2 \\
\vdots \\
w_N
\end{matrix} \right] = \left[ \begin{matrix}
b_1 \\
b_2 \\
\vdots \\
b_N
\end{matrix} \right]</math>
 
It can be shown that the interpolation matrix in the above equation is non-singular if the points <math>\mathbf x_i</math> are distinct, and thus the weights <math>w</math> can be solved for by simple [[linear algebra]]:
:<math>\mathbf{w} = \mathbf{G}^{-1} \mathbf{b}</math>
where <math>G = (g_{ij})</math>.
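A sketch of exact interpolation with Gaussian basis functions (illustrative; a direct linear solve is used rather than forming <math>\mathbf{G}^{-1}</math> explicitly):

<syntaxhighlight lang="python">
import numpy as np

def interpolate_rbf(X, b, beta=1.0):
    """Return weights w so that the RBF expansion passes through every (x_i, b_i)."""
    # Interpolation matrix g_ij = rho(||x_i - x_j||), using each data point as a center
    sq_dist = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    G = np.exp(-beta * sq_dist)
    return np.linalg.solve(G, b)   # solves G w = b

X = np.array([[0.0], [1.0], [2.0], [3.0]])
b = np.array([0.0, 1.0, 0.0, -1.0])
w = interpolate_rbf(X, b)
G = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))
print(np.allclose(G @ w, b))   # the network reproduces the data exactly at the nodes
</syntaxhighlight>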
 
===Function approximation===
If the purpose is not to perform strict interpolation but instead more general [[function approximation]] or [[Statistical classification|classification]], the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases, first fixing the widths and centers and then the weights.
====Pseudoinverse solution for the linear weights====
 
After the centers <math>c_i</math> have been fixed, the weights that minimize the error at the output can be computed with a linear [[pseudoinverse]] solution:
:<math>\mathbf{w} = \mathbf{G}^+ \mathbf{b}</math>,
where the entries of ''G'' are the values of the radial basis functions evaluated at the points <math>x_i</math>: <math>g_{ji} = \rho(||x_j-c_i||)</math>.
 
The existence of this linear solution means that unlike [[multi-layer perceptron|Multi-Layer Perceptron (MLP) networks]], RBF networks have an explicit minimizer (when the centers are fixed).
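When the number of centers differs from the number of data points, <math>\mathbf{G}</math> is rectangular; a sketch using the Moore–Penrose pseudoinverse (illustrative code):

<syntaxhighlight lang="python">
import numpy as np

def fit_weights_pinv(X, b, centers, beta=1.0):
    """Least-squares output weights via the pseudoinverse: w = G^+ b."""
    # g_ji = rho(||x_j - c_i||): rows indexed by data points, columns by centers
    sq_dist = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    G = np.exp(-beta * sq_dist)
    return np.linalg.pinv(G) @ b
</syntaxhighlight>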
 
====Gradient descent training of the linear weights====
 
Another possible training algorithm is [[gradient descent]]. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found),
 
:<math> \mathbf{w}(t+1) = \mathbf{w}(t) - \nu \frac {d} {d\mathbf{w}} H_t(\mathbf{w}) </math>
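For the linear weights and the unregularized least-squares objective, the gradient of <math>K_t</math> with respect to <math>a_i</math> is <math>-2\,[y(t)-\varphi(\mathbf{x}(t),\mathbf{w})]\,\rho(\left\Vert\mathbf{x}(t)-\mathbf{c}_i\right\Vert)</math>, which gives the following sketch (illustrative; the factor of 2 is absorbed into the learning rate):

<syntaxhighlight lang="python">
import numpy as np

def sgd_linear_weights(X, y, centers, beta, nu=0.1, epochs=10):
    """One gradient step on K_t(w) per training pair, as the data streams in."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            rho = np.exp(-beta * np.sum((centers - x_t) ** 2, axis=1))
            error = y_t - np.dot(w, rho)   # y(t) - phi(x(t), w)
            w += nu * error * rho          # descent direction for the linear weights
    return w
</syntaxhighlight>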
 
==Examples==
 
===Logistic map===
 
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the [[logistic map]], which maps the [[unit interval]] onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore [[function approximation]], [[time series prediction]], and [[control theory]]. The map originated from the field of [[population dynamics]] and became the prototype for [[chaos theory|chaotic]] time series. The map, in the fully chaotic regime, is given by
 
:<math> x(t+1)\ \stackrel{\mathrm{def}}{=}\ f\left [ x(t)\right ] = 4 x(t) \left [ 1-x(t) \right ] </math>
where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.
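A short sketch that generates such a training series from the map (illustrative; the seed value 0.3 is an arbitrary point of the unit interval):

<syntaxhighlight lang="python">
import numpy as np

def logistic_series(x0=0.3, T=100):
    """Iterate x(t+1) = 4 x(t) (1 - x(t)) to produce T + 1 points in [0, 1]."""
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x

series = logistic_series()
inputs, targets = series[:-1], series[1:]   # training pairs (x(t), x(t+1)) for the inverse problem
</syntaxhighlight>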
 
Generation of the time series from this equation is the [[forward problem]]. The examples here illustrate the [[inverse problem]]: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate
 
:<math> x(t+1) = f \left [ x(t) \right ] \approx \varphi(t) = \varphi \left [ x(t)\right ] </math>
 
===Function approximation===
 
====Unnormalized radial basis functions====
 
The architecture is
:<math> \varphi ( \mathbf{x} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^N a_i \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
 
where
 
:<math> \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) = \exp \left[ -\beta_i \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert ^2 \right] = \exp \left[ -\beta_i \left ( x(t) - c_i \right ) ^2 \right] </math>.
 
Since the input is a [[Scalar (mathematics)|scalar]] rather than a [[Vector (geometric)|vector]], the input dimension is one. We choose the number of basis functions as N=5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The width <math> \beta_i </math> is taken to be a constant equal to 5 for all neurons. The centers <math> c_i </math> are five exemplars from the time series. The weights <math> a_i </math> are trained with projection operator training:
:<math> a_i (t+1) = a_i(t) + \nu \big [ x(t+1) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {\rho \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{i=1}^N \rho^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} </math>
 
where the [[learning rate]] <math> \nu </math> is taken to be 0.3. The training is performed with one pass through the 100 training points. The [[Mean squared error|rms error]] is 0.15.
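A sketch of this one-pass training on a logistic-map series (illustrative code; the seed point 0.3 is arbitrary, so the resulting rms error will differ somewhat from the value quoted above):

<syntaxhighlight lang="python">
import numpy as np

# Generate 101 points of the fully chaotic logistic map (100 training pairs)
series = np.empty(101)
series[0] = 0.3
for t in range(100):
    series[t + 1] = 4.0 * series[t] * (1.0 - series[t])

beta, nu = 5.0, 0.3
centers = series[:5]      # five exemplars from the time series serve as centers
a = np.zeros(5)

for t in range(100):      # one pass through the training data
    rho = np.exp(-beta * (series[t] - centers) ** 2)
    phi = np.dot(a, rho)
    a += nu * (series[t + 1] - phi) * rho / np.sum(rho ** 2)   # projection operator update

pred = np.array([np.dot(a, np.exp(-beta * (x - centers) ** 2)) for x in series[:-1]])
print(np.sqrt(np.mean((pred - series[1:]) ** 2)))   # rms error after one pass
</syntaxhighlight>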
 
[[Image:Normalized basis functions.png|thumb|350px|right|Figure 8: Normalized basis functions. The Logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set. Note the improvement over the unnormalized case.]]
====Normalized radial basis functions====

The normalized RBF architecture is

:<math> \varphi ( \mathbf{x} ) \ \stackrel{\mathrm{def}}{=}\ \frac { \sum_{i=1}^N a_i \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } { \sum_{i=1}^N \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } = \sum_{i=1}^N a_i u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>

with the same choice of centers, width, and training set as in the unnormalized case. The weights <math> a_i </math> are again trained with projection operator training:
:<math> a_i (t+1) = a_i(t) + \nu \big [ x(t+1) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {u \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{i=1}^N u^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} </math>
 
where the [[learning rate]] <math> \nu </math> is again taken to be 0.3. The training is performed with one pass through the 100 training points. The [[Mean squared error|rms error]] on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields accuracy improvement. Typically accuracy with normalized basis functions increases even more over unnormalized functions as input dimensionality increases.
 
[[File:Chaotic Time Series Prediction.svg|thumb|350px|right|Figure 9: Normalized basis functions. The Logistic map (blue) and the approximation to the logistic map (red) as a function of time. Note that the approximation is good for only a few time steps. This is a general characteristic of chaotic time series.]]
 
===Time series prediction===
Once the underlying dynamics of the time series have been estimated, as in the previous examples, the series can be predicted by iterating the estimate on its own output:
:<math> {x}(t+1) \approx \varphi(t)=\varphi [\varphi(t-1)]</math>.
 
A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with an exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.
 
Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series and a consequence of the sensitive dependence on initial conditions: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the [[Lyapunov exponent]].
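A sketch of the iterated prediction, continuing the variables <code>a</code>, <code>beta</code>, <code>centers</code> and <code>series</code> from the training sketch above (illustrative code):

<syntaxhighlight lang="python">
import numpy as np

def iterate_prediction(phi, x0, steps):
    """Predict a series by feeding the model its own output: x(t+1) ~ phi(x(t))."""
    xs = [x0]
    for _ in range(steps):
        xs.append(phi(xs[-1]))
    return np.array(xs)

phi = lambda x: np.dot(a, np.exp(-beta * (x - centers) ** 2))
predicted = iterate_prediction(phi, x0=series[0], steps=20)
actual = series[:21]
# The prediction tracks the true series for only a few steps before the trajectories diverge.
</syntaxhighlight>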
===Control of a chaotic time series===
 
[[File:060808 control of logistic map.svg|thumb|350px|right|Figure 10: Control of the logistic map. The system is allowed to evolve naturally for 49 time steps. At time 50 control is turned on. The desired trajectory for the time series is red. The system under control learns the underlying dynamics and drives the time series to the desired output. The architecture is the same as for the time series prediction example.]]
We assume the output of the logistic map can be manipulated through a control parameter <math> c[ x(t),t] </math> such that
 
:<math> x(t+1) = 4 x(t) [1-x(t)] + c[x(t),t] </math>.
 
The goal is to choose the control parameter in such a way as to drive the time series to a desired output <math> d(t) </math>. This can be done if we choose the control parameter to be
 
:<math> c[x(t),t] \ \stackrel{\mathrm{def}}{=}\ -\varphi [x(t)] + d(t+1) </math>
 
where
 
:<math> y[x(t)] \approx f[x(t)] = x(t+1)- c[x(t),t] </math>
is an approximation to the underlying natural dynamics of the system.
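A sketch of this control law applied to the logistic map, again continuing the approximation <code>phi</code> from the sketches above; the constant desired output <code>d</code> is an illustrative choice:

<syntaxhighlight lang="python">
import numpy as np

d = 0.6                    # desired output (illustrative constant target)
x = np.empty(100)
x[0] = 0.3
for t in range(99):
    # control term c[x(t), t] = -phi[x(t)] + d(t+1), switched on at time step 50
    c = (-phi(x[t]) + d) if t >= 50 else 0.0
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t]) + c
# After step 50 the series is driven close to d, up to the approximation error of phi.
</syntaxhighlight>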
 
==See also==
* [[Radial basis function kernel]]
* [[instance-based learning]]
* [[In Situ Adaptive Tabulation]]
* [[Predictive analytics]]
* [[Chaos theory]]
* [[Hierarchical RBF]]
* [[Cerebellar model articulation controller]]
* [[Instantaneously trained neural networks]]
* [[Support vector machine]]
 
==References==
{{reflist}}
==Further reading==
* J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281-294 (1989). Also see [https://web.archive.org/web/20070302175857/http://www.ki.inf.tu-dresden.de/~fritzke/FuzzyPaper/node5.html Radial basis function networks according to Moody and Darken]
* T. Poggio and F. Girosi, "[http://courses.cs.tamu.edu/rgutier/cpsc636_s10/poggio1990rbf2.pdf Networks for approximation and learning]," Proc. IEEE 78(9), 1484-1487 (1990).
* Roger D. Jones, Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, [https://ieeexplore.ieee.org/Xplore/home.jsp;jsessionid=1BAA8854614AFC21D2C29CDB4FC7DBEB Function approximation and time series prediction with neural networks], Proceedings of the International Joint Conference on Neural Networks, June 17–21, p.&nbsp;I-649 (1990).
* {{cite book | author=Martin D. Buhmann | title=Radial Basis Functions: Theory and Implementations | publisher=Cambridge University | year=2003 | isbn=0-521-63338-9}}
* {{cite book |author1=Yee, Paul V. |author2=Haykin, Simon |name-list-style=amp | title=Regularized Radial Basis Function Networks: Theory and Applications | publisher=John Wiley | year=2001 | isbn=0-471-35349-3}}
* {{cite book|first1=John R.|last1=Davies|first2=Stephen V.|last2=Coggeshall |author3-link=Roger Jones (physicist and entrepreneur)|first3=Roger D.|last3=Jones|first4= Daniel|last4=Schutzer|contribution=Intelligent Security Systems|editor1-last=Freedman|editor1-first=Roy S.|editor2-last= Flein|editor2-first= Robert A.|editor3-last= Lederman|editor3-first= Jess | title=Artificial Intelligence in the Capital Markets | ___location= Chicago | publisher=Irwin| year=1995 | isbn=1-55738-811-3}}
* {{cite book | author=Simon Haykin | title=Neural Networks: A Comprehensive Foundation | edition=2nd | ___location=Upper Saddle River, NJ | publisher=Prentice Hall| year=1999 | isbn=0-13-908385-5}}
* S. Chen, C. F. N. Cowan, and P. M. Grant, "[https://eprints.soton.ac.uk/251135/1/00080341.pdf Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks]", IEEE Transactions on Neural Networks, Vol 2, No 2 (Mar) 1991.
 
[[Category:Neural network architectures]]
[[Category:Interpolation]]
[[Category:Machine learning]]
[[Category:Computational statistics]]
[[Category:Classification algorithms]]
[[Category:Machine learning algorithms]]
[[Category:Regression analysis]]
[[Category:1988 in artificial intelligence]]