{{short description|Type of artificial neural network that uses radial basis functions as activation functions}}
In the field of [[mathematical modeling]], a '''radial basis function network''' is an [[artificial neural network]] that uses [[radial basis function]]s as [[activation function]]s. The output of the network is a [[linear combination]] of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including [[function approximation]], [[time series prediction]], [[Statistical classification|classification]], and system [[Control theory|control]]. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the [[Royal Signals and Radar Establishment]].<ref>{{cite tech report
|last1 = Broomhead
|first1 = D. S.
|last2 = Lowe
|first2 = David
|year = 1988
|title = Radial basis functions, multi-variable functional interpolation and adaptive networks
|institution = [[Royal Signals and Radar Establishment|RSRE]]
|number = 4148
|url = http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA196234
|archive-url = https://web.archive.org/web/20130409223044/http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA196234
|url-status = dead
|archive-date = April 9, 2013
}}</ref><ref>{{cite journal
|last1 = Broomhead
|first1 = D. S.
|last2 = Lowe
|first2 = David
|year = 1988
|title = Multivariable functional interpolation and adaptive networks
|journal = Complex Systems
|volume = 2
|pages = 321–355
|url = https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf
|access-date = 2019-01-29
|archive-date = 2020-12-01
|archive-url = https://web.archive.org/web/20201201121028/https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf
|url-status = live
}}</ref><ref name="schwenker"/>
 
==Network architecture==
[[File:Rbf-network.svg|thumb|252x252px|Architecture of a radial basis function network. An input vector <math>x</math> is used as input to all radial basis functions, each with different parameters. The output of the network is a linear combination of the outputs from radial basis functions.]]
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers <math>\mathbf{x} \in \mathbb{R}^n</math>. The output of the network is then a scalar function of the input vector, <math> \varphi : \mathbb{R}^n \to \mathbb{R} </math>, and is given by
 
:<math>\varphi(\mathbf{x}) = \sum_{i=1}^N a_i \rho(||\mathbf{x}-\mathbf{c}_i||)</math>
 
where <math>N</math> is the number of neurons in the hidden layer, <math>\mathbf c_i</math> is the center vector for neuron <math>i</math>, and <math>a_i</math> is the weight of neuron <math>i</math> in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The [[Norm (mathematics)|norm]] is typically taken to be the [[Euclidean distance]] (although the [[Mahalanobis distance]] appears to perform better with pattern recognition<ref>{{cite web
|last1=Beheim|first1=Larbi
|last2=Zitouni|first2=Adel
|last3=Belloir|first3=Fabien
|date=January 2004
|title=New RBF neural network classifier with optimized hidden neurons number
|url=https://www.researchgate.net/publication/254467552
}}</ref><ref>{{cite conference
|conference=Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society
|conference-url=https://ieeexplore.ieee.org/xpl/conhome/8844528/proceeding
|___location=Houston, TX, USA
|last1=Ibrikci|first1=Turgay
|last2=Brandt|first2=M.E.
|last3=Wang|first3=Guanyu
|last4=Acikkar|first4=Mustafa
|date=23–26 October 2002
|publication-date=6 January 2003
|volume=3
|pages=2184–5
|doi=10.1109/IEMBS.2002.1053230
|title=Mahalanobis distance with radial basis function network on protein secondary structures
|isbn=0-7803-7612-9
|issn=1094-687X
}}</ref>) and the radial basis function is commonly taken to be [[Normal distribution|Gaussian]]
 
:<math> \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) = \exp \left[ -\beta_i \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert ^2 \right] </math>.
 
The Gaussian basis functions are local to the center vector in the sense that
 
:<math>\lim_{\left \Vert \mathbf{x} \right \Vert \to \infty}\rho(\left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert) = 0</math>
 
That is, changing the parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.

Other radial basis functions, such as the multiquadric <math>\rho(r) = \sqrt{r^2 + \beta^2}</math> for some <math>\beta > 0</math>, are also used.
 
Given certain mild conditions on the shape of the activation function, RBF networks are [[universal approximator]]s on a [[Compact space|compact]] subset of <math>\mathbb{R}^n</math>.<ref name="Park">{{cite journal|last=Park|first=J.|author2=I. W. Sandberg|s2cid=34868087|date=Summer 1991|title=Universal Approximation Using Radial-Basis-Function Networks|journal=Neural Computation|volume=3|issue=2|pages=246–257|doi=10.1162/neco.1991.3.2.246|pmid=31167308}}</ref> This means that an RBF network with enough hidden neurons can approximate any [[continuous function]] on a closed, bounded set with arbitrary precision.
 
The parameters <math> a_i </math>, <math> \mathbf{c}_i </math>, and <math> \beta_i </math> are determined in a manner that optimizes the fit between <math> \varphi </math> and the data.
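For illustration, the forward pass of a Gaussian RBF network can be written in a few lines of [[Python (programming language)|Python]] with [[NumPy]] (a sketch; the function and argument names are illustrative, and the normalized variant discussed below is included as a flag):

<syntaxhighlight lang="python">
import numpy as np

def rbf_forward(x, centers, betas, a, normalized=False):
    """Evaluate phi(x) = sum_i a_i * rho(||x - c_i||) at one input x.

    x: shape (n,); centers: shape (N, n); betas, a: shape (N,).
    """
    # rho_i = exp(-beta_i * ||x - c_i||^2), a Gaussian basis function
    rho = np.exp(-betas * np.sum((x - centers) ** 2, axis=1))
    if normalized:
        rho = rho / rho.sum()  # u_i, the normalized form discussed below
    return a @ rho             # linear combination in the output neuron
</syntaxhighlight>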
[[Image:060803 unnormalized radial basis functions.png|thumb|350px|right|Figure 2: Two unnormalized radial basis functions in one input dimension. The basis function centers are located at <math> c_1=0.75 </math> and <math> c_2=3.25 </math>.
 
[[Image:Unnormalized radial basis functions.svg|thumb|250px|right|Two unnormalized radial basis functions in one input dimension. The basis function centers are located at <math> c_1=0.75 </math> and <math> c_2=3.25 </math>.
]]
 
===Normalization===
{{multiple images
| align = right
| direction = vertical
| width = 250
| image1 = Normalized radial basis functions.svg
| caption1 = Two normalized radial basis functions in one input dimension ([[logistic function|sigmoids]]). The basis function centers are located at <math> c_1=0.75 </math> and <math> c_2=3.25 </math>.
| image2 = 3 Normalized radial basis functions.svg
| caption2 = Three normalized radial basis functions in one input dimension. The additional basis function has center at <math> c_3=2.75 </math>.
| image3 = 4 Normalized radial basis functions.svg
| caption3 = Four normalized radial basis functions in one input dimension. The fourth basis function has center at <math> c_4=0 </math>. Note that the first basis function (dark blue) has become localized.
}}
 
 
In addition to the above ''unnormalized'' architecture, RBF networks can be ''normalized''. In this case the mapping is
 
:<math> \varphi ( \mathbf{x} ) \ \stackrel{\mathrm{def}}{=}\ \frac { \sum_{i=1}^N a_i \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } { \sum_{i=1}^N \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } = \sum_{i=1}^N a_i u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
where
 
:<math> u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) \ \stackrel{\mathrm{def}}{=}\ \frac { \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } { \sum_{j=1}^N \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_j \right \Vert \big ) } </math>
 
is known as a "''normalized radial basis function''."
 
====Theoretical motivation for normalization====
There is theoretical justification for this architecture in the case of stochastic data flow. Assume a [[stochastic kernel]] approximation for the joint probability density
 
:<math> P\left ( \mathbf{x} \land y \right ) = {1 \over N} \sum_{i=1}^N \, \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) \, \sigma \big ( \left \vert y - e_i \right \vert \big )</math>
 
where the weights <math> \mathbf{c}_i </math> and <math> e_i </math> are exemplars from the data and we require the kernels to be normalized
:<math> \int \sigma \big ( \left \vert y - e_i \right \vert \big ) \, dy =1</math>.
 
The probability densities in the input and output spaces are
 
:<math> P \left ( \mathbf{x} \right ) = \int P \left ( \mathbf{x} \land y \right ) \, dy = {1 \over N} \sum_{i=1}^N \, \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big )</math>
 
and
 
:<math> P \left ( y \right ) = \int P \left ( \mathbf{x} \land y \right ) \, d^n \mathbf{x} = \sum_{i=1}^N \, \sigma \big ( \left \vert y - e_i \right \vert \big ) </math>
 
The expectation of y given an input <math> \mathbf{x} </math> is
:<math> \varphi \left ( \mathbf{x} \right ) \ \stackrel{\mathrm{def}}{=}\ E\left ( y \mid \mathbf{x} \right ) = \int y \, P \left ( y \mid \mathbf{x} \right ) \, dy </math>

where
:<math> P\left ( y \mid \mathbf{x} \right ) </math>
is the conditional probability of y given <math> \mathbf{x} </math>.
The conditional probability is related to the joint probability through [[Bayes' theorem]]
 
:<math> P\left ( y \mid \mathbf{x} \right ) = \frac {P \left ( \mathbf{x} \land y \right )} {P \left ( \mathbf{x} \right )} </math>
which yields

:<math> \varphi \left ( \mathbf{x} \right ) = \int y \, \frac {P \left ( \mathbf{x} \land y \right )} {P \left ( \mathbf{x} \right )} \, dy </math>.
This becomes
 
:<math> \varphi \left ( \mathbf{x} \right ) = \frac { \sum_{i=1}^N e_i \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } { \sum_{i=1}^N \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) } = \sum_{i=1}^N e_i u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
 
when the integrations are performed.
===Local linear models===

It is sometimes convenient to expand the architecture to include [[local linear model]]s. In that case the architectures become, to first order,
:<math> \varphi \left ( \mathbf{x} \right ) = \sum_{i=1}^N \left ( a_i + \mathbf{b}_i \cdot \left ( \mathbf{x} - \mathbf{c}_i \right ) \right )\rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
 
and
 
:<math> \varphi \left ( \mathbf{x} \right ) = \sum_{i=1}^N \left ( a_i + \mathbf{b}_i \cdot \left ( \mathbf{x} - \mathbf{c}_i \right ) \right )u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
in the unnormalized and normalized cases, respectively. Here <math> \mathbf{b}_i </math> are weights to be determined. Higher-order linear terms are also possible.

This result can be written

:<math> \varphi \left ( \mathbf{x} \right ) = \sum_{i=1}^{2N} \sum_{j=1}^n e_{ij} v_{ij} \big ( \mathbf{x} - \mathbf{c}_i \big ) </math>

where

:<math> e_{ij} = \begin{cases} a_i, & \mbox{if } i \in [1,N] \\ b_{ij}, & \mbox{if }i \in [N+1,2N] \end{cases} </math>
and
 
:<math> v_{ij}\big ( \mathbf{x} - \mathbf{c}_i \big ) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if } i \in [1,N] \\ \left ( x_{ij} - c_{ij} \right ) \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if }i \in [N+1,2N] \end{cases} </math>

in the unnormalized case and

:<math> v_{ij}\big ( \mathbf{x} - \mathbf{c}_i \big ) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if } i \in [1,N] \\ \left ( x_{ij} - c_{ij} \right ) u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) , & \mbox{if }i \in [N+1,2N] \end{cases} </math>

in the normalized case.

Here <math> \delta_{ij} </math> is a [[Kronecker delta function]] defined as

:<math> \delta_{ij} = \begin{cases} 1, & \mbox{if }i = j \\ 0, & \mbox{if }i \ne j \end{cases} </math>.

==Training==

RBF networks are typically trained from pairs of input and target values <math>\mathbf{x}(t), y(t)</math>, <math>t = 1, \dots, T</math>, by a two-step algorithm.
 
In the first step, the center vectors <math>\mathbf c_i</math> of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined using [[k-means clustering]]. Note that this step is [[unsupervised learning|unsupervised]].
 
The second step simply fits a linear model with coefficients <math>w_i</math> to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function:
 
:<math> K( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T K_t( \mathbf{w} ) </math>
where
:<math> K_t( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ]^2 </math>.
We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.
 
There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as
 
:<math> H( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ K( \mathbf{w} ) + \lambda S( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T H_t( \mathbf{w} ) </math>
 
where
 
:<math> S( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T S_t( \mathbf{w} ) </math>
 
and
:<math> H_t( \mathbf{w} ) \ \stackrel{\mathrm{def}}{=}\ K_t ( \mathbf{w} ) + \lambda S_t ( \mathbf{w} ) </math>
 
where optimization of S maximizes smoothness and <math> \lambda </math> is known as a [[regularization (machine learning)|regularization]] parameter.
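If, for instance, the smoothness term is taken to be the squared norm of the weights (one possible choice, not mandated by the general formulation above), the regularized least-squares problem for the linear weights has a closed-form solution, sketched here in Python with NumPy:

<syntaxhighlight lang="python">
import numpy as np

def fit_weights_regularized(G, y, lam):
    """Minimize H(w) = ||y - G w||^2 + lam * ||w||^2.

    G[t, i] = rho(||x(t) - c_i||) holds the hidden-layer outputs,
    y holds the targets y(t), and lam is the regularization parameter.
    """
    N = G.shape[1]
    # Normal equations of the regularized objective:
    # (G^T G + lam * I) w = G^T y
    return np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ y)
</syntaxhighlight>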
 
A third optional [[backpropagation]] step can be performed to fine-tune all of the RBF net's parameters.<ref name="schwenker">{{cite journal
|last1 = Schwenker
|first1 = Friedhelm
|last2 = Kestler
|first2 = Hans A.
|last3 = Palm
|first3 = Günther
|title = Three learning phases for radial-basis-function networks
|journal = Neural Networks
|volume = 14
|issue = 4–5
|pages = 439–458
|year = 2001
|citeseerx = 10.1.1.109.312
|doi=10.1016/s0893-6080(01)00027-2
|pmid = 11411631
}}</ref>
 
===Interpolation===
 
RBF networks can be used to interpolate a function <math>y: \mathbb{R}^n \to \mathbb{R}</math> when the values of that function are known on a finite number of points: <math>y(\mathbf x_i) = b_i</math>, <math>i=1, \ldots, N</math>. Taking the known points <math>\mathbf x_i</math> to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, <math>g_{ij} = \rho(|| \mathbf x_j - \mathbf x_i ||)</math>, the weights can be solved from the equation
:<math>\left[ \begin{matrix}
g_{11} & g_{12} & \cdots & g_{1N} \\
g_{21} & g_{22} & \cdots & g_{2N} \\
\vdots & & \ddots & \vdots \\
g_{N1} & g_{N2} & \cdots & g_{NN}
\end{matrix}\right] \left[ \begin{matrix}
w_1 \\
w_2 \\
\vdots \\
w_N
\end{matrix} \right] = \left[ \begin{matrix}
b_1 \\
b_2 \\
\vdots \\
b_N
\end{matrix} \right]</math>
 
It can be shown that the interpolation matrix in the above equation is non-singular if the points <math>\mathbf x_i</math> are distinct, and thus the weights <math>w</math> can be solved by simple [[linear algebra]]:
:<math>\mathbf{w} = \mathbf{G}^{-1} \mathbf{b}</math>
where <math>G = (g_{ij})</math>.
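A sketch of this interpolation solve, assuming a Gaussian basis function with a fixed width for concreteness (names and the default width are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def interpolate_rbf(x_pts, b, beta=1.0):
    """Solve G w = b with the centers placed at the known points.

    x_pts: shape (N, n) known points; b: shape (N,) known values.
    """
    # g_ij = rho(||x_j - x_i||), here with a Gaussian basis function
    d2 = np.sum((x_pts[:, None, :] - x_pts[None, :, :]) ** 2, axis=-1)
    G = np.exp(-beta * d2)
    # G is non-singular for distinct points, so the weights are
    return np.linalg.solve(G, b)
</syntaxhighlight>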
 
===Function approximation===
 
If the purpose is not to perform strict interpolation but instead more general [[function approximation]] or [[Statistical classification|classification]], the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases, first fixing the width and centers and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.
 
====Training the basis function centers====
 
Basis function centers can be randomly sampled among the input instances, obtained by the orthogonal least squares learning algorithm, or found by [[data clustering|clustering]] the samples and choosing the cluster means as the centers.
 
The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
 
====Pseudoinverse solution for the linear weights====
 
After the centers <math>c_i</math> have been fixed, the weights that minimize the error at the output can be computed with a linear [[pseudoinverse]] solution:
:<math>\mathbf{w} = \mathbf{G}^+ \mathbf{b}</math>,
where the entries of ''G'' are the values of the radial basis functions evaluated at the points <math>x_i</math>: <math>g_{ji} = \rho(||x_j-c_i||)</math>.
 
The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed).
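A sketch of the two-phase fit with fixed, randomly sampled centers (one of the options named above); the helper name and constants are illustrative:

<syntaxhighlight lang="python">
import numpy as np

def fit_rbf_pinv(X, y, num_centers, beta, seed=0):
    """Two-step fit: sample centers from the data, then solve for weights.

    X: shape (T, n) inputs; y: shape (T,) targets.
    """
    rng = np.random.default_rng(seed)
    # Step 1: choose centers among the input instances (unsupervised)
    centers = X[rng.choice(len(X), size=num_centers, replace=False)]
    # Step 2: hidden-layer outputs g_ji = rho(||x_j - c_i||) ...
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    G = np.exp(-beta * d2)
    # ... and the minimum-norm least-squares weights w = G^+ b
    w = np.linalg.pinv(G) @ y
    return centers, w
</syntaxhighlight>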
 
====Gradient descent training of the linear weights====
 
Another possible training algorithm is [[gradient descent]]. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found),
 
:<math> \mathbf{w}(t+1) = \mathbf{w}(t) - \nu \frac {d} {d\mathbf{w}} H_t(\mathbf{w}) </math>
where <math> \nu </math> is a "learning parameter."

For the case of training the linear weights, <math> a_i </math>, the algorithm becomes

:<math> a_i (t+1) = a_i(t) + \nu \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \rho \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big ) </math>

in the unnormalized case and

:<math> a_i (t+1) = a_i(t) + \nu \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] u \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big ) </math>

in the normalized case. For local-linear architectures gradient-descent training is
:<math> e_{ij} (t+1) = e_{ij}(t) + \nu \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] v_{ij} \big ( \mathbf{x}(t) - \mathbf{c}_i \big ) </math>
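An online version of this update for the linear weights <math> a_i </math> in the unnormalized case might look as follows (a sketch; one pass over the data, with an illustrative learning rate):

<syntaxhighlight lang="python">
import numpy as np

def train_gradient_descent(X, y, centers, beta, nu=0.1):
    """One pass of online gradient descent on the linear weights a_i."""
    a = np.zeros(len(centers))
    for x_t, y_t in zip(X, y):
        rho = np.exp(-beta * np.sum((x_t - centers) ** 2, axis=1))
        err = y_t - a @ rho   # y(t) - phi(x(t), w)
        a += nu * err * rho   # a_i += nu * err * rho_i
    return a
</syntaxhighlight>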
 
====Projection operator training of the linear weights====
 
For the case of training the linear weights, <math> a_i </math> and <math> e_{ij} </math>, the algorithm becomes
:<math> a_i (t+1) = a_i(t) + \nu \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {\rho \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{i=1}^N \rho^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} </math>

in the unnormalized case and

:<math> a_i (t+1) = a_i(t) + \nu \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {u \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{i=1}^N u^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} </math>

in the normalized case and

:<math> e_{ij} (t+1) = e_{ij}(t) + \nu \big [ y(t) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac { v_{ij} \big ( \mathbf{x}(t) - \mathbf{c}_i \big ) } { \sum_{i=1}^{2N} \sum_{j=1}^n v_{ij}^2 \big ( \mathbf{x}(t) - \mathbf{c}_i \big ) } </math>

in the local-linear case.
For one basis function, projection operator training reduces to [[Newton's method]].
 
[[Image:060731 logistic map time series 2.png|thumb|350px|right|Figure 6: Logistic map time series. Repeated iteration of the logistic map generates a chaotic time series. The values lie between zero and one. Displayed here are the 100 training points used to train the examples in this section. The weights c are the first five points from this time series. ]]
 
==Examples==
 
===Logistic map===
 
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the [[logistic map]], which maps the [[unit interval]] onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore [[function approximation]], [[time series prediction]], and [[control theory]]. The map originated from the field of [[population dynamics]] and became the prototype for [[chaos theory|chaotic]] time series. The map, in the fully chaotic regime, is given by
 
:<math> x(t+1)\ \stackrel{\mathrm{def}}{=}\ f\left [ x(t)\right ] = 4 x(t) \left [ 1-x(t) \right ] </math>
where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.
 
Generation of the time series from this equation is the [[forward problem]]. The examples here illustrate the [[inverse problem]]: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate
 
:<math> x(t+1) = f \left [ x(t) \right ] \approx \varphi(t) = \varphi \left [ x(t)\right ] </math>
 
===Function approximation===
 
====Unnormalized radial basis functions====
 
The architecture is
 
[[Image:060728b unnormalized basis function phi.png|thumb|350px|right|Figure 7: Unnormalized basis functions. The Logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set. ]]
:<math> \varphi ( \mathbf{x} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^N a_i \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>
 
where
 
:<math> \rho \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) = \exp \left[ -\beta \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert ^2 \right] = \exp \left[ -\beta \left ( x(t) - c_i \right ) ^2 \right] </math>.
 
Since the input is a [[Scalar (mathematics)|scalar]] rather than a [[Vector (geometric)|vector]], the input dimension is one. We choose the number of basis functions as N=5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight <math> \beta </math> is taken to be a constant equal to 5. The weights <math> c_i </math> are five exemplars from the time series. The weights <math> a_i </math> are trained with projection operator training:
 
:<math> a_i (t+1) = a_i(t) + \nu \big [ x(t+1) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {\rho \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{i=1}^N \rho^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} </math>
 
where the [[learning rate]] <math> \nu </math> is taken to be 0.3. The training is performed with one pass through the 100 training points. The [[Mean squared error|rms error]] is 0.15.
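The experiment can be reproduced with a short script along the following lines (a sketch: the random seed, and therefore the exemplars chosen and the exact error, are illustrative):

<syntaxhighlight lang="python">
import numpy as np

# Generate 100 training points from the fully chaotic logistic map
rng = np.random.default_rng(0)
x = np.empty(101)
x[0] = rng.uniform(0.0, 1.0)
for t in range(100):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

beta, nu = 5.0, 0.3
c = x[:5].copy()              # centers: five exemplars from the series
a = np.zeros(5)

# One pass of projection operator training of the weights a_i
for t in range(100):
    rho = np.exp(-beta * (x[t] - c) ** 2)
    err = x[t + 1] - a @ rho  # x(t+1) - phi(x(t))
    a += nu * err * rho / np.sum(rho ** 2)

phi = np.array([a @ np.exp(-beta * (xt - c) ** 2) for xt in x[:-1]])
print("rms error:", np.sqrt(np.mean((x[1:] - phi) ** 2)))
</syntaxhighlight>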
 
[[Image:060731c Normalized basis functions.png|thumb|350px|right|Figure 8: Normalized basis functions. The Logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set. Note the improvement over the unnormalized case. ]]
 
====Normalized radial basis functions====
The normalized RBF architecture is

:<math> \varphi ( \mathbf{x} ) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^N a_i u \big ( \left \Vert \mathbf{x} - \mathbf{c}_i \right \Vert \big ) </math>

where <math> u </math> is the normalized radial basis function defined above. The number of basis functions, the training set, the width <math> \beta </math>, and the centers <math> c_i </math> are chosen as in the unnormalized case. The weights <math> a_i </math> are trained with projection operator training:
:<math> a_i (t+1) = a_i(t) + \nu \big [ x(t+1) - \varphi \big ( \mathbf{x}(t), \mathbf{w} \big ) \big ] \frac {u \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} {\sum_{i=1}^N u^2 \big ( \left \Vert \mathbf{x}(t) - \mathbf{c}_i \right \Vert \big )} </math>
 
where the [[learning rate]] <math> \nu </math> is again taken to be 0.3. The training is performed with one pass through the 100 training points. The [[Mean squared error|rms error]] on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields accuracy improvement. Typically accuracy with normalized basis functions increases even more over unnormalized functions as input dimensionality increases.
 
[[File:Chaotic Time Series Prediction.svg|thumb|350px|right|Figure 9: Normalized basis functions. The Logistic map (blue) and the approximation to the logistic map (red) as a function of time. Note that the approximation is good for only a few time steps. This is a general characteristic of chaotic time series.]]
 
===Time series prediction===
 
Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:
 
:<math> \varphi(0) = x(1)</math>
:<math> {x}(t) \approx \varphi(t-1)</math>
:<math> {x}(t+1) \approx \varphi(t)=\varphi [\varphi(t-1)]</math>.
 
A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with an exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.
 
Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series and reflects their sensitive dependence on initial conditions: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the [[Lyapunov exponent]].
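Continuing the sketch from the function approximation example, the iterated prediction reads:

<syntaxhighlight lang="python">
# Iterate the fitted map: phi(0) = x(1), then feed phi back into itself
pred = np.empty(20)
pred[0] = x[1]
for t in range(1, 20):
    pred[t] = a @ np.exp(-beta * (pred[t - 1] - c) ** 2)
# pred[t] approximates x(t+1); the two diverge after a few steps
</syntaxhighlight>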
===Control of a chaotic time series===
 
[[File:060808 control of logistic map.svg|thumb|350px|right|Figure 10: Control of the logistic map. The system is allowed to evolve naturally for 49 time steps. At time 50 control is turned on. The desired trajectory for the time series is red. The system under control learns the underlying dynamics and drives the time series to the desired output. The architecture is the same as for the time series prediction example.]]
We assume the output of the logistic map can be manipulated through a control parameter <math> c[ x(t),t] </math> such that
 
:<math> x(t+1) = 4 x(t) [1-x(t)] + c[x(t),t] </math>.
 
The goal is to choose the control parameter in such a way as to drive the time series to a desired output <math> d(t) </math>. This can be done if we choose the control parameter to be
 
:<math> c[x(t),t] \ \stackrel{\mathrm{def}}{=}\ -\varphi [x(t)] + d(t+1) </math>
 
where
 
:<math> \varphi [x(t)] \approx f[x(t)] = x(t+1) - c[x(t),t] </math>
 
is an approximation to the underlying natural dynamics of the system.
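A sketch of the resulting control loop, reusing the trained approximation <math> \varphi </math> from the examples above (the desired trajectory <math> d </math> here is an illustrative constant):

<syntaxhighlight lang="python">
def phi(v):
    return a @ np.exp(-beta * (v - c) ** 2)

d = 0.5 * np.ones(100)              # desired output, here a constant level
s = x[0]
for t in range(99):
    ctrl = -phi(s) + d[t + 1]       # c[x(t), t] = -phi[x(t)] + d(t+1)
    s = 4.0 * s * (1.0 - s) + ctrl  # controlled logistic map
</syntaxhighlight>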
 
==See also==
* [[Radial basis function kernel]]
* [[instance-based learning]]
* [[In Situ Adaptive Tabulation]]
* [[Predictive analytics]]
* [[Chaos theory]]
* [[Hierarchical RBF]]
* [[Cerebellar model articulation controller]]
* [[Instantaneously trained neural networks]]
* [[Support vector machine]]
 
==External links==
*[http://www-bd.fnal.gov/icalepcs/abstracts/PDF/th1ab.pdf Model Predictive Control with radial basis functions]
*[http://www.mib.sk/Handbook%20of%20Neural%20Computation/NCG2_7.pdf Control of a negative ion source]
 
==References==
{{reflist}}
 
==Further reading==
* J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281–294 (1989). Also see [https://web.archive.org/web/20070302175857/http://www.ki.inf.tu-dresden.de/~fritzke/FuzzyPaper/node5.html Radial basis function networks according to Moody and Darken]
* T. Poggio and F. Girosi, "[http://courses.cs.tamu.edu/rgutier/cpsc636_s10/poggio1990rbf2.pdf Networks for approximation and learning]," Proc. IEEE 78(9), 1484–1487 (1990).
* Roger D. Jones, Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, [https://ieeexplore.ieee.org/Xplore/home.jsp;jsessionid=1BAA8854614AFC21D2C29CDB4FC7DBEB Function approximation and time series prediction with neural networks], Proceedings of the International Joint Conference on Neural Networks, June 17–21, p.&nbsp;I-649 (1990).
* {{cite book | author=Martin D. Buhmann | title=Radial Basis Functions: Theory and Implementations | publisher= Cambridge University| year=2003 | isbn=0-521-63338-9}}
* {{cite book |author1=Yee, Paul V. |author2=Haykin, Simon |name-list-style=amp | title=Regularized Radial Basis Function Networks: Theory and Applications | publisher= John Wiley| year=2001 | isbn=0-471-35349-3}}
* {{cite book|first1=John R.|last1=Davies|first2=Stephen V.|last2=Coggeshall |author3-link=Roger Jones (physicist and entrepreneur)|first3=Roger D.|last3=Jones|first4= Daniel|last4=Schutzer|contribution=Intelligent Security Systems|editor1-last=Freedman|editor1-first=Roy S.|editor2-last= Flein|editor2-first= Robert A.|editor3-last= Lederman|editor3-first= Jess | title=Artificial Intelligence in the Capital Markets | ___location= Chicago | publisher=Irwin| year=1995 | isbn=1-55738-811-3}}
* {{cite book | author=Simon Haykin | title=Neural Networks: A Comprehensive Foundation | edition=2nd | ___location=Upper Saddle River, NJ | publisher=Prentice Hall| year=1999 | isbn=0-13-908385-5}}
* S. Chen, C. F. N. Cowan, and P. M. Grant, "[https://eprints.soton.ac.uk/251135/1/00080341.pdf Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks]", IEEE Transactions on Neural Networks, Vol 2, No 2 (Mar) 1991.
 
[[Category:Neural network architectures]]
[[Category:Computational statistics]]
[[Category:Classification algorithms]]
[[Category:Machine learning algorithms]]
[[Category:Regression analysis]]
[[Category:1988 in artificial intelligence]]