|doi=10.1109/IEMBS.2002.1053230
|title=Mahalanobis distance with radial basis function network on protein secondary structures
|isbn=0-7803-7612-9
|issn=1094-687X
}}</ref>{{Editorializing|date=May 2020}}<!-- Was previously marked with a missing-citation tag asking in what sense using Mahalanobis distance is better and why the Euclidean distance is still normally used, but I found sources to support the first part, so it's likely salvageable. -->) and the radial basis function is commonly taken to be [[Normal distribution|Gaussian]]
i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.
Given certain mild conditions on the shape of the activation function, RBF networks are [[universal approximator]]s on a [[Compact space|compact]] subset of <math>\mathbb{R}^n</math>.<ref name="Park">{{cite journal|last=Park|first=J.|author2=I. W. Sandberg|s2cid=34868087|date=Summer 1991|title=Universal Approximation Using Radial-Basis-Function Networks|journal=Neural Computation|volume=3|issue=2|pages=246–257|doi=10.1162/neco.1991.3.2.246|pmid=31167308}}</ref> This means that an RBF network with enough hidden neurons can approximate any [[continuous function]] on a closed, bounded set with arbitrary precision.
The parameters <math> a_i </math>, <math> \mathbf{c}_i </math>, and <math> \beta_i </math> are determined in a manner that optimizes the fit between <math> \varphi </math> and the data.
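A common strategy for determining these parameters is to fix the centers <math>\mathbf{c}_i</math> and widths <math>\beta_i</math> and then fit the output weights <math>a_i</math> by linear least squares. The following sketch illustrates this with a Gaussian RBF network on hypothetical toy data (the target function, center placement, and width are arbitrary choices for illustration):

```python
import numpy as np

def rbf_network(x, centers, beta, weights):
    """Evaluate a Gaussian RBF network at inputs x (shape [n, d])."""
    # Pairwise squared distances between inputs and centers
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-beta * d2)   # Gaussian basis activations
    return phi @ weights       # weighted sum with coefficients a_i

# Hypothetical training data: samples of a smooth target function
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * x[:, 0])

# Fix centers on a grid and the width beta, then solve for the
# output weights a_i by linear least squares
centers = np.linspace(-1, 1, 10)[:, None]
beta = 4.0
d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
phi = np.exp(-beta * d2)
weights, *_ = np.linalg.lstsq(phi, y, rcond=None)

pred = rbf_network(x, centers, beta, weights)
```

Because the model is linear in <math>a_i</math> once the basis functions are fixed, this step reduces to an ordinary least-squares problem.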
]]
===
{{multiple images
| align = right
:<math> P\left ( y \mid \mathbf{x} \right ) </math>
is the conditional probability of <math> y </math> given <math> \mathbf{x} </math>.
The conditional probability is related to the joint probability through [[Bayes' theorem]]
:<math> P\left ( y \mid \mathbf{x} \right ) = \frac {P \left ( \mathbf{x} \land y \right )} {P \left ( \mathbf{x} \right )} </math>
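The identity above can be checked on a toy discrete dataset by estimating both probabilities from empirical counts (the data here are purely hypothetical):

```python
# Illustration of P(y | x) = P(x and y) / P(x) using empirical
# counts over a toy discrete dataset (hypothetical values).
from collections import Counter

pairs = [("a", 0), ("a", 1), ("a", 1), ("b", 0), ("b", 0), ("b", 1)]
joint = Counter(pairs)            # counts estimating P(x and y)
marg_x = Counter(x for x, _ in pairs)  # counts estimating P(x)

def p_y_given_x(y, x):
    # The sample size cancels, so the ratio of counts equals the
    # ratio of empirical probabilities
    return joint[(x, y)] / marg_x[x]

p = p_y_given_x(1, "a")   # (2/6) / (3/6) = 2/3
```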
\end{matrix} \right]</math>
It can be shown that the interpolation matrix in the above equation is non-singular if the points <math>\mathbf x_i</math> are distinct, and thus the weights <math>\mathbf{w}</math> can be solved for by simple [[linear algebra]]:
:<math>\mathbf{w} = \mathbf{G}^{-1} \mathbf{b}</math>
where <math>\mathbf{G} = (g_{ij})</math>.
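As a sketch, the interpolation system can be assembled and solved numerically; in practice one solves the linear system directly rather than forming <math>\mathbf{G}^{-1}</math> explicitly. The interpolation points, target values, and Gaussian width below are hypothetical choices for illustration:

```python
import numpy as np

# Exact RBF interpolation: one Gaussian basis function per data
# point, solving G w = b for the weights (names G, b, w follow the
# text; the width beta is an arbitrary illustrative choice).
xs = np.linspace(0.0, 1.0, 8)          # distinct interpolation points
b = np.cos(2 * np.pi * xs)             # target values at those points

beta = 20.0
# Interpolation matrix g_ij = g(|x_i - x_j|)
G = np.exp(-beta * (xs[:, None] - xs[None, :]) ** 2)

# Solving the system is cheaper and numerically safer than
# computing the matrix inverse
w = np.linalg.solve(G, b)

# The interpolant reproduces the data exactly at the points x_i
fit = G @ w
```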
===Logistic map===
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the [[logistic map]], which maps the [[unit interval]] onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore [[function approximation]], [[time series prediction]], and [[control theory]]. The map originated from the field of [[population dynamics]] and became the prototype for [[chaos theory|chaotic]] time series. The map, in the fully chaotic regime, is given by
:<math> x(t+1)\ \stackrel{\mathrm{def}}{=}\ f\left [ x(t)\right ] = 4 x(t) \left [ 1-x(t) \right ] </math>
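A data stream for the experiments that follow can be generated by iterating this map from an arbitrary seed in the unit interval (the seed and series length below are illustrative choices):

```python
# Generate a time series from the fully chaotic logistic map
# x(t+1) = 4 x(t) (1 - x(t)); the seed x0 is arbitrary in (0, 1)
def logistic_series(x0=0.3, n=500):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

series = logistic_series()
```

Since the map sends the unit interval onto itself, every value of the series stays in <math>[0, 1]</math>.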
* [[Cerebellar model articulation controller]]
* [[Instantaneously trained neural networks]]
* [[Support vector machine]]
==References==
[[Category:Machine learning algorithms]]
[[Category:Regression analysis]]
[[Category:1988 in artificial intelligence]]