Radial basis function network

i.e., changing the parameters of one neuron has only a small effect on input values that are far away from the center of that neuron.
 
Given certain mild conditions on the shape of the activation function, RBF networks are [[universal approximator]]s on a [[Compact space|compact]] subset of <math>\mathbb{R}^n</math>.<ref name="Park">{{cite journal|last=Park|first=J.|author2=I. W. Sandberg|date=Summer 1991|title=Universal Approximation Using Radial-Basis-Function Networks|url=http://cognet.mit.edu/journal/10.1162/neco.1991.3.2.246|journal=Neural Computation|volume=3|issue=2|pages=246–257|doi=10.1162/neco.1991.3.2.246|pmid=31167308|access-date=26 March 2013}}</ref> This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.
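
To make the form of such a network concrete, the following minimal sketch (using NumPy, assuming the common Gaussian choice <math>\rho(r) = \exp(-\beta r^2)</math> and one-dimensional inputs for brevity) evaluates the output <math>\varphi(\mathbf{x}) = \sum_i a_i \exp(-\beta_i \left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert^2)</math> for given parameters; the function and array names are illustrative only.

<syntaxhighlight lang="python">
import numpy as np

def rbf_network(x, centers, betas, weights):
    """Evaluate a Gaussian RBF network at the 1-D input points x.

    centers -- the c_i, betas -- the beta_i, weights -- the a_i
    (illustrative 1-D arrays, one entry per hidden neuron).
    """
    # Squared distance from every input to every center, shape (len(x), n_neurons).
    d2 = (x[:, None] - centers[None, :]) ** 2
    # Gaussian activations exp(-beta_i * (x - c_i)^2), weighted and summed per input.
    return np.exp(-betas[None, :] * d2) @ weights

# Example: a two-neuron network evaluated at three input points.
print(rbf_network(np.array([0.0, 1.0, 2.0]),
                  centers=np.array([0.0, 2.0]),
                  betas=np.array([1.0, 1.0]),
                  weights=np.array([0.5, -0.5])))
</syntaxhighlight>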
 
The parameters <math> a_i </math>, <math> \mathbf{c}_i </math>, and <math> \beta_i </math> are determined in a manner that optimizes the fit between <math> \varphi </math> and the data.
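
One common strategy, sketched below under assumed choices (Gaussian basis functions, centers <math>\mathbf{c}_i</math> placed on an even grid over the input interval, a single shared width <math>\beta</math>, and NumPy), is to fix the centers and widths and determine the output weights <math>a_i</math> by linear least squares, since <math>\varphi</math> is linear in those weights. The example fits samples of a continuous function on a compact interval, in line with the approximation property stated above.

<syntaxhighlight lang="python">
import numpy as np

# Training data sampled from a continuous target on a compact interval.
x_train = np.linspace(0.0, 2.0 * np.pi, 200)
y_train = np.sin(x_train)

# Fixed centers c_i on an even grid and a single shared width beta (illustrative choices).
n_hidden = 20
centers = np.linspace(0.0, 2.0 * np.pi, n_hidden)
beta = 4.0

def design_matrix(x, centers, beta):
    # Gaussian activations exp(-beta * (x - c_i)^2), shape (len(x), n_hidden).
    return np.exp(-beta * (x[:, None] - centers[None, :]) ** 2)

# With centers and widths fixed, phi is linear in the weights a_i,
# so the best-fit weights solve an ordinary least-squares problem.
G = design_matrix(x_train, centers, beta)
weights, *_ = np.linalg.lstsq(G, y_train, rcond=None)

# Check the fit on a denser grid over the same interval.
x_test = np.linspace(0.0, 2.0 * np.pi, 1000)
y_hat = design_matrix(x_test, centers, beta) @ weights
print("max abs error:", np.abs(y_hat - np.sin(x_test)).max())
</syntaxhighlight>

In practice the centers and widths can also be adapted, for example by clustering the input data or by gradient descent, at the cost of a non-linear optimization problem.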