Radial basis function network
That is, changing the parameters of one neuron has only a small effect on the output for input values that are far from that neuron's center.
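This locality can be checked directly: perturbing a single Gaussian neuron's weight changes the network output by an amount that decays rapidly with distance from that neuron's center. A minimal sketch (the center, width, and perturbation size below are illustrative choices, not values from the article):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 11)   # sample input points
center, beta = 0.0, 1.0          # illustrative neuron center and width

# Change in the network output when this one neuron's weight a_i is
# perturbed by delta: |delta * exp(-beta * (x - c_i)^2)|
delta = 0.5
change = np.abs(delta * np.exp(-beta * (x - center) ** 2))

# The change equals |delta| at the neuron's center and is negligible
# at inputs far from it.
```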
 
Given certain mild conditions on the shape of the activation function, RBF networks are [[universal approximator]]s on a [[Compact space|compact]] subset of <math>\mathbb{R}^n</math>.<ref name="Park">{{cite journal|last=Park|first=J.|author2=I. W. Sandberg|date=Summer 1991|title=Universal Approximation Using Radial-Basis-Function Networks|journal=Neural Computation|volume=3|issue=2|pages=246–257|doi=10.1162/neco.1991.3.2.246|accessdate=26 March 2013}}</ref> This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.
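As a numerical illustration of this property (a sketch only, not part of the cited proof; the target function, center placement, fixed width, and linear least-squares fit of the output weights are illustrative choices), the maximum approximation error shrinks as hidden neurons are added:

```python
import numpy as np

def rbf_design(x, centers, beta):
    # Gaussian basis matrix G[j, i] = exp(-beta * (x_j - c_i)^2)
    return np.exp(-beta * (x[:, None] - centers[None, :]) ** 2)

x = np.linspace(0.0, np.pi, 400)
y = np.sin(x)  # a continuous target on a compact interval

errors = []
for n in (3, 6, 12, 24):
    centers = np.linspace(0.0, np.pi, n)
    G = rbf_design(x, centers, beta=8.0)
    # with centers and width fixed, the weights a_i are linear in the
    # model, so they can be fit by ordinary least squares
    a, *_ = np.linalg.lstsq(G, y, rcond=None)
    errors.append(np.abs(G @ a - y).max())

# errors[k] drops sharply as the number of hidden neurons grows
```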
 
The parameters <math> a_i </math>, <math> \mathbf{c}_i </math>, and <math> \beta_i </math> are determined in a manner that optimizes the fit between <math> \varphi </math> and the data.
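How this optimization is carried out varies; one simple sketch (illustrative only: 1-D input, Gaussian basis functions, a toy quadratic target, and full-batch gradient descent with a fixed step size) jointly adjusts all three parameter sets to reduce the mean squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 100)
y = x ** 2  # toy target to fit

n = 5
a = 0.1 * rng.standard_normal(n)   # output weights a_i
c = np.linspace(-1.0, 1.0, n)      # centers c_i
beta = np.full(n, 4.0)             # widths beta_i

def forward(a, c, beta):
    G = np.exp(-beta[None, :] * (x[:, None] - c[None, :]) ** 2)
    return G, G @ a                # hidden activations, network output

_, phi = forward(a, c, beta)
mse_init = np.mean((phi - y) ** 2)

lr = 0.05
for _ in range(3000):
    G, phi = forward(a, c, beta)
    r = phi - y                    # residuals
    diff = x[:, None] - c[None, :]
    # gradients of the mean squared error w.r.t. each parameter set
    grad_a = G.T @ r / len(x)
    grad_c = 2.0 * a * beta * np.sum(r[:, None] * G * diff, axis=0) / len(x)
    grad_beta = -a * np.sum(r[:, None] * G * diff ** 2, axis=0) / len(x)
    a -= lr * grad_a
    c -= lr * grad_c
    beta -= lr * grad_beta

_, phi = forward(a, c, beta)
mse = np.mean((phi - y) ** 2)      # much smaller than mse_init
```

In practice the centers <math> \mathbf{c}_i </math> are often chosen first (for example by unsupervised clustering of the inputs), after which the output weights <math> a_i </math> are linear in the model and can be solved by least squares.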