In the [[mathematics|mathematical]] theory of [[neural networks]], the '''universal approximation theorem''' states<ref>Balázs Csanád Csáji. Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary</ref> that a standard [[Multilayer_perceptron|multilayer]] [[feedforward neural network|feed-forward]] network with a single hidden layer containing a finite number of hidden [[neuron]]s is a universal approximator of [[continuous function]]s on compact subsets of [[Euclidean space|'''R'''<sup>n</sup>]], under mild assumptions on the activation function.
The [[theorem]] was proved by [[George Cybenko]] in 1989 for [[sigmoid function|sigmoid]] activation functions; it is therefore also called the '''Cybenko theorem'''.<ref name=cyb>Cybenko, G. (1989) [http://actcomm.dartmouth.edu/gvc/papers/approx_by_superposition.pdf "Approximations by superpositions of sigmoidal functions"], ''[[Mathematics of Control, Signals, and Systems]]'', 2 (4), 303–314</ref>
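In Cybenko's formulation, the approximants are finite sums of sigmoidal ridge functions. Writing <math>\sigma</math> for the sigmoidal activation (the symbols <math>N</math>, <math>\alpha_j</math>, <math>w_j</math>, and <math>\theta_j</math> below denote, respectively, the number of hidden neurons, the output weights, the input weight vectors, and the biases), the theorem asserts that functions of the form

:<math>G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(w_j^{\mathsf T} x + \theta_j\right)</math>

are dense in the space of continuous functions on the unit cube <math>[0,1]^n</math> with respect to the supremum norm; that is, for any continuous target <math>f</math> and any <math>\varepsilon > 0</math>, some such <math>G</math> satisfies <math>|G(x) - f(x)| < \varepsilon</math> for all <math>x</math> in the cube.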