In the [[mathematics|mathematical]] theory of [[neural networks]], the '''universal approximation theorem''' states<ref>Balázs Csanád Csáji. Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary</ref> that a [[feedforward neural network|feed-forward]] network with a single hidden layer containing a finite number of [[neuron]]s, the simplest form of the [[multilayer perceptron]], is a universal approximator among [[continuous functions]] on [[Compact_space|compact subsets]] of [[Euclidean space|'''R'''<sup>n</sup>]], under mild assumptions on the activation function.
One of the first versions of the [[theorem]] was proved by [[George Cybenko]] in 1989 for [[sigmoid function|sigmoid]] activation functions.<ref name=cyb>Cybenko, G. (1989) "Approximation by Superpositions of a Sigmoidal Function", ''Mathematics of Control, Signals, and Systems'', 2(4), 303–314.</ref>
Kurt Hornik showed in 1991<ref name=horn> Kurt Hornik (1991) "Approximation Capabilities of Multilayer Feedforward Networks", ''Neural Networks'', 4(2), 251–257 </ref> that it is not the specific choice of the activation function, but rather the multilayer feedforward architecture itself which gives neural networks the potential of being universal approximators. The output units are always assumed to be linear. For notational convenience, only the single output case will be shown. The general case can easily be deduced from the single output case.
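The theorem can be illustrated numerically. The following sketch (not part of the article; all parameter choices are illustrative assumptions) builds a single-hidden-layer network of sigmoid units with a linear output unit and fits it to a continuous function on a compact interval. Here the hidden weights and biases are drawn at random and only the linear output weights are solved for by least squares; the theorem itself only asserts that ''some'' finite choice of parameters achieves any desired accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: a continuous function on the compact set [-pi, pi].
x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(x)

n_hidden = 50                               # finite number of hidden neurons
w = rng.normal(scale=4.0, size=n_hidden)    # input-to-hidden weights
c = rng.uniform(-np.pi, np.pi, n_hidden)    # sigmoid centers spread over the interval
b = -w * c                                  # biases so each unit activates near its center
H = sigmoid(np.outer(x, w) + b)             # hidden activations, shape (400, n_hidden)

# Linear output unit: solve for the output weights by least squares.
alpha, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ alpha

max_err = np.max(np.abs(y - y_hat))
print(f"max |f(x) - network(x)| on the grid: {max_err:.4f}")
```

Increasing the number of hidden units (and fitting all parameters rather than only the output layer) drives the worst-case error lower, in line with the theorem's guarantee of uniform approximation on compact sets.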