Universal approximation theorem

In the [[mathematics|mathematical]] theory of [[neural networks]], the '''universal approximation theorem''' states<ref>Balázs Csanád Csáji. Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary</ref> that the standard [[Multilayer_perceptron|multilayer]] [[feedforward neural network|feed-forward]] network with a single hidden layer, which contains a finite number of hidden [[neuron]]s, is a universal approximator among [[continuous functions]] on [[Compact_space|compact subsets]] of [[Euclidean space|'''R'''<sup>n</sup>]], under mild assumptions on the activation function.
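The statement can be illustrated numerically. The sketch below (not part of the article; all function names and parameter scales are illustrative choices) builds a single-hidden-layer network with sigmoid activations, fixes the hidden weights and biases at random, and fits only the output-layer weights by least squares to approximate a continuous target function on the compact interval [0, 1]:

```python
import numpy as np

# Illustrative sketch: approximate the continuous function f(x) = sin(2*pi*x)
# on the compact set [0, 1] with a single-hidden-layer sigmoid network
#   g(x) = sum_j c_j * sigmoid(w_j * x + b_j).
# The weight/bias scales and hidden-layer size below are assumptions chosen
# for this demo, not values prescribed by the theorem.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden = 50
w = rng.normal(scale=10.0, size=n_hidden)      # hidden-layer weights
b = rng.uniform(-10.0, 10.0, size=n_hidden)    # hidden-layer biases

x = np.linspace(0.0, 1.0, 200)
H = sigmoid(np.outer(x, w) + b)                # hidden activations, shape (200, 50)

f = np.sin(2.0 * np.pi * x)                    # continuous target function
c, *_ = np.linalg.lstsq(H, f, rcond=None)      # fit output-layer weights

err = np.max(np.abs(H @ c - f))                # sup-norm error on the grid
```

With enough hidden neurons the error can be driven arbitrarily small; the theorem guarantees existence of such an approximating network, not a particular training procedure for finding it.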
 
One of the first versions of the [[theorem]] was proved by [[George Cybenko]] in 1989 for [[sigmoid function|sigmoid]] activation functions.<ref name=cyb>Cybenko, G. (1989) [http://actcomm.dartmouth.edu/gvc/papers/approx_by_superposition.pdf "Approximation by superpositions of a sigmoidal function"], ''[[Mathematics of Control, Signals, and Systems]]'', 2 (4), 303–314</ref>