Universal approximation theorem

In the [[mathematics|mathematical]] theory of [[neural networks]], the '''universal approximation theorem''' states<ref>Balázs Csanád Csáji. Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary</ref> that a standard [[Multilayer_perceptron|multilayer]] [[feedforward neural network|feed-forward]] network with a single hidden layer containing a finite number of hidden [[neuron]]s is a universal approximator among continuous functions on [[Compact_space|compact subsets]] of [[Euclidean space|'''R'''<sup>n</sup>]], under mild assumptions on the activation function.
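In Cybenko's formulation, the approximating functions are finite sums of the form

```latex
G(x) = \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\mathsf{T}} x + \theta_i\right),
\qquad x \in [0,1]^n ,
```

where <math>\sigma</math> is the activation function, <math>w_i \in \mathbf{R}^n</math> are the hidden-layer weights, and <math>\alpha_i, \theta_i \in \mathbf{R}</math> are the output weights and biases; the theorem asserts that such sums are dense in the space of continuous functions on <math>[0,1]^n</math> with respect to the supremum norm.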
 
The [[theorem]] was proved by [[George Cybenko]] in 1989 for a [[sigmoid function|sigmoid]] activation function, and it is therefore also called the '''Cybenko theorem'''.<ref name=cyb>Cybenko, G. (1989) [http://actcomm.dartmouth.edu/gvc/papers/approx_by_superposition.pdf "Approximation by superpositions of a sigmoidal function"], ''[[Mathematics of Control, Signals, and Systems]]'', 2 (4), 303–314</ref>