In mathematics, the '''universal approximation theorem''' states<ref>Balázs Csanád Csáji. Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary</ref> that a standard [[Multilayer_perceptron|multilayer]] [[feedforward neural network|feed-forward]] network with a single hidden layer containing a finite number of hidden [[neuron]]s is, under mild assumptions on the activation function, a universal approximator of continuous functions on compact subsets of [[Euclidean space|<math>\mathbb{R}^n</math>]].
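Concretely (in notation equivalent to Cybenko's original statement, up to renaming of symbols), such a network computes a finite sum of the form
:<math>G(x) = \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\mathsf T} x + b_i\right),</math>
where <math>N</math> is the number of hidden neurons, <math>\alpha_i, b_i \in \mathbb{R}</math> and <math>w_i \in \mathbb{R}^n</math> are parameters, and <math>\sigma</math> is the activation function; the theorem asserts that sums of this form are dense in the space of continuous functions on a given compact subset of <math>\mathbb{R}^n</math>.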
 
The [[theorem]] was first proved by [[George Cybenko]] in 1989 for [[sigmoid function|sigmoid]] activation functions, and it is therefore also called the '''Cybenko theorem'''.