In the [[mathematics|mathematical]] theory of [[artificial neural networks]], '''universal approximation theorems''' are theorems<ref name="MLP-UA">{{cite journal |last1=Hornik |first1=Kurt |last2=Stinchcombe |first2=Maxwell |last3=White |first3=Halbert |title=Multilayer feedforward networks are universal approximators |journal=Neural Networks |date=January 1989 |volume=2 |issue=5 |pages=359–366 |doi=10.1016/0893-6080(89)90020-8 }}</ref><ref>Balázs Csanád Csáji (2001) Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary</ref> of the following form: Given a family of neural networks, for each function <math>f</math> from a certain [[function space]], there exists a sequence of neural networks <math>\phi_1, \phi_2, \dots</math> from the family, such that <math>\phi_n \to f</math> according to some criterion. That is, the family of neural networks is [[Dense set|dense]] in the function space.
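For example, in the classical setting the criterion is [[uniform convergence]] on compact sets: for every compact set <math>K \subseteq \mathbb{R}^n</math> and every <math>\varepsilon > 0</math>, some network <math>\phi_n</math> in the sequence satisfies
<math display="block">\sup_{x \in K} \|\phi_n(x) - f(x)\| < \varepsilon.</math>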
The most popular version states that [[feedforward neural network|feedforward networks]] with non-polynomial [[activation function]]s are dense in the space of [[continuous function]]s between two [[Euclidean space]]s, with respect to the [[compact convergence]] topology.
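In this arbitrary-width case, the approximating networks can be taken to have a single hidden layer, i.e. to be functions of the form
<math display="block">\phi(x) = \sum_{i=1}^{N} c_i\, \sigma(w_i \cdot x + b_i),</math>
where <math>\sigma</math> is the non-polynomial activation function, <math>w_i, b_i, c_i</math> are parameters, and the width <math>N</math> may grow with the accuracy required.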
Universal approximation theorems are existence theorems: they simply state that such a sequence <math>\phi_1, \phi_2, \dots \to f</math> ''exists'', and do not provide any way to actually construct it. They also do not guarantee that any particular method, such as [[backpropagation]], will actually find such a sequence. Any method for searching the space of neural networks, including backpropagation, may or may not find a converging sequence (for example, backpropagation might get stuck in a local optimum).