{{Technical|date=July 2023}}
In the [[mathematics|mathematical]] theory of [[artificial neural networks]], '''universal approximation theorems''' are theorems<ref name="MLP-UA">{{cite journal |last1=Hornik |first1=Kurt |last2=Stinchcombe |first2=Maxwell |last3=White |first3=Halbert |title=Multilayer feedforward networks are universal approximators |journal=Neural Networks |date=January 1989 |volume=2 |issue=5 |pages=359–366 |doi=10.1016/0893-6080(89)90020-8 }}</ref><ref>Balázs Csanád Csáji (2001) Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary</ref> of the following form: Given a family of neural networks, for each function <math>f</math> from a certain [[function space]], there exists a sequence of neural networks <math>\phi_1, \phi_2, \dots</math> from the family, such that <math>\phi_n \to f</math> according to some criterion.
The most popular version states that [[feedforward neural network|feedforward networks]] with non-polynomial activation functions are dense in the space of continuous functions between two [[Euclidean space]]s, with respect to the [[compact convergence]] topology.
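For instance, a feedforward network with a single hidden layer of <math>k</math> units and activation function <math>\sigma</math> computes functions of the form
<math display="block">\phi(x) = \sum_{i=1}^{k} a_i \, \sigma(w_i \cdot x + b_i),</math>
and this version of the theorem asserts that, provided <math>\sigma</math> is not a polynomial, such sums can approximate any continuous function uniformly on compact subsets of the domain as <math>k</math> grows.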