=== Universal Approximation Theorem ===
 
In mathematics, the universal approximation theorem states<ref>Balázs Csanád Csáji. Approximation with Artificial Neural Networks. Faculty of Sciences, Eötvös Loránd University, Hungary.</ref> that standard multilayer feed-forward networks with a single hidden layer containing a finite number of hidden neurons, and with an arbitrary activation function, are universal approximators in <math>C(\mathbb{R}^m)</math>. Kurt Hornik (1991) showed that it is not the specific choice of the activation function, but rather the multilayer feed-forward architecture itself, that gives neural networks the potential of being universal approximators. The output units are always assumed to be linear. For notational convenience, the theorem is stated below only for the case of a single output unit; the general case follows easily from this one. In mathematical terms, the theorem<ref>G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2:303–314, 1989.</ref><ref>Kurt Hornik. Approximation Capabilities of Multilayer Feedforward Networks. Neural Networks, vol. 4, 1991.</ref> can be stated as follows.
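
In one standard formulation (with <math>\varphi</math> denoting the activation function of the hidden layer): Let <math>I_m = [0,1]^m</math> denote the <math>m</math>-dimensional unit hypercube and <math>C(I_m)</math> the space of real-valued continuous functions on <math>I_m</math>. If <math>\varphi</math> is nonconstant, bounded, and continuous, then for every <math>f \in C(I_m)</math> and every <math>\varepsilon > 0</math> there exist an integer <math>N</math>, real constants <math>v_i, b_i \in \mathbb{R}</math>, and vectors <math>w_i \in \mathbb{R}^m</math> (<math>i = 1, \ldots, N</math>) such that the function

:<math>F(x) = \sum_{i=1}^{N} v_i \, \varphi\left(w_i^{\mathsf{T}} x + b_i\right)</math>

satisfies <math>|F(x) - f(x)| < \varepsilon</math> for all <math>x \in I_m</math>. Equivalently, functions of this form are dense in <math>C(I_m)</math> with respect to the uniform norm.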