<noinclude>
In mathematics, the '''universal approximation theorem''' states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of '''R'''<sup>''n''</sup>, under mild assumptions on the activation function.<ref>Kurt Hornik: Approximation Capabilities of Multilayer Feedforward Networks. Neural Networks, vol. 4, 1991.</ref><ref>Haykin, Simon (1998). ''Neural Networks: A Comprehensive Foundation'', 2nd ed., Prentice Hall. ISBN 0132733501.</ref> In mathematical terms:
<blockquote>
<i>Let φ(·) be a nonconstant, bounded, and monotone-increasing continuous function. Let ''I''<sub>''m''<sub>0</sub></sub> denote the ''m''<sub>0</sub>-dimensional unit hypercube [0,1]<sup>''m''<sub>0</sub></sup>. The space of continuous functions on ''I''<sub>''m''<sub>0</sub></sub> is denoted by ''C''(''I''<sub>''m''<sub>0</sub></sub>). Then, given any function ''f'' ∈ ''C''(''I''<sub>''m''<sub>0</sub></sub>) and ''ε'' > 0, there exist an integer ''m''<sub>1</sub> and sets of real constants ''α''<sub>''i''</sub>, ''b''<sub>''i''</sub> and ''w''<sub>''ij''</sub>, where ''i'' = 1, ..., ''m''<sub>1</sub> and ''j'' = 1, ..., ''m''<sub>0</sub>, such that we may define:</i>
<center>
<math>F(x_1, \ldots, x_{m_0}) = \sum_{i=1}^{m_1} \alpha_i \varphi\left( \sum_{j=1}^{m_0} w_{ij} x_j + b_i \right)</math>
</center>
<i>as an approximate realization of the function ''f''; that is,</i>
<center>
<math>\left| F(x_1, \ldots, x_{m_0}) - f(x_1, \ldots, x_{m_0}) \right| < \varepsilon</math>
</center>
<i>
for all ''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>''m''<sub>0</sub></sub> that lie in the input space.</i>
</blockquote>
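The construction in the theorem can be made concrete with a small sketch. The code below (not part of the article; the names <code>hidden_layer_approximation</code>, <code>m_bins</code>, and <code>steepness</code> are illustrative choices) builds a one-dimensional F(x) = Σ ''α''<sub>''i''</sub> φ(''w''<sub>''i''</sub>''x'' + ''b''<sub>''i''</sub>) by pairing steep sigmoids into near-indicator "bumps" over sub-intervals of [0,1], each weighted by the target function at the sub-interval's midpoint. This is one classical way to realize the theorem's guarantee, not the only one.

```python
import math

def sigmoid(z):
    """Numerically stable logistic sigmoid: a valid phi, being
    nonconstant, bounded, and monotone-increasing."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def hidden_layer_approximation(f, m_bins=50, steepness=400.0):
    """Return F(x) = sum_i alpha_i * sigmoid(w_i * x + b_i) approximating f on [0, 1].

    The difference of two steep sigmoids is close to the indicator of a
    sub-interval; weighting each such 'bump' by f at the sub-interval's
    midpoint yields a single-hidden-layer network with 2 * m_bins neurons.
    """
    edges = [i / m_bins for i in range(m_bins + 1)]
    centers = [(edges[i] + edges[i + 1]) / 2.0 for i in range(m_bins)]

    def F(x):
        return sum(
            f(c) * (sigmoid(steepness * (x - edges[i]))
                    - sigmoid(steepness * (x - edges[i + 1])))
            for i, c in enumerate(centers)
        )
    return F

# Approximate f(x) = sin(2*pi*x) and measure the worst error on a grid;
# the uniform error shrinks as m_bins grows, as the theorem promises.
target = lambda x: math.sin(2.0 * math.pi * x)
F = hidden_layer_approximation(target)
worst = max(abs(F(i / 100.0) - target(i / 100.0)) for i in range(101))
```

With 50 bins (100 hidden neurons) the worst grid error is already well below 0.1; increasing <code>m_bins</code> drives it toward any prescribed ''ε''.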