Universal approximation theorems are existence theorems: They simply state that there ''exists'' such a sequence <math>\phi_1, \phi_2, \dots \to f</math>, and do not provide any way to actually find such a sequence. They also do not guarantee that any particular method, such as [[backpropagation]], will actually find such a sequence. Any method for searching the space of neural networks, including backpropagation, might find a converging sequence, or it might not (for example, backpropagation might get stuck in a local optimum).
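The local-optimum caveat can be illustrated numerically. The following is a minimal sketch, not part of any theorem: the target function, network size, and hyperparameters are arbitrary choices for illustration (NumPy is assumed). It trains the same small network by plain gradient descent from several random initializations, which can converge to noticeably different final losses.

<syntaxhighlight lang="python">
import numpy as np

def train(seed, steps=5000, lr=0.1):
    """Fit a 4-unit tanh network to sin(3x) on [-2, 2] by gradient descent."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-2.0, 2.0, 64).reshape(-1, 1)
    y = np.sin(3.0 * x)                      # target function f (illustrative choice)
    W1 = rng.normal(size=(1, 4))             # hidden-layer weights, random init
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))             # output-layer weights, random init
    b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)             # hidden activations
        err = (h @ W2 + b2) - y              # prediction error
        # Gradients of 0.5 * mean squared error, by backpropagation
        gW2 = h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
        gW1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    final = np.tanh(x @ W1 + b1) @ W2 + b2
    return float(np.mean((final - y) ** 2))

# Different initializations can end at visibly different losses,
# i.e. gradient descent settles in different local optima.
for seed in range(5):
    print(f"seed {seed}: final MSE = {train(seed):.4f}")
</syntaxhighlight>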
Universal approximation theorems are limit theorems: They simply state that for any <math>f</math> and any criterion of closeness <math>\epsilon > 0</math>, if there are ''enough'' neurons in a neural network, then there exists a neural network with that many neurons that approximates <math>f</math> to within <math>\epsilon</math>. There is no guarantee that any given finite size, say, 10,000 neurons, is enough.
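Schematically (the precise hypotheses on the activation function, the function class, and the domain <math>K</math> vary from theorem to theorem), the statement has the form

<math display="block">\forall \epsilon > 0 \; \exists N \in \mathbb{N} \; \exists \phi \in \mathcal{F}_N : \sup_{x \in K} |f(x) - \phi(x)| < \epsilon,</math>

where <math>\mathcal{F}_N</math> denotes the networks with at most <math>N</math> neurons. The number <math>N</math> depends on both <math>f</math> and <math>\epsilon</math>, and no uniform finite bound on <math>N</math> is asserted.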
== Setup ==