In 2018, Guliyev and Ismailov<ref name="guliyev1">{{Cite journal |last1=Guliyev |first1=Namig |last2=Ismailov |first2=Vugar |date=November 2018 |title=Approximation capability of two hidden layer feedforward neural networks with fixed weights |journal=Neurocomputing |volume=316 |pages=262–269 |arxiv=2101.09181 |doi=10.1016/j.neucom.2018.07.075 |s2cid=52285996}}</ref> constructed a smooth sigmoidal activation function providing the universal approximation property for two hidden layer feedforward neural networks with fewer units in the hidden layers. In 2018, they also constructed<ref name="guliyev2">{{Cite journal|last1=Guliyev|first1=Namig|last2=Ismailov|first2=Vugar|date=February 2018|title=On the approximation by single hidden layer feedforward neural networks with fixed weights|journal=Neural Networks|volume=98| pages=296–304|doi=10.1016/j.neunet.2017.12.007|pmid=29301110 |arxiv=1708.06219 |s2cid=4932839 }}</ref> single hidden layer networks with bounded width that are still universal approximators for univariate functions. However, this construction does not extend to multivariable functions.
In 2022, Shen ''et al.''<ref name=shen22>{{cite journal |last1=Shen |first1=Zuowei |last2=Yang |first2=Haizhao |last3=Zhang |first3=Shijun |date=January 2022 |title=Optimal approximation rate of ReLU networks in terms of width and depth |journal=Journal de Mathématiques Pures et Appliquées |volume=157 |pages=101–135 |arxiv=2103.00502 |doi=10.1016/j.matpur.2021.07.009 |s2cid=232075797}}</ref> obtained precise quantitative information on the depth and width required to approximate a target function by deep and wide ReLU neural networks.
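The basic mechanism behind such quantitative results is that a one-hidden-layer ReLU network computes exactly a piecewise-linear function, so it can reproduce the linear interpolant of a target function on any grid. The following is a minimal illustrative sketch of that idea (a generic hand-built interpolant, not the Shen ''et al.'' construction; all names are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pwl_relu_net(f, grid):
    """Return g(x) = b + sum_i c_i * relu(x - t_i), the piecewise-linear
    interpolant of f on the given grid, realized as a one-hidden-layer
    ReLU network with one unit per grid knot."""
    y = f(grid)
    slopes = np.diff(y) / np.diff(grid)
    # Each ReLU coefficient is the change in slope at its knot.
    c = np.concatenate(([slopes[0]], np.diff(slopes)))
    t = grid[:-1]
    b = y[0]
    return lambda x: b + relu(np.subtract.outer(x, t)) @ c

f = np.sin
grid = np.linspace(0.0, np.pi, 17)   # 16 linear pieces
g = pwl_relu_net(f, grid)
x = np.linspace(0.0, np.pi, 1001)
err = np.max(np.abs(g(x) - f(x)))
print(err)  # small (below 1e-2 for this grid); shrinks as the grid refines
```

Refining the grid reduces the error at the rate of piecewise-linear interpolation; the depth/width trade-offs studied by Shen ''et al.'' concern how much more efficiently deep networks can achieve a given error.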
=== Quantitative bounds ===
== Arbitrary-depth case ==
The "dual" versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem for the arbitrary-depth case was proved by Zhou Lu et al. in 2017.<ref name=ZhouLu /> They showed that networks of width ''n'' + 4 with [[ReLU]] activation functions can approximate any [[Lebesgue integration|Lebesgue-integrable function]] on ''n''-dimensional input space with respect to [[L1 distance|<math>L^1</math> distance]] if the network depth is allowed to grow. They also showed that this general expressive power is lost when the width is at most ''n''. In the same paper<ref name=ZhouLu /> it was shown that [[ReLU]] networks of width ''n'' + 1 suffice to approximate any [[continuous function|continuous]] function of ''n''-dimensional input variables.<ref
{{math theorem
Remark: If the activation is replaced by leaky-ReLU, and the input is restricted to a compact ___domain, then the exact minimum width is<ref name=":1" /> <math>d_m = \max\{n, m, 2\}</math>.
''Quantitative refinement:'' In the case where <math>f:[0, 1]^n \rightarrow \mathbb{R}</math> (i.e., <math>m = 1</math>) and <math>\sigma</math> is the [[Rectifier (neural networks)|ReLU activation function]], the exact depth and width for a ReLU network to achieve <math>\varepsilon</math> error are also known.<ref
}}
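The minimum-width bound above is simple enough to evaluate directly; a small sketch for concreteness (the function name is illustrative):

```python
# Exact minimum width for leaky-ReLU networks on a compact ___domain,
# per the bound above: d_m = max{n, m, 2} for f : R^n -> R^m.
def min_width(n: int, m: int) -> int:
    return max(n, m, 2)

print(min_width(3, 1))  # → 3 (scalar-valued function of three variables)
print(min_width(1, 1))  # → 2 (the bound never drops below 2)
```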