random neural networks,<ref>{{Cite journal|doi=10.1109/72.737488|title=Function approximation with spiked random networks|year=1999|last1=Gelenbe|first1=Erol|last2=Mao|first2=Zhi Hong|last3=Li|first3=Yan D.|journal=IEEE Transactions on Neural Networks|volume=10|issue=1|pages=3–9|pmid=18252498|url=https://zenodo.org/record/6817275}}</ref> and alternative network architectures and topologies.<ref name="kidger" /><ref>{{Cite conference|last1=Lin|first1=Hongzhou|last2=Jegelka|first2=Stefanie|date=2018|title=ResNet with one-neuron hidden layers is a Universal Approximator|url=https://papers.nips.cc/paper/7855-resnet-with-one-neuron-hidden-layers-is-a-universal-approximator|publisher=Curran Associates|pages=6169–6178|journal=Advances in Neural Information Processing Systems|volume=30}}</ref>
In 2023, it was shown that a three-layer neural network can represent any multivariate function, whether ''continuous'' or ''discontinuous''.<ref>{{cite journal |last1=Ismailov |first1=Vugar E. |title=A three layer neural network can represent any multivariate function |journal=Journal of Mathematical Analysis and Applications |date=July 2023 |volume=523 |issue=1 |pages=127096 |doi=10.1016/j.jmaa.2023.127096 |arxiv=2012.03016 |s2cid=265100963 }}</ref>
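Representation results of this kind are closely related to the [[Kolmogorov–Arnold representation theorem]]. As an illustrative sketch of the connection (the construction in the cited paper differs in its details, in particular in covering discontinuous target functions), every continuous function <math>f\colon [0,1]^n \to \mathbb{R}</math> can be written as a superposition of continuous univariate functions:

<math display=block>f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right).</math>

Read as a network, the inner sums <math>\sum_{p=1}^{n} \phi_{q,p}(x_p)</math> form the first layer of units, the functions <math>\Phi_q</math> the second, and the outer sum the output, giving a three-layer structure with <math>2n+1</math> units in the second layer.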
== Arbitrary-width case ==