Talk:Universal approximation theorem

{{WikiProject banner shell|class=C|
{{WikiProject Mathematics|priority=Low}}
}}
== Deep variants ==
 
::::::::: OK, sounds good. [[User:Koertefa|''<span style="color:#2F4F4F">'''K'''<span style="color:Teal">œrte</span>'''F'''</span><span style="color:Teal">a</span>'']] [[User talk:Koertefa#top|<span style="color:#2F4F4F">'''{'''<i style="color:Teal">ταλκ</i>'''}'''</span>]] 19:22, 6 July 2020 (UTC)
 
== Too scientific? Not understandable? ==
 
No, I think the article is just fine. At the very least, do not abbreviate it. Perhaps this text could be moved toward the end of the article, and a more elementary explanation could be written in the introduction. [[Special:Contributions/139.14.20.177|139.14.20.177]] ([[User talk:139.14.20.177|talk]]) 11:54, 25 April 2024 (UTC)

== Arbitrary Width Case ==
I am wondering if the theorem presented here is correct. In http://www2.math.technion.ac.il/~pinkus/papers/acta.pdf, the neural network has only one node in the output layer (so it maps to R). Here the number of output nodes is arbitrary. Is this correct?
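
:For illustration, here is a minimal numpy sketch (the function names and widths are hypothetical) of why a vector-valued statement reduces to Pinkus's scalar case: each output coordinate can be approximated by its own scalar network, and stacking those networks yields a single one-hidden-layer network whose hidden width is the sum of the individual widths.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical sketch: a one-hidden-layer network with m output nodes is
# equivalent to m scalar-output networks (Pinkus's setting) placed side
# by side, so the scalar theorem applies coordinatewise.

def scalar_net(x, A, b, c):
    """One hidden layer, a single output node: x -> c . sigma(A x + b)."""
    return np.tanh(A @ x + b) @ c

def stack(params):
    """Combine m scalar networks into one network mapping into R^m.

    The stacked network's hidden weights are the individual A's
    concatenated row-wise; its output weights form a block-diagonal
    matrix, so the result is still a single one-hidden-layer network.
    """
    def net(x):
        return np.array([scalar_net(x, A, b, c) for (A, b, c) in params])
    return net

# Example: two scalar networks (hidden widths 3 and 5) on inputs in R^4
rng = np.random.default_rng(0)
params = [(rng.normal(size=(k, 4)), rng.normal(size=k), rng.normal(size=k))
          for k in (3, 5)]
f = stack(params)
print(f(rng.normal(size=4)))  # a point in R^2
</syntaxhighlight>

:If each scalar network approximates its target coordinate within ε in the sup norm, the stacked network approximates the vector-valued target within ε in every coordinate, so stating the theorem with an arbitrary number of output nodes should not weaken it.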