Aside from their empirical performance, activation functions also have different mathematical properties:
; Nonlinear: When the activation function is non-linear, a two-layer neural network can be proven to be a universal function approximator.<ref>{{Cite journal|author1-link=George Cybenko|last=Cybenko|first=G.|date=December 1989|title=Approximation by superpositions of a sigmoidal function|journal=Mathematics of Control, Signals, and Systems|language=en|volume=2|issue=4|pages=303–314|doi=10.1007/BF02551274|bibcode=1989MCSS....2..303C |s2cid=3958369|issn=0932-4194|url=https://hal.archives-ouvertes.fr/hal-03753170/file/Cybenko1989.pdf }}</ref> This is known as the [[Universal approximation theorem|Universal Approximation Theorem]]. The identity activation function does not satisfy this property: when multiple layers use the identity activation function, the entire network is equivalent to a single-layer model (as illustrated in the sketch after this list).
; Range: When the range of the activation function is finite, gradient-based training methods tend to be more stable, because pattern presentations significantly affect only a limited set of weights. When the range is infinite, training is generally more efficient because pattern presentations significantly affect most of the weights. In the latter case, smaller [[learning rate]]s are typically necessary.{{citation needed|date=January 2016}}
; Continuously differentiable: This property is desirable for enabling gradient-based optimization methods ([[Rectifier (neural networks)|ReLU]] is not continuously differentiable and has some issues with gradient-based optimization, but optimization is still possible in practice). The binary step activation function is not differentiable at 0, and its derivative is 0 for all other values, so gradient-based methods can make no progress with it.<ref>{{cite book|url={{google books |plainurl=y |id=0tFmf_UKl7oC}}|title=Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms|last=Snyman|first=Jan|date=3 March 2005|publisher=Springer Science & Business Media|isbn=978-0-387-24348-1}}</ref>
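A minimal [[NumPy]] sketch (illustrative only; the layer sizes and random weights are arbitrary choices, not taken from the cited sources) of how identity-activated layers collapse into a single linear layer, while a nonlinearity such as tanh does not:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))   # batch of 5 inputs with 4 features each
W1 = rng.normal(size=(4, 8))  # first-layer weights
W2 = rng.normal(size=(8, 3))  # second-layer weights

# With the identity activation, two layers collapse into one linear map W1 @ W2.
two_layer_identity = (x @ W1) @ W2
single_layer = x @ (W1 @ W2)
print(np.allclose(two_layer_identity, single_layer))  # True

# With a nonlinear activation (tanh), the composition is no longer that single linear map.
two_layer_tanh = np.tanh(x @ W1) @ W2
print(np.allclose(two_layer_tanh, single_layer))      # False in general
</syntaxhighlight>

Here <code>np.allclose</code> confirms that the two identity-activated layers compute exactly the same function as the single weight matrix <code>W1 @ W2</code>, which is why nonlinearity is needed for the universal approximation property.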
=== Other examples ===
Periodic functions can serve as activation functions. Usually the [[Sine wave|sinusoid]] is used, as any periodic function is decomposable into sinusoids by the [[Fourier transform]].<ref>{{Cite journal |
Quadratic activation maps <math>x \mapsto x^2</math>.<ref>{{Citation |last=Flake |first=Gary William |title=Square Unit Augmented Radially Extended Multilayer Perceptrons |date=1998 |work=Neural Networks: Tricks of the Trade |series=Lecture Notes in Computer Science |volume=1524 |pages=145–163 |editor-last=Orr |editor-first=Genevieve B. |url=https://link.springer.com/chapter/10.1007/3-540-49430-8_8 |access-date=2024-10-05 |place=Berlin, Heidelberg |publisher=Springer |language=en |doi=10.1007/3-540-49430-8_8 |isbn=978-3-540-49430-0 |editor2-last=Müller |editor2-first=Klaus-Robert}}</ref><ref>{{Cite journal |
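A brief NumPy sketch of these two element-wise activations (the function names <code>sinusoid</code> and <code>quadratic</code> are illustrative, not part of any standard library):

<syntaxhighlight lang="python">
import numpy as np

def sinusoid(z):
    # Periodic activation: sine applied element-wise.
    return np.sin(z)

def quadratic(z):
    # Quadratic activation: z -> z**2 applied element-wise.
    return np.square(z)

z = np.linspace(-3.0, 3.0, 7)
print(sinusoid(z))
print(quadratic(z))
</syntaxhighlight>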
=== Folding activation functions ===
| <math>C^\infty</math>
|-
|Exponential Linear Sigmoid SquasHing (ELiSH)<ref>{{Citation |
|
|<math>\begin{cases}
    \frac{x}{1+e^{-x}} & \text{if } x \geq 0\\
    \frac{e^x - 1}{1+e^{-x}} & \text{if } x < 0
\end{cases}</math>
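A short NumPy sketch of ELiSH as an element-wise function, assuming the piecewise definition above (the name <code>elish</code> is illustrative, not a standard library call):

<syntaxhighlight lang="python">
import numpy as np

def elish(z):
    # ELiSH: z * sigmoid(z) for z >= 0, (exp(z) - 1) * sigmoid(z) for z < 0.
    sig = 1.0 / (1.0 + np.exp(-z))
    return np.where(z >= 0, z * sig, (np.exp(z) - 1.0) * sig)

print(elish(np.array([-2.0, -0.5, 0.0, 1.0, 3.0])))
</syntaxhighlight>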
== Further reading ==
* {{cite
* {{cite journal |
{{Differentiable computing}}