The most common activation functions can be divided into three categories: [[ridge function]]s, [[radial function]]s and [[fold function]]s.
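The three categories can be illustrated with a minimal NumPy sketch (an illustration only; the input, weights and centre below are arbitrary): a ridge function acts on a linear combination of the inputs, a radial function on the distance from a centre, and a fold function aggregates over the whole input.

<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.5, -1.2, 3.0])           # example input vector (arbitrary)
w, b = np.array([0.2, 0.4, -0.1]), 0.3   # illustrative weights and bias
c = np.zeros(3)                          # illustrative centre for the radial case

ridge  = max(0.0, float(w @ x + b))           # ridge: ReLU of the linear combination w.x + b
radial = float(np.exp(-np.sum((x - c)**2)))   # radial: Gaussian of the squared distance to c
fold   = float(np.max(x))                     # fold: aggregation (here, max) over all inputs
</syntaxhighlight>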
An activation function <math>f</math> is '''saturating''' if <math>\lim_{|v|\to \infty} |\nabla f(v)| = 0</math>; it is '''nonsaturating''' otherwise. Non-saturating activation functions, such as [[ReLU]], may be better than saturating activation functions because they are less likely to suffer from the [[vanishing gradient problem]].
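A minimal numerical sketch of this distinction (an illustration, not drawn from the article's sources) compares the gradients of the saturating tanh and the non-saturating ReLU as <math>|v|</math> grows:

<syntaxhighlight lang="python">
import numpy as np

v = np.array([0.0, 2.0, 10.0, 100.0])

tanh_grad = 1.0 - np.tanh(v)**2       # tends to 0 as |v| grows: tanh saturates
relu_grad = (v > 0).astype(float)     # stays at 1 for every v > 0: ReLU does not

print(tanh_grad)   # [1.0, 0.0707, ~8e-9, ~0.0]
print(relu_grad)   # [0.0, 1.0, 1.0, 1.0]
</syntaxhighlight>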
=== Ridge activation functions ===
=== Quantum activation functions ===
{{Main|Quantum function}}
In [[quantum neural networks]] programmed on gate-model [[quantum computers]] and based on quantum perceptrons rather than variational quantum circuits, the non-linearity of the activation function can be implemented without measuring the output of each [[perceptron]] at each layer. Quantum properties loaded within the circuit, such as superposition, can be preserved by creating the [[Taylor series]] of the argument computed by the perceptron itself, with suitable quantum circuits computing the powers up to a desired approximation degree. Because of the flexibility of such quantum circuits, they can be designed to approximate any classical activation function.<ref>{{cite journal |doi=10.1007/s11128-022-03466-0 |issn=1570-0755 |title=Quantum activation functions for quantum neural networks |year=2022 |last1=Maronese |first1=Marco |last2=Destri |first2=Claudio |last3=Prati |first3=Enrico |journal=Quantum Information Processing |volume=21 |issue=4 |page=128 |arxiv=2201.03700}}</ref>
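The following is a classical sketch of the underlying idea only (a hypothetical NumPy illustration, not the quantum circuits of the cited paper): the activation function is replaced by a truncated Taylor polynomial, whose power terms correspond to the quantities such circuits would compute.

<syntaxhighlight lang="python">
import numpy as np

# Known Taylor coefficients of tanh about 0, up to degree 7:
# tanh(x) ~ x - x**3/3 + 2*x**5/15 - 17*x**7/315
coeffs = [0.0, 1.0, 0.0, -1/3, 0.0, 2/15, 0.0, -17/315]

def taylor_activation(x, degree=7):
    """Polynomial approximation of tanh, keeping terms up to `degree`."""
    return sum(c * x**k for k, c in enumerate(coeffs[:degree + 1]))

x = 0.5
print(taylor_activation(x), np.tanh(x))   # ~0.46208 vs ~0.46212
</syntaxhighlight>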
==See also==