Rectified linear unit: Difference between revisions

Yoderj (talk | contribs)
Potential problems: Revise differentiability
m cleanup using AWB
Line 5:
 
where ''x'' is the input to a neuron. This is also known as a [[ramp function]] and is analogous to [[half-wave rectification]] in electrical engineering.
This [[activation function]] was first introduced in a dynamical network by Hahnloser et al. in a 2000 paper in ''Nature''<ref name="Hahnloser2000">{{cite journal |authors=R. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, H. S. Seung |title=Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit |journal=Nature |volume=405 |year=2000 |pages=947–951}}</ref> with strong [[biological]] motivations and mathematical justifications.<ref name="Hahnloser2001">{{cite conference |authors=R. Hahnloser, H. S. Seung |year=2001 |title=Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks |conference=NIPS 2001}}</ref> In 2011, it was first demonstrated to enable better training of deeper networks<ref name="glorot2011">{{cite conference |authors=Xavier Glorot, Antoine Bordes and [[Yoshua Bengio]] |year=2011 |title=Deep sparse rectifier neural networks |conference=AISTATS |url=http://jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf}}</ref> than the activation functions widely used before then, namely the [[Logistic function|logistic sigmoid]] (which is inspired by [[probability theory]]; see [[logistic regression]]) and its more practical<ref>{{cite encyclopedia |authors=[[Yann LeCun]], [[Leon Bottou]], Genevieve B. Orr and [[Klaus-Robert Müller]] |year=1998 |url=http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf |title=Efficient BackProp |editors=G. Orr and K. Müller |encyclopedia=Neural Networks: Tricks of the Trade |publisher=Springer}}</ref> counterpart, the [[hyperbolic tangent]]. The rectifier is, {{as of|2018|lc=y}}, the most popular activation function for [[deep learning|deep neural networks]].<ref>{{cite journal |first1=Yann |last1=LeCun |first2=Yoshua |last2=Bengio |first3=Geoffrey |last3=Hinton |title=Deep learning |journal=Nature |volume=521 |issue=7553 |year=2015 |pages=436–444 |doi=10.1038/nature14539 |pmid=26017442 |bibcode=2015Natur.521..436L}}</ref><ref>{{cite arXiv |last1=Ramachandran |first1=Prajit |last2=Barret |first2=Zoph |last3=Quoc |first3=V. Le |date=October 16, 2017 |title=Searching for Activation Functions |eprint=1710.05941 |class=cs.NE}}</ref>
 
A unit employing the rectifier is also called a '''rectified linear unit''' ('''ReLU''').<ref name="nair2010"/>
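A minimal NumPy sketch of the rectifier defined above (the function name <code>relu</code> and the example inputs are purely illustrative):

<syntaxhighlight lang="python">
import numpy as np

def relu(x):
    # The rectifier outputs the positive part of its argument: max(0, x).
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # [0.  0.  0.  1.5 3. ]
</syntaxhighlight>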
Line 59:
* Scale-invariant: <math>\max(0, ax) = a \max(0, x) \mbox{ for } a \geq 0</math>.
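A quick numerical check of the scale-invariance property (the input array and the scale factor below are arbitrary examples):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])  # arbitrary inputs
a = 2.5                                    # arbitrary non-negative scale

# max(0, a*x) coincides with a*max(0, x) because a >= 0 preserves the sign of x.
assert np.allclose(np.maximum(0, a * x), a * np.maximum(0, x))
</syntaxhighlight>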
 
Rectifying activation functions were used to separate specific excitation and unspecific inhibition in the Neural Abstraction Pyramid, which was trained in a supervised way to learn several computer vision tasks.<ref name=NeuralAbstractionPyramid>{{cite book|last=Behnke|first=Sven|year=2003|title=Hierarchical Neural Networks for Image Interpretation|url=https://www.researchgate.net/publication/220688219_Hierarchical_Neural_Networks_for_Image_Interpretation|series=Lecture Notes in Computer Science|volume=2766|publisher=Springer|doi=10.1007/b11963}}</ref>
In 2011,<ref name="glorot2011"/> the use of the rectifier as a non-linearity was shown to enable training deep [[Supervised learning|supervised]] neural networks without requiring [[Unsupervised learning|unsupervised]] pre-training.
Compared to the [[sigmoid function]] and similar activation functions, rectified linear units allow faster and more effective training of deep neural architectures on large and complex datasets.
Line 67:
* Non-zero centered
* Unbounded
* Dying ReLU problem: ReLU neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, so it becomes stuck in a perpetually inactive state and "dies". This is a form of the [[vanishing gradient problem]]. In some cases, large numbers of neurons in a network can become stuck in dead states, effectively decreasing the model capacity. The problem typically arises when the learning rate is set too high. It may be mitigated by using Leaky ReLUs instead, which assign a small positive slope to negative inputs.
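A minimal NumPy sketch of the Leaky ReLU variant mentioned above (the function names and the slope value <code>0.01</code> are illustrative choices rather than a fixed standard):

<syntaxhighlight lang="python">
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    # Unlike the plain rectifier, negative inputs yield a small non-zero
    # output (negative_slope * x) instead of exactly 0.
    return np.where(x > 0, x, negative_slope * x)

def leaky_relu_grad(x, negative_slope=0.01):
    # The local gradient is negative_slope rather than 0 for x <= 0,
    # so a unit pushed into the negative regime still receives updates.
    return np.where(x > 0, 1.0, negative_slope)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))       # [-0.02  0.    3.  ]
print(leaky_relu_grad(np.array([-2.0, 0.0, 3.0])))  # [0.01 0.01 1.  ]
</syntaxhighlight>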
 
==See also==