{{Short description|Piecewise function that clamps its input to be non-negative}}
{{Refimprove|date=January 2017}}
[[File:Ramp function.svg|thumb|325px|[[Graph of a function|Graph]] of the ramp function]]
The '''ramp function''' is a [[unary function|unary]] [[real function]] whose [[graph of a function|graph]] is shaped like a [[ramp]]. It can be expressed by numerous [[#Definitions|definitions]], for example "0 for negative inputs, output equals input for non-negative inputs". The term "ramp" can also be used for other functions obtained by [[scaling and shifting]]; the function in this article is the ''unit'' ramp function (slope 1, starting at 0).
In mathematics, the ramp function is also known as the [[positive part]].
In [[machine learning]], it is commonly known as a [[Rectifier_(neural_networks)|ReLU]] [[activation function]]<ref name='brownlee'>{{cite web |last1=Brownlee |first1=Jason |title=A Gentle Introduction to the Rectified Linear Unit (ReLU) |url=https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/ |website=Machine Learning Mastery |access-date=8 April 2021 |date=8 January 2019}}</ref><ref name="medium-relu">{{cite web |last1=Liu |first1=Danqing |title=A Practical Guide to ReLU |url=https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7 |website=Medium |access-date=8 April 2021 |language=en |date=30 November 2017}}</ref> or a [[Rectifier (neural networks)|rectifier]] in analogy to [[half-wave rectification]] in [[electrical engineering]]. In [[statistics]] (when used as a [[likelihood function]]) it is known as a [[tobit model]].
This function has numerous [[#Applications|applications]] in mathematics and engineering, and goes by various names, depending on the context. There are [[Rectifier_(neural_networks)#Other_non-linear_variants|differentiable variants]] of the ramp function.
== Definitions ==
The ramp function ({{math|''R''(''x'') : ℝ → ℝ<sub>0</sub><sup>+</sup>}}) may be defined analytically in several ways. Possible definitions are:
* A [[piecewise function]]: <math display="block">R(x) := \begin{cases}
x, & x \ge 0; \\
0, & x < 0
\end{cases}</math>
* Using the [[Iverson bracket]] notation: <math display="block">R(x) := x \cdot [x \geq 0]</math> or <math display="block">R(x) := x \cdot [x > 0]</math>
* The [[Maxima and minima|max function]]: <math display="block">R(x) := \max(x, 0)</math>
* The [[arithmetic mean|mean]] of an [[independent variable]] and its [[absolute value]] (a straight line with unity gradient and its modulus): <math display="block">R(x) := \frac{x+|x|}{2}</math> This can be derived by noting the following definition of {{math|max(''a'', ''b'')}}, <math display="block"> \max(a,b) = \frac{a + b + |a - b|}{2} </math> for which {{math|1=''a'' = ''x''}} and {{math|1=''b'' = 0}}.
* The [[Heaviside step function]] multiplied by a straight line with unity gradient: <math display="block">R\left( x \right) := x H(x)</math>
* The [[convolution]] of the Heaviside step function with itself: <math display="block">R\left( x \right) := H(x) * H(x)</math>
* The [[integral]] of the Heaviside step function:<ref>{{MathWorld|title=Ramp Function|id=RampFunction}}</ref> <math display="block">R(x) := \int_{-\infty}^{x} H(\xi)\,d\xi</math>
* [[Macaulay brackets]]: <math display="block">R(x) := \langle x \rangle</math>
* The [[positive part]] of the [[identity function]]: <math display="block">R := \operatorname{id}^+</math>
* As a limit function: <math display="block">R(x) := \lim_{a \to \infty} \frac{\ln\left(1 + e^{a x}\right)}{a}.</math> The ramp function can be approximated as closely as desired by choosing a sufficiently large value {{math|''a'' > 0}}.
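These definitions can be cross-checked numerically. The following script is an illustrative sketch only (the variable names and the choice {{math|1=''a'' = 100}} in the limit definition are arbitrary); it evaluates several of the definitions above with NumPy and confirms that they agree on sample points.

<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-3.0, 3.0, 7)                     # sample points, including 0

ramp_max   = np.maximum(x, 0.0)                   # max(x, 0)
ramp_mean  = (x + np.abs(x)) / 2.0                # (x + |x|) / 2
ramp_step  = x * np.heaviside(x, 1.0)             # x * H(x); the value assigned to H(0) does not affect the product
ramp_limit = np.log1p(np.exp(100.0 * x)) / 100.0  # smooth approximation with a = 100

assert np.allclose(ramp_max, ramp_mean)
assert np.allclose(ramp_max, ramp_step)
assert np.allclose(ramp_max, ramp_limit, atol=1e-2)  # the approximation tightens as a grows
print(ramp_max)  # [0. 0. 0. 0. 1. 2. 3.]
</syntaxhighlight>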
== Applications ==
== Analytic properties ==

=== Non-negativity ===
Throughout its whole [[___domain of a function|___domain]] the function is non-negative, so its [[absolute value]] is itself, i.e.
: <math>\forall x \in \mathbb{R}: R(x) \geq 0 </math>
and
: <math>\left| R(x) \right| = R(x)</math>
* '''Proof:''' by means of definition 2, the function is non-negative in the first quadrant and zero in the second, so it is non-negative everywhere.
=== Derivative ===
Its derivative is the [[Heaviside step function]]:
: <math>R'(x) = H(x)\quad \mbox{for } x \ne 0.</math>
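This follows from differentiating each piece of the piecewise definition separately:
: <math> R'(x) = \begin{cases} 1, & x > 0 \\ 0, & x < 0 \end{cases} \;=\; H(x) \quad \mbox{for } x \ne 0, </math>
while at {{math|1=''x'' = 0}} the ramp function is not differentiable (its left and right derivatives are 0 and 1, respectively).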
=== Second derivative ===
The ramp function satisfies the differential equation:
: <math> \frac{d^2}{dx^2} R(x - x_0) = \delta(x - x_0), </math>
where {{math|''δ''(''x'')}} is the [[Dirac delta]]. This means that {{math|''R''(''x'')}} is a [[Green's function]] for the second derivative operator. Thus, any function, {{math|''f''(''x'')}}, with an integrable second derivative, {{math|''f''″(''x'')}}, will satisfy the equation:
: <math> f(x) = f(a) + (x-a) f'(a) + \int_{a}^b R(x - s) f''(s) \,ds \quad \mbox{for } a < x < b .</math>
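As an illustrative numerical check (a sketch only; the test function {{math|1=''f''(''x'') = cos ''x''}} and the interval are chosen arbitrarily), the identity above can be verified by quadrature:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

# Verify  f(x) = f(a) + (x - a) f'(a) + integral_a^b R(x - s) f''(s) ds  for a < x < b,
# using f(x) = cos(x) as an arbitrary test function.
f    = np.cos
fp   = lambda s: -np.sin(s)    # f'
fpp  = lambda s: -np.cos(s)    # f''
ramp = lambda t: max(t, 0.0)   # R(t)

a, b, x = 0.0, 2.0, 1.3
integral, _ = quad(lambda s: ramp(x - s) * fpp(s), a, b)
reconstructed = f(a) + (x - a) * fp(a) + integral

print(reconstructed, f(x))   # both are approximately cos(1.3) ≈ 0.2675
</syntaxhighlight>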
=== [[Fourier transform]] ===
: <math> \mathcal{F}\big\{ R(x) \big\}(f) = \int_{-\infty}^{\infty} R(x) e^{-2\pi ifx}\,dx = \frac{i\,\delta'(f)}{4\pi} - \frac{1}{4\pi^{2}f^{2}}, </math>
where {{math|''δ''(''x'')}} is the [[Dirac delta]] (in this formula, its [[derivative]] appears).
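One way to obtain this expression (a short derivation under the ordinary-frequency convention used above) is to write {{math|1=''R''(''x'') = ''x'' ''H''(''x'')}} and apply the differentiation property of the Fourier transform to the known transform of the [[Heaviside step function]]:
: <math> \mathcal{F}\big\{ x H(x) \big\}(f) = \frac{i}{2\pi}\,\frac{d}{df}\,\mathcal{F}\big\{ H(x) \big\}(f) = \frac{i}{2\pi}\,\frac{d}{df}\left( \frac{\delta(f)}{2} + \frac{1}{2\pi i f} \right) = \frac{i\,\delta'(f)}{4\pi} - \frac{1}{4\pi^{2}f^{2}}. </math>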
=== [[Laplace transform]] ===
The single-sided [[Laplace transform]] of {{math|''R''(''x'')}} is given as follows,<ref>{{Cite web| url=https://lpsa.swarthmore.edu/LaplaceXform/FwdLaplace/LaplaceFuncs.html#Ramp| title=The Laplace Transform of Functions| website=lpsa.swarthmore.edu |access-date=2019-04-05}}</ref>
: <math> \mathcal{L}\big\{R(x)\big\}(s) = \int_{0}^{\infty} e^{-sx}R(x)\,dx = \frac{1}{s^2}. </math>
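This value follows directly by [[integration by parts]] (for {{math|Re(''s'') > 0}}):
: <math> \int_{0}^{\infty} x e^{-sx}\,dx = \left[ -\frac{x e^{-sx}}{s} \right]_{0}^{\infty} + \frac{1}{s}\int_{0}^{\infty} e^{-sx}\,dx = 0 + \frac{1}{s}\cdot\frac{1}{s} = \frac{1}{s^{2}}. </math>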
== Algebraic properties ==
=== Iteration invariance ===
Every [[iterated function]] of the ramp mapping is itself, as
: <math> R \big( R(x) \big) = R(x) .</math>
{{math proof | proof =
<math> R \big( R(x) \big) := \frac{R(x)+|R(x)|}{2} = \frac{R(x)+R(x)}{2} = R(x) .</math>
This applies the [[#Non-negativity|non-negative property]].}}
==See also==
* [[Tobit model]]<!-- Models non-negative output as ramp function of a latent variable. -->
* [[Rectifier (neural networks)]]<!-- Applications of ramp function and show some approximations. -->
== References ==
{{Reflist}}