when <math>p(1\mid x) \ne 0.5</math>, which matches that of the 0–1 indicator function. This conclusion makes the hinge loss quite attractive, as bounds can be placed on the difference between the expected risk and the sign of the hinge loss function.<ref name="mit" />
== Generalized smooth hinge loss ==
The generalized smooth hinge loss function with parameter <math>\alpha</math> is defined as
:<math>f^*_\alpha(z) \;=\; \begin{cases} \frac{\alpha}{\alpha + 1} - z & \text{if } z \le 0 \\ \frac{1}{\alpha + 1}z^{\alpha + 1} - z + \frac{\alpha}{\alpha + 1} & \text{if } 0 < z < 1 \\ 0 & \text{if } z \geq 1 \end{cases},</math>
where
:<math>z = yf(\vec{x}).</math>
It is continuous, monotonically decreasing, and reaches zero when <math>z = 1</math>.
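A minimal numerical sketch of the piecewise definition above may be helpful; the function name and the use of Python/NumPy here are illustrative choices, not part of the article:

<syntaxhighlight lang="python">
import numpy as np

def generalized_smooth_hinge(z, alpha=1.0):
    """Generalized smooth hinge loss f*_alpha at the margin z = y * f(x)."""
    z = np.asarray(z, dtype=float)
    # Evaluate the polynomial branch only on [0, 1] to avoid invalid powers.
    zc = np.clip(z, 0.0, 1.0)
    smooth = zc ** (alpha + 1.0) / (alpha + 1.0) - zc + alpha / (alpha + 1.0)
    linear = alpha / (alpha + 1.0) - z  # branch for z <= 0
    return np.where(z <= 0.0, linear, np.where(z < 1.0, smooth, 0.0))

# The loss is continuous at the breakpoints and zero for z >= 1:
print(generalized_smooth_hinge([-1.0, 0.0, 0.5, 1.0, 2.0], alpha=2.0))
</syntaxhighlight>

Note that the two branches agree at <math>z = 0</math> (both equal <math>\tfrac{\alpha}{\alpha+1}</math>) and the polynomial branch vanishes at <math>z = 1</math>, which is what makes the loss continuous and smooth.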
== Logistic loss ==
The logistic loss function is defined as
:<math>V(f(\vec{x}),y) = \frac{1}{\ln 2}\ln\left(1 + e^{-yf(\vec{x})}\right).</math>

== Exponential loss ==
The exponential loss function is defined as
:<math>V(f(\vec{x}),y) = e^{-yf(\vec{x})}</math>
It penalizes incorrect predictions more than the hinge loss and has a larger gradient.
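For comparison, a small sketch of both losses as functions of the margin <math>z = yf(\vec{x})</math>; Python/NumPy is again an illustrative choice, and the logistic loss is written with <code>logaddexp</code> for numerical stability:

<syntaxhighlight lang="python">
import numpy as np

def logistic_loss(z):
    # Logistic loss at the margin z = y * f(x), normalized by ln 2
    # so that the loss equals 1 at z = 0.
    return np.logaddexp(0.0, -z) / np.log(2.0)

def exponential_loss(z):
    # Exponential loss at the margin z = y * f(x).
    return np.exp(-z)

# For a badly misclassified point (z << 0) the exponential loss,
# and hence its gradient, grows much faster than the logistic loss:
z = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(logistic_loss(z))
print(exponential_loss(z))
</syntaxhighlight>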
== References ==