:<math>\phi(v)=C[f^{-1}(v)]+(1-f^{-1}(v))C'[f^{-1}(v)] = 2\sqrt{(\frac{e^{2v}}{1+e^{2v}})(1-\frac{e^{2v}}{1+e^{2v}})}+(1-\frac{e^{2v}}{1+e^{2v}})(\frac{1-\frac{2e^{2v}}{1+e^{2v}}}{\sqrt{\frac{e^{2v}}{1+e^{2v}}(1-\frac{e^{2v}}{1+e^{2v}})}}) = e^{-v}</math>
The exponential loss is convex and grows exponentially for negative values, which makes it more sensitive to outliers.
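As a minimal sketch, the closed form <math>\phi(v)=e^{-v}</math> derived above can be evaluated directly (the function name `exponential_loss` is illustrative, not from the source):

```python
import math

def exponential_loss(v):
    # Exponential loss phi(v) = exp(-v): penalizes negative margins
    # exponentially, hence its sensitivity to outliers.
    return math.exp(-v)

# Correctly classified points with large margin incur near-zero loss,
# while badly misclassified points (v << 0) incur unbounded loss.
```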
== Savage loss ==
The Savage loss is named in honor of [[Leonard Jimmie Savage|L. J. Savage]] and can be generated using (2) and Table-I as follows:
:<math>\phi(v)=C[f^{-1}(v)]+(1-f^{-1}(v))C'[f^{-1}(v)] = (\frac{e^v}{1+e^v})(1-\frac{e^v}{1+e^v})+(1-\frac{e^v}{1+e^v})(1-\frac{2e^v}{1+e^v}) = \frac{1}{(1+e^v)^2}</math>
The Savage loss is quasi-convex and is bounded for large negative values, which makes it less sensitive to outliers. The Savage loss can be used in [[Gradient boosting|Gradient Boosting]] or the SavageBoost algorithm.<ref name=":0" />
== Hinge loss ==