Loss functions for classification
The Savage loss<ref name=":0" /> can be generated using (2) and Table-I as follows:
 
:<math>\phi(v)=C[f^{-1}(v)]+(1-f^{-1}(v))C'[f^{-1}(v)] = \left(\frac{e^v}{1+e^v}\right)\left(1-\frac{e^v}{1+e^v}\right)+\left(1-\frac{e^v}{1+e^v}\right)\left(1-\frac{2e^v}{1+e^v}\right) = \left(1-\frac{e^v}{1+e^v}\right)^2 = \frac{1}{(1+e^v)^2}.</math>
 
The Savage loss is quasi-convex and is bounded for large negative values, which makes it less sensitive to outliers. It has been used in [[gradient boosting]] and the SavageBoost algorithm.
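As a sanity check on the derivation above, a minimal sketch in plain Python (function names are illustrative, not from the source) can confirm numerically that the loss built from the generator C(η) = η(1 − η), its derivative C′(η) = 1 − 2η, and the logistic link f⁻¹(v) = e^v/(1 + e^v) agrees with the closed form 1/(1 + e^v)²:

```python
import math

def savage_loss(v):
    # Closed form: phi(v) = 1 / (1 + e^v)^2
    return 1.0 / (1.0 + math.exp(v)) ** 2

def savage_from_generator(v):
    # Same loss composed from the generator:
    #   C(eta) = eta * (1 - eta),  C'(eta) = 1 - 2 * eta,
    #   logistic link: eta = f^{-1}(v) = e^v / (1 + e^v)
    eta = math.exp(v) / (1.0 + math.exp(v))
    return eta * (1 - eta) + (1 - eta) * (1 - 2 * eta)

# The two forms agree for any margin value v
for v in [-3.0, -0.5, 0.0, 0.5, 3.0]:
    assert abs(savage_loss(v) - savage_from_generator(v)) < 1e-12
```

The loop also illustrates the boundedness noted above: as v decreases, savage_loss(v) approaches 1 rather than growing without bound, unlike convex surrogates such as the exponential loss.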