Loss functions for classification

:<math>V(f(\vec{x}),y)=\mathbf{\theta}(-yf(\vec{x}))</math>
where <math>\mathbf{\theta}</math> indicates the [[Heaviside step function]].
However, this loss function is non-convex and non-smooth, which makes it intractable to optimize directly. As a result, one instead works with continuous, convex '''loss function surrogates''' that are tractable for standard learning algorithms. Some of these surrogates are described below.
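As a minimal sketch, the 0–1 loss above can be written directly in Python. The convention for the Heaviside step at zero varies; here a prediction of exactly zero is treated as a misclassification, which is one common choice and is an assumption, not something fixed by the formula above.

```python
def zero_one_loss(y, fx):
    """0-1 loss V(f(x), y) = step(-y*f(x)).

    y  : true label, either +1 or -1
    fx : real-valued classifier output f(x)

    Returns 1.0 when sign(f(x)) disagrees with y (counting f(x) == 0
    as a mistake, one common convention), else 0.0.
    """
    return 1.0 if y * fx <= 0 else 0.0
```

For example, `zero_one_loss(1, 2.5)` is `0.0` (correct side of the boundary) while `zero_one_loss(-1, 0.3)` is `1.0`. Note that this function is piecewise constant in `fx`, which is exactly why gradient-based optimization cannot use it directly.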
 
== Square Loss ==
Line 14 ⟶ 15:
== Hinge Loss ==
{{main|Hinge loss}}
The hinge loss is defined as
:<math>\ell(y) = \max(0, 1-yf(\vec{x})) = |1 - yf(\vec{x})|_{+}</math>
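A one-line Python sketch of the hinge loss, following the definition above: the loss is zero once the margin <math>yf(\vec{x})</math> reaches 1, and grows linearly as the margin falls below 1.

```python
def hinge_loss(y, fx):
    """Hinge loss max(0, 1 - y*f(x)) for labels y in {+1, -1}."""
    return max(0.0, 1.0 - y * fx)
```

For instance, a confidently correct prediction such as `hinge_loss(1, 2.0)` gives `0.0`, a correct but low-margin prediction such as `hinge_loss(1, 0.5)` still incurs a loss of `0.5`, and an incorrect prediction such as `hinge_loss(-1, 0.5)` gives `1.5`.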
 
== Logistic Loss ==