Loss functions for classification

While more commonly used in regression, the square loss can be rewritten as a function <math>\phi(yf(\vec{x}))</math> of the margin <math>yf(\vec{x})</math> and used for classification. It is defined as
:<math>V(f(\vec{x}),y) = (1-yf(\vec{x}))^2.</math>
The square loss is both convex and smooth, and it matches the 0–1 [[indicator function]] when <math>yf(\vec{x}) = 0</math> and when <math>yf(\vec{x}) = 1</math>.
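The margin-based form above can be sketched in a few lines of Python (the function names here are illustrative, not from the article); it also checks the two margin values at which the square loss agrees with the 0–1 loss:

```python
def square_loss(y, fx):
    """Square loss for a label y in {-1, +1} and a prediction f(x):
    phi(v) = (1 - v)^2 evaluated at the margin v = y * f(x)."""
    return (1 - y * fx) ** 2

def zero_one_loss(y, fx):
    """0-1 indicator loss: 1 if f(x) fails to classify y correctly."""
    return 1.0 if y * fx <= 0 else 0.0

# The two losses agree where the margin y*f(x) equals 0 or 1:
for margin in (0.0, 1.0):
    assert square_loss(1, margin) == zero_one_loss(1, margin)

# Elsewhere they differ: square loss keeps penalizing confident
# correct predictions (margin > 1), unlike the 0-1 loss.
print(square_loss(1, 2.0))  # 1.0
print(zero_one_loss(1, 2.0))  # 0.0
```

Note that the quadratic growth for margins above 1 is what makes the square loss sensitive to outliers when used for classification.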
 
== Hinge Loss ==