Loss functions for classification

 
Given the binary nature of classification, a natural choice of loss function (assuming equal cost for [[false positives and false negatives]]) is the 0–1 [[indicator function]], which takes the value 0 if the predicted classification equals the true class, and 1 if it does not. Consequently, we could choose the loss function:
:<math>V(f(\vec{x}),y)=\mathbf{\theta}(-yf(\vec{x}))</math>
where <math>\mathbf{\theta}</math> indicates the [[Heaviside step function]].
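As a sketch of this definition, the 0–1 loss can be computed directly from the margin <math>yf(\vec{x})</math>: the Heaviside step of <math>-yf(\vec{x})</math> is 1 exactly when the sign of the prediction disagrees with the label. The function name below is illustrative, not from any standard library:

```python
import numpy as np

def zero_one_loss(f_x, y):
    """0-1 loss V(f(x), y) = theta(-y * f(x)).

    f_x : real-valued prediction f(x)
    y   : true label in {-1, +1}
    Returns 1.0 on misclassification (y*f(x) < 0), else 0.0.
    """
    # np.heaviside(t, 0.0) is 0 for t < 0, 0 at t == 0, and 1 for t > 0,
    # so a correct prediction (y*f(x) > 0) incurs zero loss.
    return float(np.heaviside(-y * f_x, 0.0))
```

For example, a confident correct prediction such as <code>zero_one_loss(2.0, 1)</code> gives 0, while any misclassification, however slight, gives 1 — which is also why this loss is difficult to optimize directly and is replaced by convex surrogates in the sections that follow.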
 
== Square Loss ==