Loss functions for classification

impacts the optimal <math>f^{*}_S</math> which [[empirical risk minimization |minimizes empirical risk]], as well as the computational complexity of the learning algorithm.
 
Given the binary nature of classification, a natural choice of loss function (assuming equal cost for [[false positives and false negatives]]) would be the [[0-1 loss function]] (0–1 [[indicator function]]), which takes the value 0 if the predicted classification equals the true class and 1 if it does not. This choice is modeled by
:<math>V(f(\vec{x}),y)=H(-yf(\vec{x}))</math>
where <math>H</math> indicates the [[Heaviside step function]].
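As a minimal sketch of the formula above, the 0–1 loss can be computed directly from a real-valued score <math>f(\vec{x})</math> and a label <math>y \in \{-1, +1\}</math> via the Heaviside step of <math>-yf(\vec{x})</math> (here using the convention <math>H(0)=0</math>, i.e. a score of exactly zero is not penalized; conventions at the boundary vary):

```python
import numpy as np

def zero_one_loss(f_x, y):
    """0-1 loss V(f(x), y) = H(-y * f(x)).

    f_x : real-valued classifier score f(x)
    y   : true label in {-1, +1}
    Returns 1.0 when sign(f_x) disagrees with y, else 0.0.
    """
    # np.heaviside(t, 0.0) is 0 for t < 0, 0 for t == 0, 1 for t > 0
    return np.heaviside(-y * f_x, 0.0)

# Correct prediction: y = +1, f(x) = 2.0  ->  loss 0.0
# Misclassification:  y = +1, f(x) = -0.5 ->  loss 1.0
```

Because this loss is piecewise constant (its derivative is zero almost everywhere), it cannot be minimized directly by gradient-based methods, which motivates the convex surrogate losses discussed elsewhere in the article.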