Loss functions for classification

 
which is also equivalent to setting the derivative of the conditional risk <math>C(\eta, f) = \eta \, \phi(f) + (1 - \eta) \, \phi(-f)</math>, where <math>\eta = p(1\mid x)</math>, equal to zero:

<math>\frac{\partial C(\eta, f)}{\partial f} = \eta \, \phi'(f) - (1 - \eta) \, \phi'(-f) = 0,</math>

which can be solved for <math>f</math>. Thus, minimizers for all of the loss function surrogates described below are easily obtained as functions of <math>p(1\mid x)</math> only.<ref name="mitlec" />
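For example, with the logistic loss <math>\phi(v) = \ln(1 + e^{-v})</math> (taken here without its normalizing constant, which does not affect the minimizer), the equation above becomes

<math>\eta \, \frac{e^{-f}}{1 + e^{-f}} = (1 - \eta) \, \frac{e^{f}}{1 + e^{f}},</math>

which simplifies to <math>e^{f} = \frac{\eta}{1 - \eta}</math> and hence

<math>f^* = \ln\left(\frac{\eta}{1 - \eta}\right) = \ln\left(\frac{p(1\mid x)}{1 - p(1\mid x)}\right),</math>

the log-odds of the positive class.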
 
Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for [[false positives and false negatives]]) would be the [[0-1 loss function]] (0–1 [[indicator function]]), which takes the value 0 if the predicted classification equals the true class and the value 1 if it does not. This selection is modeled by