:<math>p(\vec{x},y)=p(y\mid\vec{x}) p(\vec{x}).</math>
Within classification, several commonly used [[loss functions]] are written solely in terms of the product of the true label <math>y</math> and the predicted label <math>f(\vec{x})</math>. Therefore, they can be defined as functions of only one variable <math>\upsilon=y f(\vec{x})</math>, so that <math>V(f(\vec{x}),y) = \phi(yf(\vec{x})) = \phi(\upsilon)</math> with a suitably chosen function <math>\phi:\mathbb{R}\to\mathbb{R}</math>. These are called '''margin-based loss functions''', and choosing one amounts to choosing <math>\phi</math>. The selection of <math>\phi</math> within this framework determines the optimal <math>f^{*}_\phi</math> that minimizes the expected risk; see [[empirical risk minimization]].
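For example, the [[hinge loss]] used by [[support vector machine]]s and the logistic loss used in [[logistic regression]] correspond to the choices
:<math>\phi(\upsilon)=\max(0,\,1-\upsilon) \qquad \text{and} \qquad \phi(\upsilon)=\ln\!\left(1+e^{-\upsilon}\right),</math>
respectively; both penalize predictions whose margin <math>\upsilon=y f(\vec{x})</math> is small or negative.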
In the case of binary classification, it is possible to simplify the calculation of expected risk from the integral specified above. Specifically, since <math>y</math> takes only the two values <math>1</math> and <math>-1</math>, and since <math>p(\vec{x},y)=p(y\mid\vec{x})\,p(\vec{x})</math>, the integral over <math>y</math> reduces to a sum:
:<math>\int_{\mathcal{X}\times\mathcal{Y}} V(f(\vec{x}),y)\,p(\vec{x},y)\,d\vec{x}\,dy = \int_{\mathcal{X}} \left[\phi(f(\vec{x}))\,p(1\mid\vec{x})+\phi(-f(\vec{x}))\,p(-1\mid\vec{x})\right]p(\vec{x})\,d\vec{x}.</math>
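Since the bracketed term depends on <math>f</math> only through its value at the point <math>\vec{x}</math>, the minimizer <math>f^{*}_\phi</math> can be obtained by minimizing the conditional risk <math>\phi(f(\vec{x}))\,p(1\mid\vec{x})+\phi(-f(\vec{x}))\,p(-1\mid\vec{x})</math> pointwise, separately for each <math>\vec{x}</math>.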