In [[mathematical optimization]] and [[decision theory]], a '''loss function''' or '''cost function''' (sometimes also called an error function)<ref name="ttf2001">{{cite book|first1=Trevor |last1=Hastie |first2=Robert |last2=Tibshirani |authorlink2=Robert Tibshirani |first3=Jerome H. |last3=Friedman |authorlink3=Jerome H. Friedman |title=The Elements of Statistical Learning |publisher=Springer |year=2001 |isbn=0-387-95284-5 |page=18 |url=https://web.stanford.edu/~hastie/ElemStatLearn/}}</ref> is a function that maps an [[event (probability theory)|event]] or values of one or more variables onto a [[real number]] intuitively representing some "cost" associated with the event. An [[optimization problem]] seeks to minimize a loss function. An '''objective function''' is either a loss function or its opposite (in specific domains, variously called a [[reward function]], a [[profit function]], a [[utility function]], a [[fitness function]], etc.), in which case it is to be maximized.
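The two conventions are interchangeable: an objective to be maximized can always be recast as a loss to be minimized by negation, via the standard identity (stated here for illustration)
:<math>\max_{x} f(x) = -\min_{x} \bigl(-f(x)\bigr).</math>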
In statistics, a loss function is typically used for [[parameter estimation]], and the event in question is some function of the difference between the estimated and true values for an instance of data. The concept, as old as [[Pierre-Simon Laplace|Laplace]], was reintroduced in statistics by [[Abraham Wald]] in the middle of the 20th century.
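A common choice in estimation, given here as an illustration, is the squared-error loss, which scores an estimate <math>\hat{\theta}</math> of a true value <math>\theta</math> by the square of their difference:
:<math>L(\theta, \hat{\theta}) = (\theta - \hat{\theta})^2.</math>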
[[File:Comparison of loss functions.png|thumb|Comparison of common loss functions ([[Mean absolute error|MAE]], [[Symmetric mean absolute percentage error|SMAE]], [[Huber loss]], and log-cosh loss) used for regression]]
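The plotted losses can also be computed directly on a vector of residuals; the following is a minimal sketch in Python with NumPy (the function names and the <code>delta</code> parameter are illustrative choices, not taken from any particular library):
<syntaxhighlight lang="python">
import numpy as np

def absolute_loss(error):
    # Per-residual absolute loss |error|, the summand of MAE
    return np.abs(error)

def huber_loss(error, delta=1.0):
    # Quadratic near zero, linear in the tails; delta marks the transition
    quadratic = 0.5 * error ** 2
    linear = delta * (np.abs(error) - 0.5 * delta)
    return np.where(np.abs(error) <= delta, quadratic, linear)

def log_cosh_loss(error):
    # Smooth, everywhere-differentiable approximation to the absolute loss
    return np.log(np.cosh(error))

residuals = np.linspace(-3.0, 3.0, 7)
print(absolute_loss(residuals))
print(huber_loss(residuals))
print(log_cosh_loss(residuals))
</syntaxhighlight>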