|}

The sole minimizer of the expected risk, <math>f^*_{\phi}</math>, associated with the above generated loss functions can be directly found from equation (1) and shown to be equal to the corresponding <math>f(\eta)</math>. This holds even for the nonconvex loss functions, which means that gradient-descent-based algorithms such as [[Gradient boosting|gradient boosting]] can be used to construct the minimizer.
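The pointwise reduction behind this fact can be written out explicitly (a brief sketch, assuming equation (1) denotes the expected risk <math>I[f] = \int \phi(yf(\vec{x}))\,p(\vec{x},y)\,d\vec{x}\,dy</math>, with <math>\eta(\vec{x}) = p(y=1\mid\vec{x})</math>):

<math>f^*_{\phi}(\vec{x}) = \underset{f \in \mathbb{R}}{\operatorname{arg\,min}}\ \left[\eta(\vec{x})\,\phi(f) + \left(1-\eta(\vec{x})\right)\phi(-f)\right],</math>

that is, the expected risk is minimized by minimizing the conditional risk separately at each <math>\vec{x}</math>, and for losses generated from (2) this pointwise minimum is attained exactly at <math>f(\eta)</math>. Note that this argument requires no convexity of <math>\phi</math>.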
== Square loss ==
While more commonly used in regression, the square loss function can be re-written as a function <math>\phi(yf(\vec{x}))</math> and utilized for classification. It can be generated using (2) and Table-I as follows:

<math>\phi(v) = (1-v)^2,</math>

so that the square loss takes the form <math>V(f(\vec{x}),y) = \phi(yf(\vec{x})) = (1 - yf(\vec{x}))^2</math>.
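A short worked substitution recovers this form (a sketch, assuming equation (2) is the generation formula <math>\phi(v) = C\left[f^{-1}(v)\right] + \left(1 - f^{-1}(v)\right)C'\left[f^{-1}(v)\right]</math> and that the Table-I entries for the square loss are the minimal conditional risk <math>C(\eta) = 4\eta(1-\eta)</math> and the optimal link <math>f(\eta) = 2\eta - 1</math>, so that <math>f^{-1}(v) = \tfrac{1+v}{2}</math>):

<math>\phi(v) = C\!\left[\tfrac{1+v}{2}\right] + \left(1 - \tfrac{1+v}{2}\right)C'\!\left[\tfrac{1+v}{2}\right] = \left(1 - v^2\right) - 2v(1-v) = (1-v)^2.</math>

Consistent with the preceding section, the pointwise minimizer of the conditional risk <math>\eta(1-f)^2 + (1-\eta)(1+f)^2</math> is <math>f^*(\vec{x}) = 2\eta(\vec{x}) - 1</math>.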