|<math>\arctan(v)+\frac{1}{2}</math>
|<math>\tan(\eta-\frac{1}{2})</math>
|}<br />The sole minimizer of the expected risk, <math>f^*_{\phi}</math>, associated with the loss functions generated above can be found directly from equation (1) and shown to be equal to the corresponding <math>f(\eta)</math>. This holds even for nonconvex loss functions, which means that gradient-descent-based algorithms such as [[gradient boosting]] can be used to construct the minimizer.
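For instance, applying this statement to the row shown above, with <math>f(\eta)=\tan\left(\eta-\tfrac{1}{2}\right)</math> and <math>\eta(x)</math> denoting the conditional probability <math>p(y=1\mid x)</math>, the minimizer of the expected risk is

:<math>f^*_{\phi}(x) = \tan\left(\eta(x)-\tfrac{1}{2}\right),</math>

and applying the inverse link <math>f^{-1}(v)=\arctan(v)+\tfrac{1}{2}</math> to this minimizer recovers <math>\eta(x)</math>.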