 
==Derivation<ref>For a detailed derivation, see {{cite book|last=Rifkin|first=Ryan|title=Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning|year=2002|publisher=MIT (PhD thesis)|url=http://web.mit.edu/~9.520/www/Papers/thesis-rifkin.pdf}}</ref>==
To show that the SVM is indeed a special case of Tikhonov regularization, the Tikhonov regularization problem is first stated in terms of the hinge loss and then shown to be equivalent to traditional formulations of SVM. With the hinge loss <math>V(y_i,f(x_i)) = (1-y_i f(x_i))_+</math>, where <math>(s)_+ = \max(s,0)</math>, the regularization problem becomes:
 
<math>f = \text{arg}\min_{f\in\mathcal{H}}\left\{\frac{1}{n}\sum_{i=1}^n (1-y_i f(x_i))_+ +\lambda||f||^2_\mathcal{H}\right\}</math>.
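For concreteness, the hinge loss can be illustrated with a short numerical sketch (not part of the original derivation; NumPy and the helper name <code>hinge</code> are assumptions for illustration). A point classified correctly with margin at least 1 incurs zero loss, while margin violations are penalized linearly:

<syntaxhighlight lang="python">
import numpy as np

def hinge(y, fx):
    # V(y, f(x)) = (1 - y * f(x))_+ = max(1 - y * f(x), 0)
    return np.maximum(1.0 - y * fx, 0.0)

print(hinge(1.0, 2.0))    # 0.0  -- correct side, margin >= 1: no loss
print(hinge(1.0, 0.25))   # 0.75 -- correct side, but inside the margin
print(hinge(-1.0, 0.5))   # 1.5  -- misclassified: loss grows linearly
</syntaxhighlight>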
 
Since multiplying the objective by a positive constant does not change its minimizer, multiplying by <math>\frac{1}{2\lambda}</math> and setting <math>C = \frac{1}{2\lambda n}</math> yields
 
<math>f = \text{arg}\min_{f\in\mathcal{H}}\left\{C\sum_{i=1}^n (1-y_i f(x_i))_+ +\frac{1}{2}||f||^2_\mathcal{H}\right\}</math>,
 
which is equivalent to the standard SVM minimization problem.
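The equivalence can also be checked numerically. The following sketch (not from the article; the synthetic data, SciPy's general-purpose minimizer, and scikit-learn's <code>LinearSVC</code> are illustrative assumptions) minimizes the Tikhonov objective directly for a linear function <math>f(x) = \langle w, x\rangle</math> and compares the result with a linear SVM trained with <math>C = \tfrac{1}{2\lambda n}</math>; the two weight vectors should approximately agree, up to optimizer tolerance:

<syntaxhighlight lang="python">
# Illustrative check: Tikhonov regularization with hinge loss versus
# the standard linear SVM with C = 1 / (2 * lambda * n).
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n))

lam = 0.1  # regularization parameter lambda

def tikhonov_objective(w):
    # (1/n) * sum_i (1 - y_i <w, x_i>)_+  +  lambda * ||w||^2
    margins = 1.0 - y * (X @ w)
    return np.mean(np.maximum(margins, 0.0)) + lam * (w @ w)

# Nelder-Mead copes with the non-differentiable hinge in low dimensions
w_tik = minimize(tikhonov_objective, np.zeros(d), method="Nelder-Mead").x

# Standard SVM form: (1/2) * ||w||^2 + C * sum_i (1 - y_i <w, x_i>)_+
C = 1.0 / (2.0 * lam * n)
svm = LinearSVC(C=C, loss="hinge", dual=True, fit_intercept=False,
                max_iter=200_000)
svm.fit(X, y)
w_svm = svm.coef_.ravel()

print(w_tik, w_svm)  # the two weight vectors should roughly agree
</syntaxhighlight>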
 
==Notes and references==