Regularization perspectives on support vector machines

 
==Derivation<ref>For a detailed derivation, see {{cite book|last=Rifkin|first=Ryan|title=Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning|year=2002|publisher=MIT (PhD thesis)|url=http://web.mit.edu/~9.520/www/Papers/thesis-rifkin.pdf}}</ref>==
To show that SVM is indeed a special case of Tikhonov regularization with the hinge loss, this section first states the Tikhonov regularization problem with the hinge loss, then demonstrates that it is equivalent to traditional formulations of SVM. With the hinge loss, <math> V(y_i,f(x_i)) = (1-y_if(x_i))_+</math>, where <math>(s)_+ = \max(s,0)</math>, the regularization problem becomes:
 
<math>f = \operatorname{arg\,min}_{f\in\mathcal{H}}\left\{\frac{1}{n}\sum_{i=1}^n (1-y_if(x_i))_+ +\lambda\|f\|^2_\mathcal{H}\right\},</math>
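The objective above can be sketched numerically. This is a minimal illustration, not part of the article's derivation: it assumes a linear hypothesis <math>f(x) = \langle w, x\rangle</math>, so that <math>\|f\|^2_\mathcal{H} = \|w\|^2</math>, and minimizes the regularized hinge loss by subgradient descent (the hinge is not differentiable at the kink, so a subgradient is used where the margin constraint is active). All function names here are illustrative.

```python
import numpy as np

def hinge_objective(w, X, y, lam):
    """(1/n) * sum_i max(0, 1 - y_i <w, x_i>)  +  lam * ||w||^2."""
    margins = 1.0 - y * (X @ w)
    return np.mean(np.maximum(margins, 0.0)) + lam * np.dot(w, w)

def subgradient_descent(X, y, lam=0.01, lr=0.1, steps=500):
    """Minimize the regularized hinge objective for a linear model f(x) = <w, x>."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = 1.0 - y * (X @ w)
        # points with a positive margin term contribute a hinge subgradient
        active = (margins > 0.0).astype(float)
        grad = -(active * y) @ X / n + 2.0 * lam * w
        w = w - lr * grad
    return w
```

On a small linearly separable sample, e.g. `X = [[2,0],[1,1],[-2,0],[-1,-1]]` with labels `y = [1,1,-1,-1]`, the learned `w` classifies all points correctly via `sign(X @ w)`; with `w = 0` every hinge term equals 1, so the unregularized objective starts at 1.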