Regularization perspectives on support vector machines

{{context|date=May 2012}}
'''Regularization perspectives on support vector machines''' interpret [[support vector machine]]s (SVMs) as a special case of [[Tikhonov regularization]], specifically Tikhonov regularization with the [[hinge loss]] as the [[loss function]]. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goal: to [[generalize]] without [[overfitting]]. SVM was first proposed in 1995 by [[Corinna Cortes]] and [[Vladimir Vapnik]], and was framed geometrically as a method for finding [[hyperplane]]s that can separate [[multidimensional]] data into two categories.<ref>{{cite journal|last=Cortes|first=Corinna|last2=Vapnik|first2=Vladimir|title=Support-Vector Networks|journal=Machine Learning|year=1995|volume=20|pages=273–297|doi=10.1007/BF00994018|url=http://www.springerlink.com/content/k238jx04hm87j80g/?MUD=MP}}</ref> This traditional geometric interpretation of SVMs provides useful intuition about how SVMs work, but is difficult to relate to other [[machine learning]] techniques for avoiding overfitting, such as [[regularization]], [[early stopping]], [[sparsity]] and [[Bayesian inference]]. Once it was discovered that SVM is also a [[special case]] of Tikhonov regularization, regularization perspectives on SVM provided the theory necessary to fit SVM within a broader class of algorithms.<ref>{{cite web|last=Rosasco|first=Lorenzo|title=Regularized Least-Squares and Support Vector Machines|url=http://www.mit.edu/~9.520/spring12/slides/class06/class06_RLSSVM.pdf}}
</ref><ref>{{cite book|last=Rifkin|first=Ryan|title=Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning|year=2002|publisher=MIT (PhD thesis)|url=http://web.mit.edu/~9.520/www/Papers/thesis-rifkin.pdf}}</ref>
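
Concretely, the regularization view identifies the SVM classifier with the solution of a Tikhonov minimization problem over a [[reproducing kernel Hilbert space]] <math>\mathcal{H}</math>. A standard statement of this problem (the notation here — a training set <math>\{(x_i, y_i)\}_{i=1}^n</math> with labels <math>y_i \in \{-1, +1\}</math> and a regularization parameter <math>\lambda > 0</math> — is supplied for illustration rather than taken from the cited sources) is

:<math>f = \underset{f \in \mathcal{H}}{\operatorname{argmin}} \left\{ \frac{1}{n} \sum_{i=1}^n V\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_{\mathcal{H}}^2 \right\},</math>

where <math>V(y, f(x)) = \max(0, 1 - y f(x))</math> is the hinge loss. Substituting a different loss function <math>V</math> into the same template yields a different regularization algorithm, which is what allows SVM to be compared directly with methods such as regularized least squares.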