'''Regularization perspectives on support vector machines''' interpret SVM as a special case of [[Tikhonov regularization]], specifically Tikhonov regularization with the [[hinge loss]] as the loss function. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goals: to [[generalize]] without [[overfitting]]. SVM was first proposed in 1995 by [[Corinna Cortes]] and [[Vladimir Vapnik]], and framed geometrically as a method for finding [[hyperplane]]s that can separate [[multidimensional]] data into two categories.<ref>{{cite journal|last=Cortes|first=Corinna|author2=Vladimir Vapnik|title=Support-Vector Networks|journal=Machine Learning|year=1995|volume=20|issue=3|pages=273–297|doi=10.1007/BF00994018}}
</ref><ref name="Lee 2012 67–81">{{cite journal|last=Lee|first=Yoonkyung|author1-link=Yoonkyung Lee|first2=Grace|last2=Wahba|author2-link=Grace Wahba|title=Multicategory Support Vector Machines|journal=Journal of the American Statistical Association|year=2004|volume=99|issue=465|pages=67–81|doi=10.1198/016214504000000098|url=http://www.tandfonline.com/doi/abs/10.1198/016214504000000098}}</ref> This regularization perspective has enabled detailed comparisons between SVM and other forms of Tikhonov regularization, and has provided theoretical grounding for why it is beneficial to use SVM's loss function, the hinge loss.<ref name="Rosasco 2004 1063–1076">{{cite journal|vauthors=Rosasco L, De Vito E, Caponnetto A, Piana M, Verri A|title=Are Loss Functions All the Same?|journal=Neural Computation|date=May 2004|volume=16|issue=5|pages=1063–1076|doi=10.1162/089976604773135104|url=http://www.mitpressjournals.org/doi/pdf/10.1162/089976604773135104|pmid=15070510}}</ref>
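In one standard formulation of this equivalence, the SVM estimator is the solution of a Tikhonov-regularized [[empirical risk minimization]] problem with the hinge loss: given training data <math>(x_1, y_1), \ldots, (x_n, y_n)</math> with labels <math>y_i \in \{-1, +1\}</math>, a [[reproducing kernel Hilbert space]] <math>\mathcal{H}</math> of candidate functions, and a regularization parameter <math>\lambda > 0</math>,

:<math>f^* = \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \left\{ \frac{1}{n} \sum_{i=1}^{n} \max\left(0,\, 1 - y_i f(x_i)\right) + \lambda \|f\|_{\mathcal{H}}^2 \right\},</math>

where the first term is the average hinge loss on the training data and the second term penalizes functions of large norm, trading fit to the data against the complexity of the estimator.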