{{context|date=May 2012}}
'''Regularization perspectives on support vector machines''' provide a way of interpreting [[support vector machine]]s (SVMs) in the context of other machine learning algorithms. SVM algorithms categorize [[multidimensional]] data, with the goal of fitting the [[training set]] data well while also avoiding [[overfitting]], so that the solution [[generalize]]s to new data points. [[Regularization]] algorithms likewise aim to fit training set data while avoiding overfitting. They do this by choosing a fitting function that has low error on the training set but is also not too complicated, where complicated functions are functions with high [[norm]]s in some [[function space]]. Specifically, [[Tikhonov regularization]] algorithms choose a function that minimizes the sum of the training set error and the function's norm. The training set error can be calculated with different [[loss function]]s. For example, [[regularized least squares]] is a special case of Tikhonov regularization that uses the [[squared error loss]] as the loss function.<ref>{{cite web|last=Rosasco|first=Lorenzo|title=Regularized Least-Squares and Support Vector Machines|url=http://www.mit.edu/~9.520/spring12/slides/class06/class06_RLSSVM.pdf}}</ref>
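
In symbols, and under one standard choice of notation that is not fixed by the text above (a training set of <math>n</math> pairs <math>(x_i, y_i)</math>, a loss function <math>V</math>, a regularization parameter <math>\lambda > 0</math>, and a hypothesis space <math>\mathcal{H}</math> of functions with norm <math>\|\cdot\|_\mathcal{H}</math>), the Tikhonov regularization problem can be sketched as

:<math>f = \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \left\{ \frac{1}{n} \sum_{i=1}^n V\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_\mathcal{H}^2 \right\}.</math>

Taking <math>V(y, f(x)) = (y - f(x))^2</math> recovers regularized least squares, while the SVM corresponds to the same scheme with the [[hinge loss]] <math>V(y, f(x)) = \max(0, 1 - y f(x))</math>.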