fixed overcounting
Citation bot (talk | contribs) m Alter: journal, isbn, pages. Add: citeseerx. | Headbomb
Line 23:
| url = http://dspace.mit.edu/handle/1721.1/17549
| title = Everything Old is New Again: a Fresh Look at Historical Approaches in Machine Learning
| journal = Ph.D.
| pages = 18
}}</ref>
Line 55:
== Related Work ==
The first approach to splitting a large SVM learning problem into a series of smaller optimization tasks was proposed by [[Bernhard Boser]], [[Isabelle Guyon]], and [[Vladimir Vapnik]].<ref name="ReferenceA">{{Cite book | doi = 10.1145/130385.130401| chapter = A training algorithm for optimal margin classifiers| title = Proceedings of the fifth annual workshop on Computational learning theory - COLT '92| pages = 144| year = 1992| last1 = Boser | first1 = B. E. | last2 = Guyon | first2 = I. M. | last3 = Vapnik | first3 = V. N. | isbn =
In 1997, [[E. Osuna]], [[R. Freund]], and [[F. Girosi]] proved a theorem which suggests a whole new family of QP algorithms for SVMs: the large QP problem can be broken down into a series of smaller QP sub-problems, and as long as at least one example violating the [[Karush–Kuhn–Tucker conditions|KKT conditions]] is added to each sub-problem, every step reduces the overall objective function while maintaining a feasible point.<ref>{{Cite book | doi = 10.1109/NNSP.1997.622408| chapter = An improved training algorithm for support vector machines| title = Neural Networks for Signal Processing [1997] VII. Proceedings of the 1997 IEEE Workshop| pages =
The SMO algorithm is closely related to a family of optimization algorithms called [[Bregman method]]s or row-action methods. These methods solve convex programming problems with linear constraints; they are iterative, with each step projecting the current primal point onto one constraint after another.<ref name = "Platt"/>
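As a minimal sketch of what "projecting the current primal point onto each constraint" means, the classical Kaczmarz iteration for a system of linear equality constraints is one of the simplest row-action methods; the function name and the toy two-constraint system below are illustrative, not taken from the cited sources:

```python
import numpy as np

def kaczmarz_projection(A, b, x0, sweeps=100):
    """Row-action sketch: cyclically project the current point onto each
    linear equality constraint a_i . x = b_i (Kaczmarz iteration)."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            # Orthogonal projection of x onto the hyperplane a_i . x = b_i
            x -= (a_i @ x - b_i) / (a_i @ a_i) * a_i
    return x

# Hypothetical toy system: two hyperplanes intersecting at (1, 1)
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([2.0, 0.0])
x = kaczmarz_projection(A, b, np.zeros(2))  # x ≈ [1., 1.]
```

When the system is consistent, the iterates converge to a point satisfying every constraint; SMO can be viewed in this spirit, except that each of its steps works on the two-variable sub-problem induced by the dual equality constraint rather than on one row at a time.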