== Related Work ==
The first approach to splitting large SVM learning problems into a series of smaller optimization tasks was proposed by [[Bernhard Boser]], [[Isabelle Guyon]], and [[Vladimir Vapnik]]. It is known as the "chunking algorithm". The algorithm starts with a random subset of the data, solves this problem, and iteratively adds examples which violate the optimality conditions.
In 1997, [[E. Osuna]], [[R. Freund]], and [[F. Girosi]] proved a theorem which suggests a whole new set of QP algorithms for SVMs.<ref>{{Cite book | doi = 10.1109/NNSP.1997.622408| chapter = An improved training algorithm for support vector machines| title = Neural Networks for Signal Processing [1997] VII. Proceedings of the 1997 IEEE Workshop| pages = 276–285| year = 1997| last1 = Osuna | first1 = E. | last2 = Freund | first2 = R. | last3 = Girosi | first3 = F. | isbn = 978-0-7803-4256-9| citeseerx = 10.1.1.392.7405}}</ref> By virtue of this theorem, a large QP problem can be broken down into a series of smaller QP sub-problems. A sequence of QP sub-problems that always adds at least one violator of the [[Karush–Kuhn–Tucker conditions|Karush–Kuhn–Tucker (KKT) conditions]] is guaranteed to converge. The chunking algorithm obeys the conditions of the theorem, and hence will converge.<ref name = "Platt"/> The SMO algorithm can be considered a special case of the Osuna algorithm, in which the size of the optimization is two and both Lagrange multipliers are replaced at every step with new multipliers that are chosen via good heuristics.<ref name = "Platt"/>
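As an illustration of this decomposition scheme, the following is a minimal Python sketch of a simplified SMO training loop: the first multiplier of each working set is one that violates the KKT conditions, the second is chosen at random (a simplification; Platt's algorithm uses heuristics here), and the resulting two-variable sub-problem is solved analytically. The function names and the toy data are illustrative assumptions, not taken from the cited papers.

<syntaxhighlight lang="python">
# Sketch of a simplified SMO loop (illustrative, not Platt's full algorithm):
# each step re-optimizes a working set of exactly two Lagrange multipliers.
import numpy as np

def smo_simplified(X, y, C=1.0, tol=1e-3, max_passes=10):
    """Train a linear SVM dual using size-two working sets."""
    n = len(y)
    K = X @ X.T                       # linear kernel matrix
    alpha = np.zeros(n)
    b = 0.0
    passes = 0
    rng = np.random.default_rng(0)
    while passes < max_passes:
        changed = 0
        for i in range(n):
            # Prediction error for example i under the current multipliers.
            E_i = (alpha * y) @ K[:, i] + b - y[i]
            # Proceed only if alpha_i violates the KKT conditions.
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                j = rng.integers(n - 1)
                j += j >= i           # random second index j != i
                E_j = (alpha * y) @ K[:, j] + b - y[j]
                a_i_old, a_j_old = alpha[i], alpha[j]
                # Box constraints keeping alpha_j feasible after the joint step.
                if y[i] != y[j]:
                    L, H = max(0, a_j_old - a_i_old), min(C, C + a_j_old - a_i_old)
                else:
                    L, H = max(0, a_i_old + a_j_old - C), min(C, a_i_old + a_j_old)
                if L == H:
                    continue
                # Curvature of the objective along the constraint line.
                eta = K[i, i] + K[j, j] - 2 * K[i, j]
                if eta <= 0:
                    continue
                # Analytic optimum of the two-variable sub-problem, then clip.
                alpha[j] = np.clip(a_j_old + y[j] * (E_i - E_j) / eta, L, H)
                if abs(alpha[j] - a_j_old) < 1e-5:
                    continue
                # The equality constraint sum(alpha * y) = 0 fixes alpha_i.
                alpha[i] = a_i_old + y[i] * y[j] * (a_j_old - alpha[j])
                # Update the threshold b from whichever multiplier is unbound.
                b_i = b - E_i - y[i] * (alpha[i] - a_i_old) * K[i, i] \
                            - y[j] * (alpha[j] - a_j_old) * K[i, j]
                b_j = b - E_j - y[i] * (alpha[i] - a_i_old) * K[i, j] \
                            - y[j] * (alpha[j] - a_j_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b_i
                elif 0 < alpha[j] < C:
                    b = b_j
                else:
                    b = (b_i + b_j) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b

# Hypothetical toy data: two linearly separable classes.
X = np.array([[2.0, 2.0], [1.5, 1.0], [-1.0, -1.5], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, b = smo_simplified(X, y)
w = (alpha * y) @ X   # recover the primal weight vector for the linear kernel
</syntaxhighlight>

Because each sub-problem involves only two multipliers, the linear equality constraint determines one multiplier in terms of the other, which is what allows the sub-problem to be solved in closed form rather than by a numerical QP solver.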