Sequential minimal optimization

{{short description|Algorithm for solving the quadratic programming problem from training SVMs}}
{{Infobox Algorithm
|image=
 
== Related work ==
The first approach to splitting large SVM learning problems into a series of smaller optimization tasks was proposed by [[Bernhard Boser]], [[Isabelle Guyon]], and [[Vladimir Vapnik]].<ref name="ReferenceA">{{Cite book | doi = 10.1145/130385.130401| chapter = A training algorithm for optimal margin classifiers| title = Proceedings of the fifth annual workshop on Computational learning theory - COLT '92| pages = 144| year = 1992| last1 = Boser | first1 = B. E. | last2 = Guyon | first2 = I. M. | last3 = Vapnik | first3 = V. N. | isbn = 978-0897914970| citeseerx = 10.1.1.21.3818| s2cid = 207165665}}</ref> It is known as the "chunking algorithm". The algorithm starts with a random subset of the data, solves this sub-problem, and iteratively adds examples which violate the optimality conditions. One disadvantage of this algorithm is that the QP sub-problems it must solve grow with the number of support vectors. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.<ref name = "Platt"/>
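The chunking loop described above can be sketched as follows. This is an illustrative sketch, not the original implementation: <code>solve_dual_subproblem</code> stands in for the exact QP solver the chunking algorithm would use (here a simple projected-gradient ascent on the dual), and the KKT check ignores the bias term for brevity; all function names and parameters are assumptions.

```python
import numpy as np

def solve_dual_subproblem(K, y, alpha, idx, C, steps=200, lr=0.01):
    # Stand-in for the exact QP solver of the chunking algorithm:
    # projected gradient ascent on the SVM dual objective, restricted
    # to the working set `idx` (all other multipliers stay fixed).
    for _ in range(steps):
        grad = 1.0 - y[idx] * (K[idx] @ (alpha * y))
        alpha[idx] = np.clip(alpha[idx] + lr * grad, 0.0, C)
    return alpha

def kkt_violators(K, y, alpha, C, tol=1e-3):
    # Simplified KKT check (bias term omitted): a point violates
    # optimality if its margin disagrees with its multiplier's status.
    margins = y * (K @ (alpha * y))
    viol = ((alpha < C - tol) & (margins < 1.0 - tol)) | \
           ((alpha > tol) & (margins > 1.0 + tol))
    return np.flatnonzero(viol)

def chunking_svm(K, y, C=1.0, chunk=2, rounds=30, seed=0):
    # Start from a random subset of the data, solve that sub-problem,
    # and iteratively add examples violating the optimality conditions.
    rng = np.random.default_rng(seed)
    n = len(y)
    alpha = np.zeros(n)
    work = set(rng.choice(n, size=min(chunk, n), replace=False).tolist())
    for _ in range(rounds):
        alpha = solve_dual_subproblem(K, y, alpha, sorted(work), C)
        new = [i for i in kkt_violators(K, y, alpha, C) if i not in work]
        if not new:
            break
        work.update(new[:chunk])
    return alpha
```

The disadvantage noted above is visible here: the working set, and hence the inner QP, keeps growing as violators (which at the optimum include all support vectors) are accumulated.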
 
In 1997, [[E. Osuna]], [[R. Freund]], and [[F. Girosi]] proved a theorem which suggests a whole new set of QP algorithms for SVMs.<ref>{{Cite book | doi = 10.1109/NNSP.1997.622408| chapter = An improved training algorithm for support vector machines| title = Neural Networks for Signal Processing [1997] VII. Proceedings of the 1997 IEEE Workshop| pages = 276–285| year = 1997| last1 = Osuna | first1 = E. | last2 = Freund | first2 = R. | last3 = Girosi | first3 = F. | isbn = 978-0-7803-4256-9| citeseerx = 10.1.1.392.7405| s2cid = 5667586}}</ref> By virtue of this theorem, a large QP problem can be broken down into a series of smaller QP sub-problems. A sequence of QP sub-problems that always add at least one violator of the [[Karush–Kuhn–Tucker conditions|Karush–Kuhn–Tucker (KKT) conditions]] is guaranteed to converge. The chunking algorithm obeys the conditions of the theorem, and hence will converge.<ref name = "Platt"/> The SMO algorithm can be considered a special case of the Osuna algorithm, in which the size of each sub-problem is two and both Lagrange multipliers are replaced at every step with new multipliers chosen via good heuristics.<ref name = "Platt"/>
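The size-two sub-problem is what makes SMO special: with only two multipliers constrained by the linear equality, the sub-problem has a closed-form solution and no inner QP solver is needed. A minimal sketch of that analytic step follows; the bias update and Platt's working-set heuristics are omitted, degenerate pairs are simply skipped, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def smo_pair_update(K, y, alpha, b, i, j, C):
    # Analytic solution of the two-variable QP sub-problem solved at
    # each SMO step (sketch; bias update and pair-selection heuristics
    # are omitted). Preserves the constraint sum(alpha * y) = const.
    if i == j:
        return alpha, b
    f = K @ (alpha * y) + b                    # current decision values
    Ei, Ej = f[i] - y[i], f[j] - y[j]          # prediction errors
    # Box [L, H] for alpha[j] implied by 0 <= alpha <= C and the
    # equality constraint:
    if y[i] == y[j]:
        L = max(0.0, alpha[i] + alpha[j] - C)
        H = min(C, alpha[i] + alpha[j])
    else:
        L = max(0.0, alpha[j] - alpha[i])
        H = min(C, C + alpha[j] - alpha[i])
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]    # second derivative
    if eta <= 0 or L >= H:
        return alpha, b                        # skip degenerate pairs
    aj = float(np.clip(alpha[j] + y[j] * (Ei - Ej) / eta, L, H))
    ai = alpha[i] + y[i] * y[j] * (alpha[j] - aj)
    alpha = alpha.copy()
    alpha[i], alpha[j] = ai, aj
    return alpha, b
```

A full SMO implementation would repeat this update over pairs chosen heuristically (e.g. maximal |Ei - Ej|) and recompute the bias after each step.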