Sequential minimal optimization: Difference between revisions

| title = Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines
| url = http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.4376&rep=rep1&type=pdf
}}</ref> SMO is widely used for training support vector machines and is implemented in the popular LIBSVM tool.<ref>Chih-Chung Chang and Chih-Jen Lin (2001). ''[http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf LIBSVM: a Library for Support Vector Machines]''.</ref><ref>Luca Zanni (2006). ''[http://jmlr.csail.mit.edu/papers/volume7/zanni06a/zanni06a.pdf Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems]''.</ref>
 
The publication of the SMO algorithm in 1998 generated a great deal of excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party [[Quadratic programming|QP]] solvers.<ref>{{Citation
== Optimization problem ==
{{main|Support vector machine#Formalization}}
Consider a [[binary classification]] problem with a dataset (''x''<sub>1</sub>, ''y''<sub>1</sub>), ..., (''x''<sub>''n''</sub>, ''y''<sub>''n''</sub>), where ''x''<sub>''i''</sub> is an input vector and {{nobr|''y''<sub>''i''</sub> ∈ {−1, +1} }} is its corresponding binary label. A soft-margin [[support vector machine]] is trained by solving an optimization problem whose [[Wolfe dual|dual]] is the following convex quadratic programming problem:
 
{{framebox|blue}}