==Algorithms==
Multiple kernel learning algorithms have been developed for supervised, semi-supervised, and unsupervised learning. Most work has been done on the supervised learning case with linear combinations of kernels. The basic idea behind multiple kernel learning algorithms is as follows: we begin with a set of <math>n</math> kernels <math>K_1,\ldots,K_n</math>. In the linear case, we introduce a new kernel <math>K'=\sum_{i=1}^n\beta_iK_i</math>, where <math>\beta=(\beta_1,\ldots,\beta_n)</math> is a vector of coefficients, one for each kernel. For a set of data <math>X</math> with labels <math>Y</math>, the minimization problem can then be written as
:<math>\min_{\beta,c}\mathrm{E}(Y, K'c)+R(K'c)</math>
where <math>\mathrm{E}</math> is an error function and <math>R</math> is a regularization term. Typically <math>\mathrm{E}</math> is a convex loss such as the square loss or the hinge loss, and <math>R</math> is a norm or a combination of norms.
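A minimal sketch of the linear case, assuming scikit-learn, is shown below. The base kernels (an RBF and a linear kernel) and the coefficients <math>\beta_i</math> are illustrative choices fixed by hand, not the output of any particular MKL algorithm. The combined matrix <math>K'</math> is passed to a standard support vector machine as a precomputed kernel, which minimizes a hinge-loss error term plus a regularization term as in the formulation above.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

# Toy data standing in for X and Y above.
X, y = make_classification(n_samples=200, random_state=0)

# Base kernel matrices K_1, ..., K_n evaluated on the data.
kernels = [rbf_kernel(X, X), linear_kernel(X, X)]

# Illustrative fixed coefficients beta_i; an MKL algorithm would learn these.
beta = np.array([0.7, 0.3])

# K' = sum_i beta_i * K_i
K_prime = sum(b * K for b, K in zip(beta, kernels))

# An SVM minimizes a hinge-loss error term plus a norm penalty over c,
# so K' can be used directly as a precomputed kernel.
clf = SVC(kernel="precomputed").fit(K_prime, y)
print(clf.score(K_prime, y))  # training accuracy on the combined kernel
</syntaxhighlight>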
For supervised learning, there are many other algorithms that use different methods to learn the form of the kernel. The following categorization has been proposed by Gönen and Alpaydın (2011):<ref name=review>Gönen, Mehmet; Alpaydın, Ethem (2011). "Multiple Kernel Learning Algorithms". ''Journal of Machine Learning Research''. 12: 2211–2268. http://www.jmlr.org/papers/volume12/gonen11a/gonen11a.pdf</ref>
1. Fixed rules, such as the linear combination described above with the coefficients <math>\beta_i</math> set in advance. These do not require parameterization: the kernels are combined with preset rules like summation and multiplication, and no weighting is learned from the data.
2. Heuristic approaches. These use a parameterized combination function and set the parameters from a quantity computed for each kernel separately, such as its single-kernel performance or a similarity measure between the kernel matrix and the labels (see the sketch after this list).
For more information on these methods, see Gönen and Alpaydın (2011).<ref name=review/>
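As an illustration of the heuristic category, the sketch below sets each <math>\beta_i</math> proportional to the kernel–target alignment of <math>K_i</math> with the labels, assuming labels in <math>\{-1,+1\}</math>. Alignment-based weighting is one of several heuristics surveyed in the review above; the normalization used here is an arbitrary simplification, not a method from any specific paper.

<syntaxhighlight lang="python">
import numpy as np

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F),
    for labels y in {-1, +1}."""
    yyT = np.outer(y, y)
    return np.sum(K * yyT) / (np.linalg.norm(K) * np.linalg.norm(yyT))

def heuristic_weights(kernels, y):
    """One illustrative heuristic: set beta_i proportional to the
    alignment of K_i with the labels, normalized to sum to 1."""
    a = np.array([alignment(K, y) for K in kernels])
    return a / a.sum()

# Usage with the kernels and 0/1 labels from the previous sketch;
# the labels are recoded to {-1, +1} first:
# beta = heuristic_weights(kernels, 2 * y - 1)
</syntaxhighlight>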
==MKL Libraries==