Multiple kernel learning algorithms have been developed for supervised, semi-supervised, and unsupervised learning. Most work has been done on the supervised learning case with linear combinations of kernels. The basic idea behind multiple kernel learning algorithms is as follows: we begin with a set of <math>n</math> kernels <math>K_1,\dots,K_n</math>. In the linear case, we introduce a new kernel <math>K'=\sum_{i=1}^n\beta_iK_i</math>, where <math>\beta=(\beta_1,\dots,\beta_n)</math> is a vector of coefficients, one for each kernel. For a set of data <math>X</math> with labels <math>Y</math>, the minimization problem can then be written as
:<math>\min_{\beta,c}\mathrm{E}(Y, K'c)+R(K'c)</math>
where <math>\mathrm{E}</math> is an error function and <math>R</math> is a regularization term. Typically, <math>\mathrm{E}</math> is a convex loss function, such as the square loss or the hinge loss, and <math>R</math> is a norm or a combination of norms on the coefficients.
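The linear combination above can be sketched numerically. The following is a minimal illustration, not any particular MKL algorithm: it fixes the weights <math>\beta_i</math> uniformly rather than learning them, uses the square loss for <math>\mathrm{E}</math>, and assumes a ridge penalty <math>\lambda c^\top c</math> for <math>R</math>, which gives a closed-form solution for <math>c</math>. The kernel choices and all parameter values are illustrative assumptions.

```python
import numpy as np

# Toy data: 1-D inputs with noisy labels (illustrative, not from the article).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
Y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)

def rbf(X, gamma):
    """Gaussian (RBF) kernel matrix on the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Base kernels K_i: one linear kernel and two RBF kernels of different widths.
kernels = [X @ X.T, rbf(X, 1.0), rbf(X, 10.0)]

# Fixed uniform beta; a real MKL method would optimize beta jointly with c.
beta = np.full(len(kernels), 1.0 / len(kernels))
K_prime = sum(b * K for b, K in zip(beta, kernels))  # K' = sum_i beta_i K_i

# Square loss + ridge penalty: c = (K' + lam*I)^{-1} Y in closed form.
lam = 1e-2
c = np.linalg.solve(K_prime + lam * np.eye(len(X)), Y)

pred = K_prime @ c
print("training MSE:", np.mean((pred - Y) ** 2))
```

Optimizing <math>\beta</math> as well — for example by alternating between solving for <math>c</math> with <math>\beta</math> fixed and updating <math>\beta</math> with <math>c</math> fixed — is what distinguishes the MKL algorithms discussed in this article from this fixed-weight sketch.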
==MKL Libraries==