Multiple kernel learning
 
==Algorithms==
Multiple kernel learning algorithms have been developed for supervised, semi-supervised, and unsupervised learning. Most work has focused on the supervised case with linear combinations of kernels, although many other algorithms have been developed. The basic idea behind multiple kernel learning algorithms is to add an extra parameter to the minimization problem of the learning algorithm. As an example, consider supervised learning with a linear combination of a set of <math>n</math> kernels <math>K_1,\dots,K_n</math>. We introduce a new kernel <math>K'=\sum_{i=1}^n\beta_iK_i</math>, where <math>\beta</math> is a vector with one coefficient for each kernel. Because kernels are closed under addition (due to properties of [[reproducing kernel Hilbert spaces]]), this new function is still a kernel. For a set of data <math>X</math> with labels <math>Y</math>, the minimization problem can then be written as
 
:<math>\min_{\beta,c}\mathrm{E}(Y, K'c)+R(K,c)</math>

where <math>\mathrm{E}</math> is an error function and <math>R</math> is a regularization term.
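The combined kernel above can be illustrated with a minimal NumPy sketch. This is only an illustration, not a full multiple kernel learning solver: it assumes two hypothetical base kernels (an RBF kernel and a polynomial kernel), fixes the weight vector <math>\beta</math> rather than learning it, and uses the squared loss with a standard kernel ridge regularizer, for which the minimizing coefficients <math>c</math> have a closed form.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix of a Gaussian (RBF) base kernel
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Z, degree=2):
    # Gram matrix of a polynomial base kernel
    return (X @ Z.T + 1.0) ** degree

def combined_kernel(X, Z, beta):
    # K' = sum_i beta_i * K_i -- a linear combination of base kernels,
    # which is itself a valid kernel when the beta_i are non-negative
    kernels = [rbf_kernel(X, Z), poly_kernel(X, Z)]
    return sum(b * K for b, K in zip(beta, kernels))

# Toy data (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Y = np.sin(X[:, 0])

# Fixed kernel weights; a full MKL method would optimize beta jointly with c
beta = np.array([0.5, 0.5])
K = combined_kernel(X, X, beta)

# Squared-loss instance of min_c E(Y, K'c) + R(K, c):
# minimize ||Y - K c||^2 + lam * c^T K c  =>  c = (K + lam I)^{-1} Y
lam = 1e-2
c = np.linalg.solve(K + lam * np.eye(len(X)), Y)
pred = K @ c
```

A full MKL algorithm would alternate between (or jointly solve for) the coefficients <math>c</math> and the kernel weights <math>\beta</math>, typically under a constraint such as <math>\beta_i \ge 0</math> to keep <math>K'</math> positive semidefinite.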