Multiple kernel learning
Multiple kernel learning algorithms have been developed for supervised, semi-supervised, and unsupervised learning. Most work has been done on the supervised learning case with linear combinations of kernels. The basic idea behind multiple kernel learning algorithms is as follows: we begin with a set of <math>n</math> kernels <math>K_1,\dots,K_n</math>. In the linear case, we introduce a new kernel <math>K'=\sum_{i=1}^n\beta_iK_i</math>, where each <math>\beta_i</math> is the coefficient weighting kernel <math>K_i</math>. For a set of data <math>X</math> with labels <math>Y</math>, the minimization problem can then be written as
:<math>\min_{\beta,c}\mathrm{E}(Y, K'c)+R(K'c)</math>
where <math>\mathrm{E}</math> is an error function and <math>R</math> is a regularization term. <math>\mathrm{E}</math> is typically the square loss function ([[Tikhonov regularization]]) or the hinge loss function (for [[Support vector machine|SVM]] algorithms), and <math>R</math> is usually an <math>\ell_n</math> norm or some combination of norms (i.e. [[elastic net regularization]]).
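As a minimal sketch of the linear case above: the snippet below forms the combined kernel <math>K'=\sum_i\beta_iK_i</math> from two RBF Gram matrices and fits the coefficients <math>c</math> under the square loss with Tikhonov regularization, which has the closed-form solution <math>c=(K'+\lambda I)^{-1}Y</math>. The kernel choices, bandwidths, and fixed weights <math>\beta</math> are illustrative assumptions; a real MKL algorithm would learn <math>\beta</math> jointly with <math>c</math> rather than fix it.

```python
import numpy as np

def combined_kernel(kernels, beta):
    """Linear combination K' = sum_i beta_i * K_i of precomputed Gram matrices."""
    return sum(b * K for b, K in zip(beta, kernels))

def fit_coefficients(K_prime, y, lam):
    """Minimize ||y - K'c||^2 + lam * c^T K' c (square loss + Tikhonov
    regularization); closed form c = (K' + lam*I)^{-1} y."""
    n = K_prime.shape[0]
    return np.linalg.solve(K_prime + lam * np.eye(n), y)

def rbf(X, gamma):
    """Gaussian (RBF) Gram matrix with bandwidth parameter gamma."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

# Toy data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sin(X[:, 0])

kernels = [rbf(X, 0.5), rbf(X, 5.0)]   # two base kernels, different scales
beta = [0.7, 0.3]                      # fixed weights; MKL would learn these
Kp = combined_kernel(kernels, beta)
c = fit_coefficients(Kp, y, lam=0.1)
preds = Kp @ c                         # in-sample predictions
```

With the weights fixed, this reduces to ordinary kernel ridge regression on <math>K'</math>; the MKL problem adds an outer optimization over <math>\beta</math>, usually with a norm constraint on <math>\beta</math> so the combination stays bounded.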
 
==MKL Libraries==