Multiple kernel learning

:<math>f(x)=\sum_{i=1}^N\sum_{m=1}^P\alpha_i^mK_m(x_i^m,x^m)+b</math>
 
The parameters <math>\alpha_i^m</math> and <math>b</math> are learned by [[gradient descent]] on a coordinate basis. Each iteration of the descent identifies the best kernel column to add to the combined kernel; the model is then rerun to generate the optimal weights <math>\alpha_i</math> and <math>b</math>.
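
The greedy column-selection idea can be illustrated with a minimal sketch. This is not the algorithm of any particular reference: it assumes a squared-error loss, Gaussian base kernels, and a simple least-squares refit of the weights after each selection, and the names <code>greedy_mkl_fit</code>, <code>greedy_mkl_predict</code>, and <code>gammas</code> are illustrative rather than taken from an existing library.

<syntaxhighlight lang="python">
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dist)

def greedy_mkl_fit(X, y, gammas, n_iter=20):
    """Greedily build f(x) = sum_i sum_m alpha_i^m K_m(x_i, x) + b.

    At each iteration the kernel column most correlated with the current
    residual (squared loss assumed) is added to the combined kernel, and
    the weights alpha and the bias b are then refit by least squares.
    """
    N = X.shape[0]
    kernels = [rbf_kernel(X, X, g) for g in gammas]   # one N x N Gram matrix per base kernel
    selected = []                                     # chosen (kernel index m, sample index i) pairs
    coef, b = np.zeros(0), float(y.mean())            # current weights alpha_i^m and bias b
    for _ in range(n_iter):
        cols = (np.column_stack([kernels[m][:, i] for m, i in selected])
                if selected else np.zeros((N, 0)))
        residual = y - (cols @ coef + b)
        # Score every not-yet-selected kernel column against the residual.
        best, best_score = None, -np.inf
        for m, K in enumerate(kernels):
            scores = np.abs(K @ residual)             # K is symmetric, so columns equal rows
            for sm, si in selected:
                if sm == m:
                    scores[si] = -np.inf              # mask columns already in the model
            i = int(np.argmax(scores))
            if scores[i] > best_score:
                best, best_score = (m, i), scores[i]
        if best is None:                              # every column already selected
            break
        selected.append(best)
        # Rerun the model: refit alpha and b on the selected columns.
        cols = np.column_stack([kernels[m][:, i] for m, i in selected])
        A = np.column_stack([cols, np.ones(N)])
        sol, *_ = np.linalg.lstsq(A, y, rcond=None)
        coef, b = sol[:-1], float(sol[-1])
    return selected, coef, b

def greedy_mkl_predict(X_train, X_new, selected, coef, b, gammas):
    """Evaluate the learned f(x) at new points."""
    cols = np.column_stack([rbf_kernel(X_new, X_train[[i]], gammas[m]).ravel()
                            for m, i in selected])
    return cols @ coef + b
</syntaxhighlight>

Scoring each candidate column by its correlation with the residual mirrors matching-pursuit-style greedy selection: every step adds the single basis function that most reduces the remaining error before all of the weights are refit.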
 
===Semisupervised learning===