* If an example is correctly classified, do nothing.
* If an example is predicted incorrectly and the correct result was 0, for each feature <math>x_{i}=1</math>, the corresponding weight <math>w_{i}</math> is set to 0 (demotion step).
*: <math>\forall x_{i} = 1, w_{i} = 0</math>
* If an example is predicted incorrectly and the correct result was 1, for each feature <math>x_{i}=1</math>, the corresponding weight <math>w_{i}</math> is multiplied by <math>\alpha</math> (promotion step).
*: <math>\forall x_{i} = 1, w_{i} = \alpha w_{i}</math>
 
A typical value for <math>\alpha</math> is 2.
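
A minimal sketch of these update rules in Python is shown below; the prediction rule with threshold <math>\Theta = n/2</math>, the all-ones initial weights, and the function name are illustrative assumptions rather than part of the description above.

<syntaxhighlight lang="python">
# Sketch of one Winnow update, following the promotion/demotion rules above.
# The threshold Theta = n/2 and the all-ones starting weights are
# illustrative assumptions, not part of the update rules themselves.

def winnow_update(weights, x, y_true, alpha=2.0, threshold=None):
    """Apply one Winnow update for a 0/1 feature vector x with label y_true."""
    n = len(weights)
    if threshold is None:
        threshold = n / 2  # a common illustrative choice of threshold

    # Predict 1 when the weighted sum of active features exceeds the threshold.
    y_pred = 1 if sum(w for w, xi in zip(weights, x) if xi == 1) > threshold else 0

    if y_pred == y_true:
        return weights  # correctly classified: do nothing
    if y_true == 0:
        # Demotion step: set the weight of every active feature to 0.
        return [0.0 if xi == 1 else w for w, xi in zip(weights, x)]
    # Promotion step: multiply the weight of every active feature by alpha.
    return [alpha * w if xi == 1 else w for w, xi in zip(weights, x)]


# Example: start from all-ones weights and apply one update.
weights = [1.0] * 4
weights = winnow_update(weights, x=[1, 0, 1, 0], y_true=1)  # promotion
</syntaxhighlight>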
 
There are many variations to this basic approach. ''Winnow2''<ref name="littlestone88"/> is similar except that in the demotion step the weights are divided by <math>\alpha</math> instead of being set to 0. ''Balanced Winnow'' maintains two sets of weights, and thus two hyperplanes. This can then be generalized for [[multi-label classification]].
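
As a rough illustration of the Winnow2 variant, only the demotion step of the sketch above changes; the helper name below is hypothetical.

<syntaxhighlight lang="python">
# Winnow2 demotion step (sketch): divide the weights of active features
# by alpha instead of setting them to 0; the promotion step is unchanged.
def winnow2_demote(weights, x, alpha=2.0):
    return [w / alpha if xi == 1 else w for w, xi in zip(weights, x)]
</syntaxhighlight>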
 
==Mistake bounds==