Winnow (algorithm): Difference between revisions

 
* If an example is correctly classified, do nothing.
* If an example is incorrectly predicted to be 1 and the correct result was 0, then for each feature <math>x_{i}=1</math>, the corresponding weight <math>w_{i}</math> is set to 0 (demotion step).<p><math>\forall x_{i} = 1, w_{i} = 0</math></p>
* If an example is incorrectly predicted to be 0 and the correct result was 1, then for each feature <math>x_{i}=1</math>, the corresponding weight <math>w_{i}</math> is multiplied by <math>\alpha</math> (promotion step).<p><math>\forall x_{i} = 1, w_{i} = \alpha w_{i}</math></p>
 
A typical value for <math>\alpha</math> is 2.
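The update rules above can be sketched as follows. This is a minimal illustration, not a reference implementation: the threshold <math>\Theta = n</math> (the number of features) is one common convention, and the function name and data layout are chosen here for clarity.

```python
def winnow_train(examples, n_features, alpha=2.0):
    """Train Winnow on (x, label) pairs, where x is a list of 0/1 ints.

    Weights start at 1; the threshold is set to n_features (a common
    convention, assumed here for illustration).
    """
    weights = [1.0] * n_features
    theta = n_features
    for x, label in examples:
        score = sum(w * xi for w, xi in zip(weights, x))
        prediction = 1 if score > theta else 0
        if prediction == label:
            continue  # correctly classified: do nothing
        if prediction == 1 and label == 0:
            # demotion step: zero the weights on active features
            weights = [0.0 if xi == 1 else w for w, xi in zip(weights, x)]
        else:
            # promotion step: multiply active-feature weights by alpha
            weights = [w * alpha if xi == 1 else w
                       for w, xi in zip(weights, x)]
    return weights
```

For example, training on a few instances of the target concept "the first feature is 1" promotes the first weight until the classifier separates positives from negatives.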
 
There are many variations to this basic approach. ''Winnow2''<ref name="littlestone88"/> is similar except that in the demotion step the weights are divided by <math>\alpha</math> instead of being set to 0. ''Balanced Winnow'' maintains two sets of weights, and thus two hyperplanes. This can then be generalized for [[multi-label classification]].
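The Winnow2 demotion step can be contrasted with the original in a few lines. This is a sketch under the same binary-feature assumptions as above; the helper name is chosen here for illustration.

```python
def winnow2_demote(weights, x, alpha=2.0):
    """Winnow2 demotion step: instead of setting active-feature
    weights to 0, divide them by alpha, so a feature's weight can
    recover through later promotions."""
    return [w / alpha if xi == 1 else w for w, xi in zip(weights, x)]
```

Dividing rather than zeroing makes demotions reversible, which is what distinguishes Winnow2 from the basic algorithm.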