{{context}}
The '''winnow algorithm'''<ref>Littlestone, N. (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm". ''Machine Learning'' 2, 285–318.</ref> is a technique from [[machine learning]]. It is closely related to the [[Perceptron]], but it uses a multiplicative weight-update scheme that allows it to perform much better than the perceptron when many dimensions are irrelevant (hence its name). It is a simple algorithm, but it scales well to high-dimensional spaces. During training, winnow is shown a sequence of positive and negative examples, from which it learns a decision hyperplane.
==The algorithm==
The basic algorithm, Winnow1, is given as follows.
The instance space is <math>X=\{0,1\}^n</math>. The algorithm maintains non-negative weights <math>w_i</math> for <math>i\in \{1,\ldots,n\}</math>, which are initially all set to 1. When the learner is given an example <math>(x_1,\ldots,x_n)</math>, it applies the following prediction rule:
* '''If''' <math>\sum_{i=1}^n w_i x_i > \Theta </math>, '''then''' it predicts 1
* '''Otherwise''' it predicts 0
where <math>\Theta</math> is a real number called the ''threshold''. Good bounds are obtained if <math>\Theta=n/2</math>.
The update rule is (loosely):
* If an example is correctly classified, do nothing.
* If an example is predicted to be 1 but the correct result was 0, all of the weights involved in the mistake are set to 0 (the elimination step).
* If an example is predicted to be 0 but the correct result was 1, all of the weights involved in the mistake are multiplied by <math>\alpha</math> (the promotion step).
A good value for <math>\alpha</math> is 2.
Variations are also used; for example, Winnow2 divides the weights involved in a mistake by <math>\alpha</math> in the elimination step instead of setting them to 0.
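The prediction and update rules above can be sketched in Python as follows (a minimal illustration; the function names <code>winnow1_train</code> and <code>winnow1_predict</code> are chosen here for exposition and do not come from the original paper):

```python
def winnow1_predict(w, theta, x):
    """Predict 1 if the weighted sum of active features exceeds the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0

def winnow1_train(examples, n, alpha=2.0):
    """Run Winnow1 over a sequence of (x, y) pairs.

    x is a 0/1 vector of length n, y is the label in {0, 1}.
    Returns the learned weights and the threshold.
    """
    theta = n / 2.0        # threshold; theta = n/2 gives good bounds
    w = [1.0] * n          # weights are initially set to 1
    for x, y in examples:
        pred = winnow1_predict(w, theta, x)
        if pred == y:
            continue       # correctly classified: do nothing
        for i in range(n):
            if x[i] == 1:  # only weights involved in the mistake change
                if pred == 1 and y == 0:
                    w[i] = 0.0        # elimination step
                else:
                    w[i] *= alpha     # promotion step
    return w, theta
```

For instance, on examples labelled by a single relevant feature <math>x_1</math>, the promotion step quickly grows <math>w_1</math> past the threshold while the elimination step zeroes out irrelevant weights.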
==References==
===Citations and notes===
<references/>
[[Category:Classification algorithms]]