Winnow (algorithm)

== Algorithm ==
 
The basic algorithm, Winnow1, is as follows. The instance space is <math>X=\{0,1\}^n</math>, that is, each instance is described as a set of [[Boolean-valued]] [[features (pattern recognition)|features]]. The algorithm maintains non-negative weights <math>w_i</math> for <math>i\in \{1,\ldots,n\}</math>, which are initially set to 1, one weight for each feature. When the learner is given an example <math>(x_1,\ldots,x_n)</math>, it applies the typical prediction rule for linear classifiers:
 
* '''If''' <math>\sum_{i=1}^n w_i x_i > \Theta </math>, '''then''' predict 1
* '''Otherwise''' predict 0

Here <math>\Theta</math> is a real number called the ''threshold''. Together with the weights, the threshold defines a dividing hyperplane in the instance space.

If the algorithm makes a mistake, the weights are updated as follows:

* If the algorithm predicts 0 but the correct answer is 1 (a false negative), the weights of all active features (those with <math>x_i=1</math>) are multiplied by <math>\alpha</math> (''promotion step'').
* If the algorithm predicts 1 but the correct answer is 0 (a false positive), the weights of all active features are set to 0 (''demotion step'').

A typical value of <math>\alpha</math> is 2.
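To make the procedure concrete, the following is a minimal Python sketch of Winnow1; the function name <code>winnow1</code> and the defaults <math>\alpha=2</math> and <math>\Theta=n/2</math> are illustrative choices, not fixed by the algorithm's definition.

<syntaxhighlight lang="python">
def winnow1(examples, n, alpha=2.0, theta=None):
    """Run Winnow1 over a sequence of (x, y) pairs, where x is a list of
    n Boolean (0/1) features and y is the 0/1 label. Returns the learned
    weights and the number of mistakes made."""
    if theta is None:
        theta = n / 2          # illustrative threshold; the bound below only needs theta >= 1/alpha
    w = [1.0] * n              # all weights are initially 1
    mistakes = 0
    for x, y in examples:
        prediction = 1 if sum(w[i] * x[i] for i in range(n)) > theta else 0
        if prediction == y:
            continue           # correct prediction: weights are unchanged
        mistakes += 1
        for i in range(n):
            if x[i] == 1:      # only active features are updated
                if y == 1:
                    w[i] *= alpha   # promotion step (false negative)
                else:
                    w[i] = 0.0      # demotion step (false positive)
    return w, mistakes
</syntaxhighlight>

Note that once a weight has been set to 0 by a demotion step it stays 0, since later promotions only multiply it by <math>\alpha</math>; variants such as Winnow2 instead divide the active weights by <math>\alpha</math> on demotion, keeping all weights positive.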
 
==Mistake bounds==
In certain circumstances, it can be shown that the number of mistakes Winnow makes as it learns has an [[Upper and lower bounds|upper bound]] that is independent of the number of instances with which it is presented. If the Winnow1 algorithm uses <math>\alpha > 1</math> and <math>\Theta \geq 1/\alpha</math> on a target function that is a <math>k</math>-literal monotone disjunction given by <math>f(x_1,\ldots,x_n)=x_{i_1}\cup \cdots \cup x_{i_k}</math>, then for any sequence of instances the total number of mistakes is bounded by:
<math>\alpha k ( \log_\alpha \Theta+1)+\frac{n}{\Theta}</math>.<ref>
Nick Littlestone (1989). "Mistake bounds and logarithmic linear-threshold learning algorithms". Technical report UCSC-CRL-89-11, University of California, Santa Cruz.</ref>
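As a concrete illustration (the numbers here are chosen for the example, not taken from the reference), let <math>\alpha=2</math>, <math>\Theta=n/2</math>, <math>n=1024</math>, and let the target be a <math>k=3</math>-literal monotone disjunction. The bound evaluates to

:<math>2 \cdot 3\,(\log_2 512+1)+\frac{1024}{512} = 6 \cdot 10 + 2 = 62,</math>

so at most 62 mistakes are made on any sequence of instances, however long. With this choice of threshold the number of features <math>n</math> enters only through the logarithm, which is what makes Winnow attractive when most features are irrelevant.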