{{Short description|Method of using a pool of algorithms}}
In [[machine learning]], the '''weighted majority algorithm (WMA)''' is a meta learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which could be any type of learning algorithms, classifiers, or even real human experts.<ref name="LW94">{{cite journal
| last1 = Littlestone
| first1 = N.
| last2 = Warmuth
| first2 = M.
| title = The Weighted Majority Algorithm
| journal = Information and Computation
| volume = 108
| issue = 2
| date = 1994
| pages = 212–261
| doi=10.1006/inco.1994.1009| doi-access = free
| url = http://www.dklevine.com/archive/refs4575.pdf
}}</ref><ref name="LW89">{{cite conference
| last1 = Littlestone
| first1 = N.
| last2 = Warmuth
| first2 = M.
| title = Weighted Majority Algorithm
 | conference = IEEE Symposium on Foundations of Computer Science
| date = 1989}}</ref>
The algorithm assumes that we have no prior knowledge about the accuracy of the algorithms in the pool, but there are sufficient reasons to believe that one or more will perform well.
Assume that the problem is a binary [[decision problem]]. To construct the compound algorithm, a positive weight is given to each algorithm in the pool. The compound algorithm collects the weighted votes of all the algorithms in the pool and outputs the prediction with the larger total weight. If the compound algorithm makes a mistake, each algorithm in the pool that contributed to the wrong prediction has its weight multiplied by a fixed ratio β, where 0 < β < 1.
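The update rule above can be sketched in Python (an illustrative sketch, not code from the cited papers; the expert interface, stream format, and default value of β are assumptions for the example):

```python
def weighted_majority(experts, stream, beta=0.5):
    """Run the weighted majority scheme over a stream of labeled examples.

    experts: list of callables mapping an input to a prediction in {0, 1}.
    stream:  iterable of (x, true_label) pairs.
    beta:    discount ratio, 0 < beta < 1.
    Returns the number of mistakes made by the compound algorithm.
    """
    weights = [1.0] * len(experts)  # every algorithm starts with weight 1
    mistakes = 0
    for x, label in stream:
        votes = [e(x) for e in experts]
        # Collect weighted votes for each of the two outcomes.
        vote_for_1 = sum(w for w, v in zip(weights, votes) if v == 1)
        vote_for_0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if vote_for_1 > vote_for_0 else 0
        if prediction != label:
            mistakes += 1
            # Discount every algorithm that contributed to the wrong prediction.
            weights = [w * beta if v != label else w
                       for w, v in zip(weights, votes)]
    return mistakes
```

With two experts, one always right and one always wrong, the compound algorithm errs at most on the opening tie-broken rounds before the bad expert's weight is discounted away.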
It can be shown that the [[Upper and lower bounds|upper bound]] on the number of mistakes made in a given sequence of predictions from a pool of algorithms <math>\mathbf{A}</math> is
:<math>O(\log|\mathbf{A}| + m)</math>
if one algorithm in <math>\mathbf{A}</math> makes at most <math>m</math> mistakes.
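The constants hidden in the big-O can be made explicit. A commonly cited form of Littlestone and Warmuth's analysis (stated here as background, not quoted from this article; exact constants depend on the variant) bounds the number of mistakes <math>M</math> of the compound algorithm by
:<math>M \le \frac{\log|\mathbf{A}| + m\log(1/\beta)}{\log\frac{2}{1+\beta}},</math>
which for <math>\beta = 1/2</math> evaluates to roughly <math>2.41\,(m + \log_2|\mathbf{A}|)</math>.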
There are many variations of the weighted majority algorithm to handle different situations, like shifting targets, infinite pools, or randomized predictions.
==See also==
* [[Randomized weighted majority algorithm]]
==References==
<references/>
[[Category:Machine learning algorithms]]