{{cleanup-context}}
'''Apriori''' is a classic algorithm for frequent item set mining and [[association rule learning]] over transactional databases.
As is common in association rule mining, given a set of ''itemsets'' (for instance, sets of retail transactions, each listing individual items purchased), the algorithm attempts to find item sets which are common to at least a minimum number of the transactions (the ''support'' threshold).
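For example, given the transactions {1,2,3,4}, {1,2,4}, {1,2}, {2,3,4}, {2,3}, {3,4}, {2,4} and a support threshold of 3, the frequent 1-item sets are {1}, {2}, {3}, {4}, and the frequent 2-item sets are {1,2}, {2,3}, {2,4}, {3,4}. After pruning, the only surviving candidate 3-item set is {2,3,4}, which occurs in just two transactions and is therefore not frequent.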
Apriori uses [[breadth-first search]] and a [[hash tree]] structure to count candidate item sets efficiently. It generates candidate item sets of length <math>k</math> from frequent item sets of length <math>k-1</math>, and then prunes the candidates which have an infrequent sub-pattern. According to the [[downward closure lemma]], the resulting candidate set contains all frequent <math>k</math>-length item sets. After that, it scans the transaction database to determine which of the candidates are frequent. To count candidate support quickly, the algorithm stores the candidate item sets in a hash tree, with item sets at the leaves and [[hash table]]s at internal nodes (Zaki, 99). Note that this is not the same kind of [[hash tree]] used in, for example, peer-to-peer systems.
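The level-wise structure described above can be illustrated with a minimal sketch in Python. This is not a reference implementation: the function and variable names are hypothetical, transactions are assumed to be sets of hashable items, and support is counted with plain dictionaries rather than the hash tree mentioned above.

<syntaxhighlight lang="python">
from itertools import combinations


def apriori(transactions, min_support):
    """Return all item sets appearing in at least min_support transactions.

    Illustrative sketch only: candidate support is counted with a plain
    dictionary instead of a hash tree.
    """
    # Pass 1: count individual items and keep the frequent 1-item sets.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s for s, c in counts.items() if c >= min_support}
    result = set(frequent)

    k = 2
    while frequent:
        # Generate candidate k-item sets by joining frequent (k-1)-item sets,
        # then prune any candidate with an infrequent (k-1)-subset
        # (downward closure).
        candidates = set()
        for a in frequent:
            for b in frequent:
                union = a | b
                if len(union) == k and all(
                    frozenset(sub) in frequent
                    for sub in combinations(union, k - 1)
                ):
                    candidates.add(union)
        # Scan the transaction database once to count candidate support.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {s for s, c in counts.items() if c >= min_support}
        result |= frequent
        k += 1
    return result


# The transactions from the worked example above, with support threshold 3.
transactions = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]
print(sorted(map(sorted, apriori(transactions, min_support=3))))
</syntaxhighlight>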
Apriori, while historically significant, suffers from a number of inefficiencies and trade-offs, which have spawned other algorithms. Candidate generation can produce large numbers of candidate item sets, and the algorithm must scan the transaction database once for each candidate length.
== Algorithm ==