{{Short description|Algorithm for frequent item set mining and association rule learning over transactional databases}}
{{Refimprove|date=September 2018}}
'''Apriori'''<ref name=apriori>Rakesh Agrawal and Ramakrishnan Srikant. [http://www.vldb.org/conf/1994/P487.PDF Fast algorithms for mining association rules]. Proceedings of the 20th International Conference on Very Large Data Bases, VLDB, pages 487-499, Santiago, Chile, September 1994.</ref> is an [[algorithm]] for frequent item set mining and [[association rule learning]] over [[relational database]]s. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine [[association rules]] which highlight general trends in the [[database]]: this has applications in domains such as [[market basket analysis]].
 
== Overview ==
 
The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on [[database]]s containing transactions (for example, collections of items bought by customers, or details of website visits or [[IP address]]es<ref>{{usurped|1=[https://web.archive.org/web/20210822191810/https://deductive.com/blogs/data-science-ip-matching/ The data science behind IP address matching]}} Published by deductive.com, September 6, 2018, retrieved September 7, 2018</ref>). Other algorithms are designed for finding association rules in data having no transactions ([[Winepi]] and Minepi), or having no timestamps ([[DNA sequencing]]). Each transaction is seen as a set of items (an ''itemset''). Given a threshold <math>C</math>, the Apriori algorithm identifies the item sets which are subsets of at least <math>C</math> transactions in the database.
 
Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as ''candidate generation''), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.
 
Apriori uses [[breadth-first search]] and a [[Hash tree (persistent data structure)|Hash tree]] structure to count candidate item sets efficiently. It generates candidate item sets of length <math>k</math> from item sets of length <math>k-1</math>. Then it prunes the candidates which have an infrequent subpattern. According to the [[downward closure lemma]], the candidate set contains all frequent <math>k</math>-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates.
 
The pseudocode for the algorithm is given below for a transaction database <math>T</math>, and a support threshold of <math>\varepsilon</math>. Usual set theoretic notation is employed, though note that <math>T</math> is a [[multiset]]. <math>C_k</math> is the candidate set for level <math>k</math>. At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma. <math>\mathrm{count}[c]</math> accesses a field of the data structure that represents candidate set <math>c</math>, which is initially assumed to be zero. Many details are omitted below; usually the most important part of the implementation is the data structure used for storing the candidate sets and counting their frequencies.
 
<math>
\begin{align}
& \mathrm{Apriori}(T,\varepsilon)\\
&\qquad L_1 \gets \{ \text{large 1-itemsets} \} \\
&\qquad k \gets 2\\
&\qquad \mathrm{\textbf{while}}~ L_{k-1} \neq \emptyset \\
&\qquad \qquad C_k \gets \{ c = a \cup \{b\} \mid a \in L_{k-1} \land b \not\in a, \{s \subseteq c \mid |s| = k-1 \} \subseteq L_{k-1} \}\\
&\qquad \qquad \mathrm{\textbf{for}~transactions}~t \in T\\
&\qquad \qquad\qquad D_t \gets \{c \in C_k \mid c \subseteq t \} \\
&\qquad \qquad\qquad \mathrm{\textbf{for}~candidates}~c \in D_t\\
&\qquad \qquad\qquad\qquad \mathit{count}[c] \gets \mathit{count}[c]+1\\
&\qquad \qquad L_k \gets \{c \in C_k \mid \mathit{count}[c] \geq \varepsilon \}\\
&\qquad \qquad k \gets k+1\\
&\qquad \mathrm{\textbf{return}}~\bigcup_k L_k
\end{align}
</math>
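For concreteness, the pseudocode can be rendered as a short Python sketch. The function name <code>apriori</code>, the parameter <code>min_support</code>, and the use of a plain dictionary in place of the hash tree are illustrative choices rather than part of the original formulation:

<syntaxhighlight lang="python">
from itertools import combinations


def apriori(transactions, min_support):
    """Return every itemset whose support count is at least min_support."""
    transactions = [frozenset(t) for t in transactions]

    # L_1: the frequent (large) 1-itemsets.
    items = {item for t in transactions for item in t}
    current = {frozenset([i]) for i in items
               if sum(1 for t in transactions if i in t) >= min_support}
    frequent = set(current)
    k = 2

    while current:
        # Candidate generation: join frequent (k-1)-itemsets and keep a
        # k-itemset only if all of its (k-1)-subsets are frequent
        # (downward closure).
        candidates = set()
        for a in current:
            for b in current:
                c = a | b
                if len(c) == k and all(frozenset(s) in current
                                       for s in combinations(c, k - 1)):
                    candidates.add(c)

        # Count candidate occurrences; a plain dict stands in for the
        # hash tree used in the original description.
        count = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    count[c] += 1

        current = {c for c in candidates if count[c] >= min_support}
        frequent |= current
        k += 1

    return frequent
</syntaxhighlight>

The candidate-generation loop mirrors the set-builder expression for <math>C_k</math> above: two frequent <math>(k-1)</math>-item sets are joined, and the union is kept only if every one of its <math>(k-1)</math>-element subsets is itself frequent.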
 
== Examples ==
Consider the following database, where each row is a transaction and each cell is an individual item of the transaction:
{| class="wikitable"
|-
| &alpha; || &beta; || &epsilon;
|-
| &alpha; || &beta; || &theta;
|-
| &alpha; || &beta; || &epsilon;
|-
| &alpha; || &beta; || &theta;
|}
 
The association rules that can be determined from this database are the following:
# 100% of sets with &alpha; also contain &beta;
# 50% of sets with &alpha;, &beta; also have &epsilon;
# 50% of sets with &alpha;, &beta; also have &theta;
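
As a quick check of these percentages, the confidences can be computed directly from the four transactions above (a minimal Python sketch; the Greek letters are spelled out as strings):

<syntaxhighlight lang="python">
transactions = [
    {"alpha", "beta", "epsilon"},
    {"alpha", "beta", "theta"},
    {"alpha", "beta", "epsilon"},
    {"alpha", "beta", "theta"},
]


def confidence(antecedent, consequent):
    """Share of transactions containing antecedent that also contain consequent."""
    matching = [t for t in transactions if antecedent <= t]
    return sum(1 for t in matching if consequent <= t) / len(matching)


print(confidence({"alpha"}, {"beta"}))             # 1.0 -> rule 1 (100%)
print(confidence({"alpha", "beta"}, {"epsilon"}))  # 0.5 -> rule 2 (50%)
print(confidence({"alpha", "beta"}, {"theta"}))    # 0.5 -> rule 3 (50%)
</syntaxhighlight>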
 
We can also illustrate this through a variety of examples.
Let the database of transactions consist of the following itemsets:
{| class="wikitable"
! Itemsets
|-
| {1,2,3,4}
|}
 
We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold.
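
Using the Python sketch from the pseudocode section, a run of this kind can be reproduced as follows (a hypothetical example: only the itemset {1,2,3,4} is taken from the table above, while the remaining transactions and the support threshold of 3 are illustrative assumptions):

<syntaxhighlight lang="python">
# Reuses apriori() from the sketch in the pseudocode section.
# Only {1, 2, 3, 4} comes from the table above; the other transactions
# and the support threshold of 3 are illustrative assumptions.
transactions = [
    {1, 2, 3, 4},
    {1, 2, 4},
    {1, 2},
    {2, 3, 4},
    {2, 3},
    {3, 4},
    {2, 4},
]

for itemset in sorted(apriori(transactions, min_support=3), key=len):
    print(sorted(itemset))  # frequent singletons and pairs; no triple reaches the threshold here
</syntaxhighlight>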
 
== Limitations ==
 
Apriori, while historically significant, suffers from a number of inefficiencies or trade-offs, which have spawned other algorithms. Candidate generation produces large numbers of subsets (the algorithm attempts to load the candidate set with as many subsets as possible before each scan of the database). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset <math>S</math> only after all <math>2^{|S|}-1</math> of its proper subsets have been considered (for a maximal frequent 10-item set, that is <math>2^{10}-1 = 1023</math> subsets).
 
The algorithm scans the database too many times, which reduces the overall performance. Because of this, the algorithm assumes that the database is permanently held in memory.
 
Also, both the time and space complexity of this algorithm are very high: <math>O\left(2^{|D|}\right)</math>, thus exponential, where <math>|D|</math> is the horizontal width (the total number of items) present in the database.
 
Later algorithms such as [[Max-Miner]]<ref>{{cite journal|author=Bayardo Jr, Roberto J.|title=Efficiently mining long patterns from databases|journal=ACM SIGMOD Record |volume=27|issue=2|year=1998|pages=85–93 |doi=10.1145/276305.276313 |url=http://www.cs.sfu.ca/CourseCentral/741/jpei/readings/baya98.pdf}}</ref> try to identify the maximal frequent item sets without enumerating their subsets, and perform "jumps" in the search space rather than a purely bottom-up approach.
 
== References ==