Data stream clustering

In [[computer science]], '''data stream [[cluster analysis|clustering]]''' is defined as the clustering of data that arrive continuously, such as telephone records, multimedia data, and financial transactions. Data stream clustering is usually studied under the [[streaming algorithm|data stream model]] of computation, and the objective is, given a sequence of points, to construct a good clustering of the stream using a small amount of memory and time. <!-- in contrast to traditional clustering, where data are static. -->
 
 
== History ==
Data stream clustering has recently attracted attention for emerging applications that involve large amounts of streaming data. For clustering, [[k-means clustering|k-means]] is a widely used heuristic, but alternative algorithms have also been developed, such as [[k-medoids]], [[CURE data clustering algorithm|CURE]] and the popular{{citation needed|date=July 2017}} [[BIRCH (data clustering)|BIRCH]]. For data streams, one of the first results appeared in 1980<ref>{{cite journal | first1 = J. | last1 = Munro | first2 = M. | last2 = Paterson | title = Selection and Sorting with Limited Storage | journal = Theoretical Computer Science | volume = 12 | issue = 3 | pages = 315–323 | date = 1980 | doi = 10.1016/0304-3975(80)90061-4 | doi-access = free }}</ref> but the model was formalized in 1998.<ref>{{cite journal | first1 = M. | last1 = Henzinger | first2 = P. | last2 = Raghavan | first3 = S. | last3 = Rajagopalan | citeseerx = 10.1.1.19.9554 | title = Computing on Data Streams | journal = Digital Equipment Corporation, TR-1998-011 | date = August 1998 }}</ref>
 
== Definition ==
The problem of data stream clustering is defined as:
 
'''Input:''' a sequence of ''n'' points in a metric space and an integer ''k''.<br />
'''Output:''' ''k'' centers in the set of the ''n'' points so as to minimize the sum of distances from data points to their closest cluster centers.
 
This is the streaming version of the ''k''-median problem.
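To make the objective concrete, here is a minimal sketch in Python (the function name and sample points are illustrative, not from the literature) that evaluates the k-median cost of a candidate set of centers:

```python
import math

def kmedian_cost(points, centers):
    """k-median objective: sum over all points of the distance
    to the closest of the given centers."""
    return sum(min(math.dist(p, c) for c in centers) for p in points)

# Two tight groups of points; a candidate 2-median solution places
# one center in each group.
points = [(0, 0), (1, 0), (10, 10), (11, 10)]
centers = [(0.5, 0.0), (10.5, 10.0)]
print(kmedian_cost(points, centers))  # 2.0 (each point is 0.5 away)
```

The streaming version asks for (approximately) cost-minimizing centers while seeing each point only once and keeping far fewer than ''n'' points in memory.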
 
== Algorithms ==
<!-- Unlike online algorithms, algorithms for data stream clustering have only a bounded amount of memory available and they may be able to take action after a group of points arrives while online algorithms are required to take action after each point arrives. -->
 
==== STREAM ====
 
STREAM is an algorithm for clustering data streams described by Guha, Mishra, Motwani and O'Callaghan<ref name=cds>{{cite book | first1 = S. | last1 = Guha | first2 = N. | last2 = Mishra | first3 = R. | last3 = Motwani | first4 = L. | last4 = O'Callaghan | title = Proceedings 41st Annual Symposium on Foundations of Computer Science | chapter = Clustering data streams | citeseerx = 10.1.1.32.1927 | pages = 359–366 | date = 2000 | doi = 10.1109/SFCS.2000.892124 | isbn = 0-7695-0850-2 | s2cid = 2767180 }}</ref> which achieves a [[approximation algorithm|constant factor approximation]] for the ''k''-median problem in a single pass and using small space.
 
{{math theorem | STREAM can solve the ''k''-median problem on a data stream in a single pass, with time ''O''(''n''<sup>1+''ε''</sup>) and space ''θ''(''n''<sup>''ε''</sup>) up to a factor 2<sup>''O''(1/''ε'')</sup>, where ''n'' is the number of points and {{tmath|\varepsilon < 1/2}}.}}
 
To understand STREAM, the first step is to show that clustering can take place in small space (not caring about the number of passes). Small-Space<ref name=cds /> is a [[divide-and-conquer algorithm]] that divides the data, ''S'', into <math>\ell</math> pieces, clusters each one of them (using ''k''-means) and then clusters the centers obtained.
 
[[File:Small-Space.jpg | thumb | 440x140px | right | Small-Space Algorithm representation]]
 
 
'''Algorithm Small-Space(S)'''

{{ordered list
|1 = Divide ''S'' into <math>\ell</math> disjoint pieces <math>X_1, \ldots, X_{\ell}</math>.
|2 = For each ''i'', find {{tmath|O(k)}} centers in <math>X_i</math>, using ''k''-means. Assign each point in <math>X_i</math> to its closest center.
|3 = Let ''X''′ be the <math>O(\ell k)</math> centers obtained in (2), where each center ''c'' is weighted by the number of points assigned to it.
|4 = Cluster ''X''′ to find ''k'' centers.
}}
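The four steps above can be sketched in plain Python. This is a simplified, in-memory illustration rather than the streaming algorithm itself, and `weighted_kmeans` is only a toy stand-in for the O(k)-center clustering subroutine:

```python
import math
import random

def weighted_kmeans(points, weights, k, iters=10, seed=0):
    """Tiny weighted Lloyd's k-means; a placeholder for the clustering
    subroutine used in steps 2 and 4. Returns k centers and the total
    weight assigned to each."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    dim = len(points[0])
    for _ in range(iters):
        # assign each weighted point to its closest current center
        groups = [[] for _ in range(k)]
        for p, w in zip(points, weights):
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[i].append((p, w))
        # move each center to the weighted centroid of its group
        for i, g in enumerate(groups):
            if g:
                tot = sum(w for _, w in g)
                centers[i] = tuple(sum(p[d] * w for p, w in g) / tot
                                   for d in range(dim))
    return centers, [sum(w for _, w in g) for g in groups]

def small_space(S, k, l):
    """Small-Space(S): divide, cluster each piece, cluster the centers."""
    pieces = [S[i::l] for i in range(l)]        # step 1: l disjoint pieces
    X, W = [], []
    for piece in pieces:                        # step 2: centers per piece
        c, w = weighted_kmeans(piece, [1] * len(piece), k)
        X.extend(c)                             # step 3: weighted centers X'
        W.extend(w)
    centers, _ = weighted_kmeans(X, W, k)       # step 4: cluster X'
    return centers
```

The point of the construction is that only one piece (plus the weighted intermediate centers) needs to be held in memory at a time.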
 
Where, atif stepin Step (2) we userun a bi-criteriabicriteria ''{{tmath|(a,b)''}}-[[approximation algorithm | approximation algorithm]] which outputs at most ''ak'' medians with cost at most ''b'' <math>\forall</math>times the optimum k-Median solution and atin stepStep (3)4 we run a ''c''-approximation algorithm. Thethen the approximation factor of Small-Space() algorithm is ''{{tmath|2c(1+2b)+2b''}}. We can also generalize itSmall-Space so that it recursively calls itself <math>''i</math>'' times on a successively smaller set of weighted centers and achieves a constant factor approximation to the ''k''-median problem.
<br />
<br />
 
The problem with Small-Space is that the number of subsets <math>\ell</math> into which we partition ''S'' is limited, since the intermediate medians in ''X'' must be stored in memory. So, if ''M'' is the size of memory, we need to partition ''S'' into <math>\ell</math> subsets such that each subset fits in memory (<math>n/\ell \le M</math>) and so that the weighted <math>\ell k</math> centers also fit in memory (<math>\ell k < M</math>). But such an <math>\ell</math> may not always exist.
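The two memory constraints can be checked mechanically. A small sketch (illustrative numbers and function name, not from the source) that searches for a piece count satisfying both conditions:

```python
def feasible_ells(n, k, M):
    """Piece counts l for which each piece (about n/l points) fits in a
    memory of size M and the l*k weighted centers also fit."""
    return [l for l in range(1, n + 1) if n / l <= M and l * k < M]

# With a generous memory, many choices of l work ...
print(bool(feasible_ells(10**6, 10, 10**4)))  # True (e.g. l = 100)
# ... but with a small memory, no l satisfies both constraints.
print(bool(feasible_ells(10**6, 10, 10**2)))  # False
```

The second case is exactly the failure mode described above, and it is what motivates STREAM's multi-level reduction of medians.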
 
The STREAM algorithm solves the problem of storing intermediate medians and achieves better running time and space requirements. The algorithm works as follows:<ref name=cds />
{{ordered list
|1 = Input the first ''m'' points; using the randomized algorithm presented in<ref name=cds /> reduce these to {{tmath|O(k)}} (say 2''k'') points.
|2 = Repeat the above till we have seen ''m''<sup>2</sup>/(2''k'') of the original data points. We now have ''m'' intermediate medians.
|3 = Using a [[Local search (optimization)|local search]] algorithm, cluster these ''m'' first-level medians into 2''k'' second-level medians and proceed.
|4 = In general, maintain at most ''m'' level-''i'' medians and, on seeing ''m'' of them, generate 2''k'' level-(''i'' + 1) medians, with the weight of a new median being the sum of the weights of the intermediate medians assigned to it.
|5 = When we have seen all the original data points, we cluster all the intermediate medians into ''k'' final medians, using the primal dual algorithm.<ref>{{cite book | first1 = K. | last1 = Jain | first2 = V. | last2 = Vazirani | url = http://portal.acm.org/citation.cfm?id=796509 | title = Primal-dual approximation algorithms for metric facility ___location and k-median problems | journal = Proc. FOCS | pages = 2– | date = 1999 | isbn = 9780769504094 | series = Focs '99 }}</ref>
}}
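The level structure of the steps above can be sketched as follows. This is only a schematic of the bookkeeping: `reduce_fn` stands in for the randomized O(k)-median subroutine of Guha et al., and the final primal-dual clustering step is omitted; `toy_reduce` is a made-up, weight-preserving placeholder:

```python
def stream_cluster(stream, k, m, reduce_fn):
    """Maintain at most m weighted level-i medians; when a level fills,
    reduce its m entries to 2k level-(i+1) medians via reduce_fn."""
    levels = [[]]              # levels[i]: list of (point, weight) pairs
    for p in stream:
        levels[0].append((p, 1))
        i = 0
        while len(levels[i]) >= m:          # a level has filled up
            reduced = reduce_fn(levels[i], 2 * k)
            levels[i] = []
            if i + 1 == len(levels):
                levels.append([])
            levels[i + 1].extend(reduced)
            i += 1
    # A real implementation would now cluster all buffered medians into
    # k final medians (the primal-dual step); here we just return them.
    return [wp for level in levels for wp in level]

def toy_reduce(wps, t):
    """Toy stand-in for the median-reduction step: keep t entries and
    fold the remaining weight onto the first one (weight-preserving)."""
    kept, rest = wps[:t], wps[t:]
    extra = sum(w for _, w in rest)
    (p0, w0), tail = kept[0], kept[1:]
    return [(p0, w0 + extra)] + tail

medians = stream_cluster(range(20), k=1, m=4, reduce_fn=toy_reduce)
print(sum(w for _, w in medians))  # 20: total weight is conserved
```

Because each median carries the weight of the points it summarizes, the final offline clustering of the buffered medians approximates a clustering of the whole stream.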
 
==== Other algorithms ====
 
Other well-known algorithms used for data stream clustering are:
* [[BIRCH (data clustering)|BIRCH]]:<ref name="birch">{{cite journal | first1 = T. | last1 = Zhang | first2 = R. | last2 = Ramakrishnan | first3 = M. | last3 = Linvy | title = BIRCH: An efficient data clustering method for very large databases | journal = ACM SIGMOD Record |doi=10.1145/235968.233324 | date = 1996 | volume=25 | issue = 2 | pages=103–114| doi-access = free }}</ref> builds a hierarchical data structure to incrementally cluster the incoming points using the available memory and minimizing the amount of I/O required. The complexity of the algorithm is {{tmath|O(N)}} since one pass suffices to get a good clustering (though, results can be improved by allowing several passes).
* [[Cobweb (clustering)|COBWEB]]:<ref>{{cite journal | first = D. H. | last = Fisher | title = Knowledge Acquisition Via Incremental Conceptual Clustering | journal = Machine Learning | date = 1987 | doi=10.1023/A:1022852608280 | volume=2 | issue = 2 | pages=139–172| doi-access = free }}</ref><ref>{{cite journal | first = D. H. | last = Fisher | citeseerx = 10.1.1.6.9914 | title = Iterative Optimization and Simplification of Hierarchical Clusterings | journal = Journal of AI Research | volume = 4 | date = 1996 | arxiv = cs/9604103 | bibcode = 1996cs........4103F }}</ref> is an incremental clustering technique that keeps a [[hierarchical clustering]] model in the form of a [[Decision tree learning|classification tree]]. For each new point COBWEB descends the tree, updates the nodes along the way and looks for the best node to put the point on (using a [[Category utility| category utility function]]).
* [[C2ICM(incremental clustering)|C2ICM]]:<ref>{{cite journal | first = F. | last = Can | title = Incremental Clustering for Dynamic Information Processing | journal = ACM Transactions on Information Systems | volume = 11 | issue = 2 | date = 1993 | pages = 143–164 | doi=10.1145/130226.134466| s2cid = 1691726 | doi-access = free }}</ref> builds a flat partitioning clustering structure by selecting some objects as cluster seeds/initiators; a non-seed object is assigned to the seed that provides the highest coverage. The addition of new objects can introduce new seeds and falsify some existing seeds; during incremental clustering, new objects and the members of the falsified clusters are assigned to one of the existing new or old seeds.
* [[CluStream (data clustering)|CluStream]]:<ref>{{cite journal |last1=Aggarwal |first1=Charu C. |last2=Yu |first2=Philip S. |last3=Han |first3=Jiawei |last4=Wang |first4=Jianyong |title=A Framework for Clustering Evolving Data Streams |journal=Proceedings 2003 VLDB Conference |date=2003 |pages=81–92 |doi=10.1016/B978-012722442-8/50016-1 |isbn=9780127224428 |s2cid=2354576 |url=http://www.vldb.org/conf/2003/papers/S04P02.pdf |ref=CluStream}}</ref> uses micro-clusters that are temporal extensions of the [[BIRCH]]<ref name="birch" /> cluster feature vector, so that it can decide whether a micro-cluster should be newly created, merged or forgotten based on the analysis of the squared and linear sums of the current micro-clusters' data points and timestamps. At any point in time one can then generate macro-clusters by clustering these micro-clusters with an offline clustering algorithm like [[K-Means]], producing a final clustering result.
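Several of the algorithms above (BIRCH, and by extension CluStream's micro-clusters) rest on a cluster feature vector of counts, linear sums and squared sums, which is additive and therefore cheap to update and merge online. A minimal sketch of that data structure (class and method names are illustrative):

```python
class ClusterFeature:
    """BIRCH-style cluster feature (N, LS, SS): enough statistics to
    summarize a micro-cluster without storing its points."""
    def __init__(self, dim):
        self.n = 0                # N: number of points
        self.ls = [0.0] * dim     # LS: per-dimension linear sum
        self.ss = 0.0             # SS: sum of squared norms

    def insert(self, p):          # O(1) per arriving point
        self.n += 1
        for d, x in enumerate(p):
            self.ls[d] += x
        self.ss += sum(x * x for x in p)

    def merge(self, other):       # features are additive, so merging is O(1)
        self.n += other.n
        for d in range(len(self.ls)):
            self.ls[d] += other.ls[d]
        self.ss += other.ss

    def centroid(self):
        return [s / self.n for s in self.ls]

cf = ClusterFeature(2)
cf.insert((1.0, 0.0))
cf.insert((3.0, 0.0))
print(cf.centroid())  # [2.0, 0.0]
```

Quantities such as a cluster's radius can likewise be derived from (N, LS, SS), which is what lets these algorithms decide whether to absorb, split or discard clusters in one pass.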
 
== References ==
{{reflist}}
 
 
[[Category:Cluster analysis algorithms]]