Disjoint-set data structure

In [[computer science]], a '''disjoint-set data structure''', also called a '''union–find data structure''' or '''merge–find set''', is a [[data structure]] that stores a collection of [[Disjoint sets|disjoint]] (non-overlapping) [[Set (mathematics)|sets]]. Equivalently, it stores a [[partition of a set]] into disjoint [[subset]]s. It provides operations for adding new sets, merging sets (replacing them with their [[Union (set theory)|union]]), and finding a representative member of a set. The last operation makes it possible to determine efficiently whether any two elements belong to the same set or to different sets.
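
As a concrete illustration, the following minimal Python sketch (all names are illustrative) exposes these three operations; it omits the balancing and compression techniques described in the sections below:

<syntaxhighlight lang="python">
class DisjointSet:
    """A minimal union-find sketch: each element maps to a parent,
    and the root of each tree serves as the set's representative."""

    def __init__(self):
        self.parent = {}

    def make_set(self, x):
        """Add a new set whose only member (and representative) is x."""
        self.parent.setdefault(x, x)   # a root is its own parent

    def find(self, x):
        """Return the representative of the set containing x."""
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, x, y):
        """Replace the sets containing x and y with their union."""
        self.parent[self.find(x)] = self.find(y)


ds = DisjointSet()
for e in "abcd":
    ds.make_set(e)
ds.union("a", "b")
print(ds.find("a") == ds.find("b"))  # True: same set
print(ds.find("a") == ds.find("c"))  # False: different sets
</syntaxhighlight>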
 
While there are several ways of implementing disjoint-set data structures, in practice they are often identified with a particular implementation known as a '''disjoint-set forest'''. This is a specialized type of [[Forest (graph theory)|forest]] which performs union and find operations in near-constant [[Amortized analysis|amortized time]]. For a sequence of {{mvar|m}} addition, union, or find operations on a disjoint-set forest with {{mvar|n}} nodes, the total time required is {{math|[[Big O notation|''O'']](''m''α(''n''))}}, where {{math|α(''n'')}} is the extremely slow-growing [[inverse Ackermann function]]. Disjoint-set forests do not guarantee this performance on a per-operation basis: individual union and find operations can take longer than a constant times {{math|α(''n'')}} time, but each operation rebalances the structure (via tree compression) so that subsequent operations become faster. As a result, disjoint-set forests are both [[asymptotically optimal]] and practically efficient.
 
Disjoint-set data structures play a key role in [[Kruskal's algorithm]] for finding the [[minimum spanning tree]] of a graph. The importance of minimum spanning trees means that disjoint-set data structures support a wide variety of algorithms. In addition, these data structures find applications in symbolic computation and in compilers, especially for [[register allocation]] problems.
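
For example, given the <code>DisjointSet</code> sketch above, Kruskal's algorithm amounts to scanning the edges by increasing weight and keeping exactly those whose endpoints are still in different sets:

<syntaxhighlight lang="python">
def kruskal_mst(vertices, edges):
    """Sketch of Kruskal's algorithm; edges is a list of (weight, u, v)
    tuples. Returns a list of edges forming a minimum spanning tree."""
    ds = DisjointSet()                    # the union-find sketch above
    for v in vertices:
        ds.make_set(v)
    tree = []
    for weight, u, v in sorted(edges):    # edges in increasing weight order
        if ds.find(u) != ds.find(v):      # u, v in different components?
            ds.union(u, v)                # merge the two components
            tree.append((weight, u, v))
    return tree


print(kruskal_mst("abc", [(1, "a", "b"), (2, "b", "c"), (3, "a", "c")]))
# [(1, 'a', 'b'), (2, 'b', 'c')]
</syntaxhighlight>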
 
== History ==
 
Disjoint-set forests were first described by [[Bernard A. Galler]] and [[Michael J. Fischer]] in 1964.<ref name="Galler1964">{{cite journal|first1=Bernard A.|last1=Galler|author1-link=Bernard A. Galler|first2=Michael J.|last2=Fischer|author2-link=Michael J. Fischer|title=An improved equivalence algorithm|journal=[[Communications of the ACM]]|volume=7|issue=5|date=May 1964|pages=301–303|doi=10.1145/364099.364331|s2cid=9034016 |doi-access=free}}. The paper originating disjoint-set forests.</ref> In 1973, their time complexity was bounded by <math>O(\log^{*}(n))</math>, the [[iterated logarithm]] of <math>n</math>, by [[John Hopcroft|Hopcroft]] and [[Jeffrey Ullman|Ullman]].<ref name="Hopcroft1973">{{cite journal|last1=Hopcroft|first1=J. E.|author1-link=John Hopcroft|last2=Ullman|first2=J. D.|author2-link=Jeffrey Ullman|year=1973|title=Set Merging Algorithms|journal=SIAM Journal on Computing|volume=2|issue=4|pages=294&ndash;303|doi=10.1137/0202024}}</ref> In 1975, [[Robert Tarjan]] was the first to prove the <math>O(m\alpha(n))</math> ([[Ackermann function#Inverse|inverse Ackermann function]]) upper bound on the algorithm's time complexity.<ref name="Tarjan1984">{{cite journal|first1=Robert E.|last1=Tarjan|author1-link=Robert E. Tarjan|first2=Jan|last2=van Leeuwen|author2-link=Jan van Leeuwen|title=Worst-case analysis of set union algorithms|journal=Journal of the ACM|volume=31|issue=2|pages=245–281|year=1984|doi= 10.1145/62.2160|s2cid=5363073 |doi-access=free}}</ref> He also proved it to be tight. In 1979, he showed that this was the lower bound for a certain class of algorithms, [[pointer algorithm|pointer algorithms]], that include the Galler–Fischer structure.<ref name="Tarjan1979">{{cite journal|first=Robert Endre|last=Tarjan|author-link=Robert E. Tarjan|year=1979|title=A class of algorithms which require non-linear time to maintain disjoint sets|journal=Journal of Computer and System Sciences|volume=18|issue=2|pages=110&ndash;127|doi=10.1016/0022-0000(79)90042-4|doi-access=free }}</ref> In 1989, [[Michael Fredman|Fredman]] and [[Michael Saks (mathematician)|Saks]] showed that <math>\Omega(\alpha(n))</math> (amortized) words of <math>O(\log n)</math> bits must be accessed by ''any'' disjoint-set data structure per operation,<ref name="Fredman1989">{{cite book|first1=M.|last1=Fredman|author-link=Michael Fredman|first2=M.|last2=Saks|title=Proceedings of the twenty-first annual ACM symposium on Theory of computing - STOC '89 |chapter=The cell probe complexity of dynamic data structures |pages=345&ndash;354|date=May 1989|doi=10.1145/73007.73040|isbn=0897913078|s2cid=13470414|quote=Theorem 5: Any CPROBE(log ''n'') implementation of the set union problem requires Ω(''m'' α(''m'', ''n'')) time to execute ''m'' Find's and ''n''&minus;1 Union's, beginning with ''n'' singleton sets. |doi-access=free}}</ref> thereby proving the optimality of the data structure in this model.
 
In 1991, Galil and Italiano published a survey of data structures for disjoint sets.<ref name="Galil1991">{{cite journal|first1=Z.|last1=Galil|first2=G.|last2=Italiano|title=Data structures and algorithms for disjoint set union problems|journal=ACM Computing Surveys|volume=23|issue=3|pages=319&ndash;344|year=1991|doi=10.1145/116873.116878|s2cid=207160759 }}</ref>
In 2007, Sylvain Conchon and Jean-Christophe Filliâtre developed a semi-[[persistent data structure|persistent]] version of the disjoint-set forest data structure and formalized its correctness using the [[proof assistant]] [[Coq (software)|Coq]].<ref name="Conchon2007">{{cite conference|first1=Sylvain|last1=Conchon|first2=Jean-Christophe|last2=Filliâtre|contribution=A Persistent Union-Find Data Structure|title=ACM SIGPLAN Workshop on ML|___location=Freiburg, Germany|date=October 2007|url=https://www.lri.fr/~filliatr/puf/}}</ref> "Semi-persistent" means that previous versions of the structure are efficiently retained, but accessing previous versions of the data structure invalidates later ones. Their fastest implementation achieves performance almost as efficient as the non-persistent algorithm. They do not perform a complexity analysis.
 
Variants of disjoint-set data structures with better performance on a restricted class of problems have also been considered. Gabow and Tarjan showed that if the possible unions are restricted in certain ways, then a truly linear time algorithm is possible.<ref>Harold N. Gabow, Robert Endre Tarjan, "A linear-time algorithm for a special case of disjoint set union", ''Journal of Computer and System Sciences'', Volume 30, Issue 2, 1985, pp. 209–221, ISSN 0022-0000, https://doi.org/10.1016/0022-0000(85)90014-5.</ref> In particular, linear time is achievable if a "union tree" is given a priori. This is a tree that includes all elements of the sets. Let ''p''[''v''] denote the parent of ''v'' in the union tree; then the assumption is that union operations must have the form '''union'''(''v'', ''p''[''v'']) for some ''v''.
 
== Representation ==
 
In this and the following section we describe the most common implementation of the disjoint-set data structure, as a forest of
[[parent pointer tree]]s. This representation is known as '''Galler–Fischer trees'''.
 
Each node in a disjoint-set forest consists of a pointer and some auxiliary information, either a size or a rank (but not both). The pointers are used to make [[parent pointer tree]]s, where each node that is not the root of a tree points to its parent. To distinguish root nodes from others, their parent pointers have invalid values, such as a circular reference to the node or a sentinel value. Each tree represents a set stored in the forest, with the members of the set being the nodes in the tree. Root nodes provide set representatives: Two nodes are in the same set if and only if the roots of the trees containing the nodes are equal.
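
As a concrete sketch of this representation (using Python lists, with the self-pointer convention for roots):

<syntaxhighlight lang="python">
# A sketch of the forest representation: parent[i] is the parent of
# node i, and a root is marked by pointing to itself (the sentinel
# convention). size[i] is auxiliary information, meaningful only while
# i is a root, where it counts the nodes in i's tree.
n = 8
parent = list(range(n))   # eight singleton trees: every node is a root
size = [1] * n            # each tree initially contains one node

def is_root(i):
    """A node represents its set iff its parent pointer is itself."""
    return parent[i] == i
</syntaxhighlight>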
=== Making new sets ===

The <code>MakeSet</code> operation adds a new element {{mvar|x}} into a new set containing only {{mvar|x}}, and the new set is added to the data structure.
 
This operation has constant time complexity. In particular, initializing a disjoint-set forest with {{mvar|n}} nodes requires {{math|''O''(''n'')}} time.
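
In the list-based sketch from the [[#Representation|Representation]] section, <code>MakeSet</code> might look as follows (illustrative, with a size field rather than a rank):

<syntaxhighlight lang="python">
def make_set(parent, size):
    """Append a fresh singleton set to the forest and return the new
    element's index (constant amortized time for a list append)."""
    x = len(parent)
    parent.append(x)   # the new node is its own parent, i.e. a root
    size.append(1)     # its tree contains only the new node
    return x
</syntaxhighlight>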
 
=== Merging two sets ===
{{multiple image
 | direction = vertical
 | width = 360
 | image1 = Dsu disjoint sets init.svg
 | caption1 = <code>MakeSet</code> creates 8 singletons.
 | image2 = Dsu disjoint sets final.svg
 | caption2 = After some operations of <code>Union</code>, some sets are grouped together.
}}
 
The operation <code>Union(''x'', ''y'')</code> replaces the set containing {{mvar|x}} and the set containing {{mvar|y}} with their union. <code>Union</code> first uses <code>Find</code> to determine the roots of the trees containing {{mvar|x}} and {{mvar|y}}. If the roots are the same, there is nothing more to do. Otherwise, the two trees must be merged. This is done by either setting the parent pointer of {{mvar|x}}'s root to {{mvar|y}}'s, or setting the parent pointer of {{mvar|y}}'s root to {{mvar|x}}'s.
 
The choice of which node becomes the parent has consequences for the complexity of future operations on the tree. If it is done carelessly, trees can become excessively tall. For example, suppose that <code>Union</code> always made the tree containing {{mvar|x}} a subtree of the tree containing {{mvar|y}}. Begin with a forest that has just been initialized with elements <math>1, 2, 3, \ldots, n,</math> and execute <code>{{math|Union(1, 2)}}</code>, <code>{{math|Union(2, 3)}}</code>, ..., <code>{{math|Union(''n'' - 1, ''n'')}}</code>. The resulting forest contains a single tree whose root is {{mvar|n}}, and the path from 1 to {{mvar|n}} passes through every node in the tree. For this forest, the time to run <code>Find(1)</code> is {{math|''O''(''n'')}}.
 
In an efficient implementation, tree height is controlled using '''union by size''' or '''union by rank'''. Both of these require a node to store information besides just its parent pointer. This information is used to decide which root becomes the new parent. Both strategies ensure that trees do not become too deep.
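
A sketch of <code>Find</code> and of union by size over the list-based representation used above; a union-by-rank variant would instead compare ranks and increment the surviving root's rank when the two ranks are equal:

<syntaxhighlight lang="python">
def find(x, parent):
    """Return the root of x's tree, compressing the path on the way."""
    root = x
    while parent[root] != root:
        root = parent[root]
    while parent[x] != root:              # second pass: path compression
        parent[x], x = root, parent[x]
    return root

def union(x, y, parent, size):
    """Union by size: attach the smaller tree below the larger root."""
    rx, ry = find(x, parent), find(y, parent)
    if rx == ry:
        return                            # already in the same set
    if size[rx] < size[ry]:
        rx, ry = ry, rx                   # make rx the larger root
    parent[ry] = rx
    size[rx] += size[ry]
</syntaxhighlight>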
 
It is clear from the above implementations that the size and rank of a node do not matter unless a node is the root of a tree. Once a node becomes a child, its size and rank are never accessed again.
 
There is a variant of the <code>Union</code> operation in which the user determines the representative of the formed set. It is not hard to add this functionality to the above algorithms without losing efficiency.
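
One way to realize this variant (a sketch under the list-based encoding above) keeps the structural merge unchanged and simply records, at each root, which member the user designated as representative:

<syntaxhighlight lang="python">
def union_with_rep(x, y, r, parent, size, rep):
    """Merge the sets of x and y; r (assumed to be a member of one of
    the two sets) becomes the representative of the merged set. rep is
    a dictionary mapping a root index to its designated representative."""
    rx, ry = find(x, parent), find(y, parent)
    if rx != ry:
        if size[rx] < size[ry]:
            rx, ry = ry, rx               # attach smaller tree, as before
        parent[ry] = rx
        size[rx] += size[ry]
    rep[rx] = r                           # only the label changes hands

def representative(x, parent, rep):
    """Return the designated representative of x's set."""
    root = find(x, parent)
    return rep.get(root, root)
</syntaxhighlight>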
 
== Time complexity ==
{{anchor|increasing rank lemma}}Lemma 1: As the [[#Disjoint-set forests|find function]] follows the path toward the root, the ranks of the nodes it encounters are increasing.
 
{{math proof| We claim that as Find and Union operations are applied to the data structure, this fact remains true over time. Initially, when each node is the root of its own tree, it is trivially true. The only case in which the rank of a node might change is when the [[#Disjoint-set forests|Union by Rank]] operation is applied, and then a tree with smaller rank is attached to a tree with greater rank, rather than vice versa. During the find operation, all nodes visited along the path are attached to the root, which has larger rank than its children, so this operation does not change this fact either.}}
 
{{anchor|min subtree size lemma}}Lemma 2: A node {{mvar|u}} which is root of a subtree with rank {{mvar|r}} has at least <math>2^r</math> nodes.
Lemma 3: There can be at most <math>\frac{n}{2^r}</math> nodes of rank {{mvar|r}}.
{{math proof| From [[#min subtree size lemma|lemma 2]], we know that a node {{mvar|u}} which is root of a subtree with rank {{mvar|r}} has at least <math>2^r</math> nodes. We will get the maximum number of nodes of rank {{mvar|r}} when each node with rank {{mvar|r}} is the root of a tree that has exactly <math>2^r</math> nodes. In this case, the number of nodes of rank {{mvar|r}} is <math>\frac{n}{2^r}.</math>}}
 
At any particular point in the execution, we can group the vertices of the graph into "buckets", according to their rank. We define the buckets' ranges inductively, as follows: Bucket 0 contains vertices of rank 0. Bucket 1 contains vertices of rank 1. Bucket 2 contains vertices of ranks 2 and 3.
In general, if the {{mvar|B}}-th bucket contains vertices with ranks from interval <math>\left[r, 2^r - 1\right] = [r, R - 1]</math>, then the ({{mvar|B}}+1)-st bucket will contain vertices with ranks from interval <math>\left[R, 2^R - 1\right].</math>
 
 
For <math>B \in \mathbb{N}</math>, let <math>\text{tower}(B) = \underbrace{2^{2^{\cdots^2}}}_{B \text{ times}}</math>. Then
bucket <math>B</math> will have vertices with ranks in the interval <math>[\text{tower}(B-1), \text{tower}(B)-1]</math>.
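
For example, since <math>\text{tower}(2) = 4</math> and <math>\text{tower}(3) = 16</math>, bucket 3 contains the vertices with ranks 4 through 15, and bucket 4 contains those with ranks 16 through 65535.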
 
[[File:Proof_of_O(log*n)_Union_Find.jpg|center|frame|Proof of <math>O(\log^*n)</math> Union Find]]
We can make two observations about the buckets' sizes.
 
# {{anchor|max buckets}}The total number of buckets is at most {{math|log<sup>*</sup>''n''}}.
#: Proof: Since no vertex can have rank greater than <math>n</math>, only the first <math>\log^*(n)</math> buckets can have vertices, where <math>\log^*</math> denotes the inverse of the <math>\text{tower}</math> function defined above.
# {{anchor|max bucket size}}The maximum number of elements in bucket <math>\left[B, 2^B - 1\right]</math> is at most <math>\frac{2n}{2^B}</math>.
#: Proof: The maximum number of elements in bucket <math>\left[B, 2^B - 1\right]</math> is at most <math>\frac{n}{2^B} + \frac{n}{2^{B+1}} + \frac{n}{2^{B+2}} + \cdots + \frac{n}{2^{2^B - 1}} \leq \frac{2n}{2^B},</math> since the sum is dominated by a geometric series with ratio <math>\tfrac{1}{2}</math> and is therefore at most twice its first term.
 
== Deletion ==
The regular implementation as disjoint-set forests does not react favorably to the deletion of elements,
in the sense that the time for <code>Find</code> will not improve as a result of the decrease in the number of elements. However, there exist modern implementations that allow for constant-time deletion and where the time bound for <code>Find</code> depends on the ''current'' number of elements.<ref>{{cite journal |last1=Alstrup |first1=Stephen |last2=Thorup |first2=Mikkel |last3=Gørtz |first3=Inge Li |last4=Rauhe |first4=Theis |last5=Zwick |first5=Uri |title=Union-Find with Constant Time Deletions |journal= ACM Transactions on Algorithms|date=2014 |volume=11 |issue=1 |pages=6:1–6:28|doi=10.1145/2636922 |s2cid=12767012 }}</ref><ref>{{cite journal |last1=Ben-Amram |first1=Amir M. |last2=Yoffe |first2=Simon |title=A simple and efficient Union-Find-Delete algorithm |journal=Theoretical Computer Science |date=2011 |volume=412 |issue=4–5 |pages=487–492|doi=10.1016/j.tcs.2010.11.005 }}</ref>
 
==Backtracking==
It is possible to extend certain disjoint-set forest structures to allow backtracking. The basic form of backtracking is to allow a
<code>Backtrack(1)</code> operation that undoes the last <code>Union</code>. A more advanced form allows <code>Backtrack(''i'')</code>,
which undoes the last {{mvar|i}} unions. The following complexity result is known: there is a data structure which supports <code>Union</code>
and <code>Find</code> in <math>O(\log n / \log \log n)</math> time per operation, and <code>Backtrack</code> in <math>O(1)</math> time.<ref name="WT">{{Cite journal
| last1 = Westbrook
| first1 = Jeffery R.
| last2 = Tarjan
| first2 = Robert E.
| title = Amortized Analysis of Algorithms for Set Union with Backtracking
| journal = SIAM Journal on Computing
| volume = 18
| issue = 1
| pages = 1–11
| year = 1989
| doi = 10.1137/0218001
}}</ref> In this result, the freedom of <code>Union</code> to choose the representative of the formed set is essential.
Better amortized time cannot be achieved within the class of separable [[pointer algorithm]]s.<ref name="WT"/>
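
A common simplified scheme (a sketch only; it is not the Westbrook–Tarjan structure and gives a weaker <math>O(\log n)</math> bound for <code>Find</code>) omits path compression so that each union changes only one pointer, which an undo stack can then reverse in constant time:

<syntaxhighlight lang="python">
class RollbackDisjointSet:
    """Union-find with backtracking: union by size, no path compression,
    and an explicit undo stack (a simplified sketch, not the
    Westbrook-Tarjan structure)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.history = []              # stack of (child_root, parent_root)

    def find(self, x):
        # No path compression, so the trees are never restructured and
        # each union can be undone by resetting a single pointer.
        while self.parent[x] != x:
            x = self.parent[x]
        return x                       # O(log n): union by size bounds depth

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            self.history.append(None)  # record a no-op to keep undo aligned
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]
        self.history.append((ry, rx))

    def backtrack(self, i=1):
        """Undo the last i calls to union, most recent first."""
        for _ in range(i):
            record = self.history.pop()
            if record is not None:
                child, root = record
                self.size[root] -= self.size[child]
                self.parent[child] = child
</syntaxhighlight>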
 
== Applications ==