== History ==
Disjoint-set forests were first described by [[Bernard A. Galler]] and [[Michael J. Fischer]] in 1964.<ref name="Galler1964">{{cite journal|first1=Bernard A.|last1=Galler|author1-link=Bernard A. Galler|first2=Michael J.|last2=Fischer|author2-link=Michael J. Fischer|title=An improved equivalence algorithm|journal=[[Communications of the ACM]]|volume=7|issue=5|date=May 1964|pages=301–303|doi=10.1145/364099.364331|s2cid=9034016|doi-access=free}} The paper originating disjoint-set forests.</ref> In 1973, their time complexity was bounded by <math>O(\log^{*}(n))</math>, the [[iterated logarithm]] of <math>n</math>, by [[John Hopcroft|Hopcroft]] and [[Jeffrey Ullman|Ullman]].<ref name="Hopcroft1973">{{cite journal|last1=Hopcroft|first1=J. E.|author1-link=John Hopcroft|last2=Ullman|first2=J. D.|author2-link=Jeffrey Ullman|year=1973|title=Set Merging Algorithms|journal=SIAM Journal on Computing|volume=2|issue=4|pages=294–303|doi=10.1137/0202024}}</ref> In 1975, [[Robert Tarjan]] was the first to prove the <math>O(m\alpha(n))</math> ([[Ackermann function#Inverse|inverse Ackermann function]]) upper bound on the algorithm's time complexity.<ref name="Tarjan1975">{{cite journal|last=Tarjan|first=Robert Endre|author-link=Robert Tarjan|title=Efficiency of a Good But Not Linear Set Union Algorithm|journal=[[Journal of the ACM]]|volume=22|issue=2|year=1975|pages=215–225|doi=10.1145/321879.321884}}</ref>
In 1991, Galil and Italiano published a survey of data structures for disjoint sets.<ref name="Galil1991">{{cite journal|first1=Z.|last1=Galil|first2=G.|last2=Italiano|title=Data structures and algorithms for disjoint set union problems|journal=ACM Computing Surveys|volume=23|issue=3|pages=319–344|year=1991|doi=10.1145/116873.116878|s2cid=207160759}}</ref>
In 2007, Sylvain Conchon and Jean-Christophe Filliâtre developed a semi-[[persistent data structure|persistent]] version of the disjoint-set forest data structure and formalized its correctness using the [[proof assistant]] [[Coq (software)|Coq]].<ref name="Conchon2007">{{cite conference|first1=Sylvain|last1=Conchon|first2=Jean-Christophe|last2=Filliâtre|contribution=A Persistent Union-Find Data Structure|title=ACM SIGPLAN Workshop on ML|___location=Freiburg, Germany|date=October 2007|url=https://www.lri.fr/~filliatr/puf/}}</ref> "Semi-persistent" means that previous versions of the structure are efficiently retained, but accessing previous versions of the data structure invalidates later ones. Their fastest implementation is almost as efficient as the non-persistent algorithm. They do not perform a complexity analysis.
Variants of disjoint-set data structures with better performance on a restricted class of problems have also been considered. Gabow and Tarjan showed that if the possible unions are restricted in certain ways, then a truly linear time algorithm is possible.<ref>{{cite journal|first1=Harold N.|last1=Gabow|first2=Robert Endre|last2=Tarjan|title=A linear-time algorithm for a special case of disjoint set union|journal=Journal of Computer and System Sciences|volume=30|issue=2|year=1985|pages=209–221|doi=10.1016/0022-0000(85)90014-5}}</ref> In particular, linear time is achievable if a "union tree" is given a priori. This is a tree that includes all elements of the sets. Let p[''v''] denote the parent of ''v'' in this tree; the assumption is then that union operations must have the form '''union'''(''v'', p[''v'']) for some ''v''.
== Representation ==
In this and the following section we describe the most common implementation of the disjoint-set data structure, as a forest of
[[parent pointer tree]]s. This representation is known as '''Galler–Fischer trees'''.
Each node in a disjoint-set forest consists of a pointer and some auxiliary information, either a size or a rank (but not both). The pointers are used to make parent pointer trees, where each node that is not the root of a tree points to its parent. To distinguish root nodes from others, their parent pointers have invalid values, such as a circular reference to the node or a sentinel value. Each tree represents a set stored in the forest, with the members of the set being the nodes in the tree. Root nodes provide set representatives: Two nodes are in the same set if and only if the roots of the trees containing the nodes are equal.
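A minimal sketch of this representation in Python (the array names <code>parent</code> and <code>size</code> are illustrative, and a root is marked here by the self-pointer convention mentioned above):

<syntaxhighlight lang="python">
# A disjoint-set forest over elements 0 .. n-1, stored as parallel arrays.
# parent[i] is the parent of node i; a root points to itself.
# size[i] is auxiliary information, meaningful only while i is a root.

def make_forest(n):
    parent = list(range(n))  # every element starts as the root of its own tree
    size = [1] * n           # every singleton set has size 1
    return parent, size

def find(parent, x):
    # Follow parent pointers until reaching a root (a self-loop).
    while parent[x] != x:
        x = parent[x]
    return x

parent, size = make_forest(8)
# Two nodes are in the same set if and only if their roots are equal:
print(find(parent, 3) == find(parent, 5))  # False: still 8 singletons
</syntaxhighlight>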
=== Merging two sets ===
{{multiple image
 | direction = vertical
 | width = 360
 | image1 = Dsu disjoint sets init.svg
 | caption1 = <code>MakeSet</code> creates 8 singletons.
 | image2 = Dsu disjoint sets final.svg
 | caption2 = After some operations of <code>Union</code>, some sets are grouped together.
}}
The operation <code>Union(''x'', ''y'')</code> replaces the set containing {{mvar|x}} and the set containing {{mvar|y}} with their union. <code>Union</code> first uses <code>Find</code> to determine the roots of the trees containing {{mvar|x}} and {{mvar|y}}. If the roots are the same, there is nothing more to do. Otherwise, the two trees must be merged. This is done by either setting the parent pointer of {{mvar|x}}'s root to {{mvar|y}}'s, or setting the parent pointer of {{mvar|y}}'s root to {{mvar|x}}'s.
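As an illustrative Python sketch, continuing the arrays introduced in the Representation sketch above, a <code>Union</code> that always grafts {{mvar|x}}'s root onto {{mvar|y}}'s root looks as follows; the next paragraphs explain why this naive choice is refined in practice:

<syntaxhighlight lang="python">
def union_naive(parent, x, y):
    # Determine the roots of the trees containing x and y.
    root_x = find(parent, x)
    root_y = find(parent, y)
    if root_x == root_y:
        return               # already in the same set; nothing more to do
    parent[root_x] = root_y  # graft x's tree onto y's root
</syntaxhighlight>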
The choice of which node becomes the parent has consequences for the complexity of future operations on the tree. If it is done carelessly, trees can become excessively tall. For example, suppose that <code>Union</code> always made the tree containing {{mvar|x}} a subtree of the tree containing {{mvar|y}}. Begin with a forest that has just been initialized with elements <math>1, 2, 3, \ldots, n,</math> and execute <code>Union(1, 2)</code>, <code>Union(2, 3)</code>, ..., <code>Union(''n'' − 1, ''n'')</code>. The resulting forest is a single tree whose root is {{mvar|n}}, and the path from 1 to {{mvar|n}} passes through every node in the tree, so <code>Find(1)</code> takes {{math|''O''(''n'')}} time.
In an efficient implementation, tree height is controlled using '''union by size''' or '''union by rank'''. Both of these require a node to store information besides just its parent pointer. This information is used to decide which root becomes the new parent. Both strategies ensure that trees do not become too deep.
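Both strategies can be sketched in the same illustrative Python, with <code>size</code> as in <code>make_forest</code> above and <code>rank</code> assumed to be initialized to all zeros:

<syntaxhighlight lang="python">
def union_by_size(parent, size, x, y):
    x, y = find(parent, x), find(parent, y)
    if x == y:
        return
    if size[x] < size[y]:
        x, y = y, x        # ensure x is the root of the larger tree
    parent[y] = x          # the smaller tree becomes a subtree of the larger
    size[x] += size[y]

def union_by_rank(parent, rank, x, y):
    x, y = find(parent, x), find(parent, y)
    if x == y:
        return
    if rank[x] < rank[y]:
        x, y = y, x        # ensure x has rank at least as large as y's
    parent[y] = x          # the lower-ranked root becomes a child
    if rank[x] == rank[y]:
        rank[x] += 1       # equal ranks: the merged tree's rank grows by one
</syntaxhighlight>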
It is clear from the above implementations that the size and rank of a node do not matter unless a node is the root of a tree. Once a node becomes a child, its size and rank are never accessed again.
There is a variant of the <code>Union</code> operation in which the user determines the representative of the formed set. It is not hard to add this functionality to the above algorithms without losing efficiency.
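One possible way to add this functionality, sketched in the same illustrative Python (the array <code>rep</code>, initialized to <code>list(range(n))</code>, is an assumption of this sketch, not a fixed convention): keep the balanced union unchanged and store the chosen representative alongside each root.

<syntaxhighlight lang="python">
# Sketch: a Union that lets the caller pick the representative, layered on
# union by size. rep[r] is the chosen representative of the set rooted at r.

def union_choose_rep(parent, size, rep, x, y, chosen):
    root_x, root_y = find(parent, x), find(parent, y)
    if root_x == root_y:
        rep[root_x] = chosen        # same set: only the representative changes
        return
    if size[root_x] < size[root_y]:
        root_x, root_y = root_y, root_x
    parent[root_y] = root_x         # balance by size as usual ...
    size[root_x] += size[root_y]
    rep[root_x] = chosen            # ... but report the caller's representative

def find_representative(parent, rep, x):
    return rep[find(parent, x)]
</syntaxhighlight>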
== Time complexity ==
{{math proof| From [[#min subtree size lemma|lemma 2]], we know that a node {{mvar|u}} which is root of a subtree with rank {{mvar|r}} has at least <math>2^r</math> nodes. We will get the maximum number of nodes of rank {{mvar|r}} when each node with rank {{mvar|r}} is the root of a tree that has exactly <math>2^r</math> nodes. In this case, the number of nodes of rank {{mvar|r}} is <math>\frac{n}{2^r}.</math>}}
At any particular point in the execution, we can group the vertices of the graph into "buckets", according to their rank. We define the buckets' ranges inductively, as follows: Bucket 0 contains vertices of rank 0. Bucket 1 contains vertices of rank 1. Bucket 2 contains vertices of ranks 2 and 3.
In general, if the {{mvar|B}}-th bucket contains vertices with ranks from the interval <math>\left[r, 2^r - 1\right] = [r, R - 1]</math>, where <math>R = 2^r</math>, then the ({{mvar|B}}+1)-st bucket will contain vertices with ranks from the interval <math>\left[R, 2^R - 1\right].</math>
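For example, the first few buckets are <math>[0, 0], [1, 1], [2, 3], [4, 15], [16, 65535], \ldots</math>: each bucket's upper endpoint arises from applying <math>r \mapsto 2^r - 1</math> to its lower endpoint, so the sequence of lower endpoints grows as an iterated exponential.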
[[File:Proof_of_O(log*n)_Union_Find.jpg|center|frame|Proof of <math>O(\log^*n)</math> Union Find]]
We can make two observations about the buckets:
# {{anchor|max buckets}}The total number of buckets is at most {{math|log<sup>*</sup>''n''}}.
The regular implementation as disjoint-set forests does not react favorably to the deletion of elements,
in the sense that the time for <code>Find</code> will not improve as a result of the decrease in the number of elements. However, there exist modern implementations that allow for constant-time deletion and where the time bound for <code>Find</code> depends on the ''current'' number of elements.<ref>{{cite journal |last1=Alstrup |first1=Stephen |last2=Thorup |first2=Mikkel |last3=Gørtz |first3=Inge Li |last4=Rauhe |first4=Theis |last5=Zwick |first5=Uri |title=Union-Find with Constant Time Deletions |journal=ACM Transactions on Algorithms |date=2014 |volume=11 |issue=1 |pages=6:1–6:28 |doi=10.1145/2636922 |s2cid=12767012}}</ref><ref>{{cite journal |last1=Ben-Amram |first1=Amir M. |last2=Yoffe |first2=Simon |title=A simple and efficient Union-Find-Delete algorithm |journal=Theoretical Computer Science |date=2011 |volume=412 |issue=4–5 |pages=487–492 |doi=10.1016/j.tcs.2010.11.005}}</ref>
==Backtracking==
It is possible to extend certain disjoint-set forest structures to allow backtracking. The basic form of backtracking is to allow a
<code>Backtrack(1)</code> operation that undoes the last <code>Union</code>. A more advanced form allows <code>Backtrack(''i'')</code>,
which undoes the last {{mvar|i}} unions. The following complexity result is known: there is a data structure which supports <code>Union</code>
and <code>Find</code> in <math>O(\log n / \log \log n)</math> time per operation, and <code>Backtrack</code> in <math>O(1)</math> time.<ref name="WT">{{Cite journal|last1=Westbrook|first1=Jeffery R.|last2=Tarjan|first2=Robert E.|title=Amortized Analysis of Algorithms for Set Union with Backtracking|journal=SIAM Journal on Computing|volume=18|issue=1|pages=1–11|year=1989|doi=10.1137/0218001}}</ref> In this result, the freedom of <code>Union</code> to choose the representative of the formed set is essential.
Better amortized time cannot be achieved within the class of separable [[pointer algorithm]]s.<ref name="WT"/>
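The structure attaining these bounds is involved; as an illustration of the basic <code>Backtrack(1)</code> idea only, a union-by-rank forest ''without'' path compression can record what each <code>Union</code> changed on an undo stack. The following is a sketch with illustrative names, and it supports <code>Find</code> only in <math>O(\log n)</math> time, not the bound above:

<syntaxhighlight lang="python">
# Rollback sketch: union by rank with an explicit history of changes.
# Path compression is deliberately omitted, because compressed pointers
# would be hard to undo in constant time.

def root(parent, x):
    while parent[x] != x:
        x = parent[x]
    return x

def union(parent, rank, history, x, y):
    x, y = root(parent, x), root(parent, y)
    if x == y:
        history.append(None)        # record a union that merged nothing
        return
    if rank[x] < rank[y]:
        x, y = y, x                 # ensure x is the higher-ranked root
    rank_grew = rank[x] == rank[y]
    if rank_grew:
        rank[x] += 1
    parent[y] = x                   # graft y's tree under x
    history.append((y, rank_grew))  # enough information to undo this step

def backtrack(parent, rank, history):
    # Undo the most recent Union in O(1) time.
    last = history.pop()
    if last is None:
        return
    y, rank_grew = last
    x = parent[y]
    parent[y] = y                   # detach the grafted subtree again
    if rank_grew:
        rank[x] -= 1
</syntaxhighlight>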
== Applications ==