{{Short description|Algorithm for maximum cardinality matching}}
{{Infobox algorithm|image = |data = [[Graph (data structure)|Graph]]|time = <math>O(E \sqrt V)</math>|class = Graph algorithm|space = <math>O(V)</math>}}
In [[computer science]], the '''Hopcroft–Karp algorithm''' (sometimes more accurately called the '''Hopcroft–Karp–Karzanov algorithm''')<ref>{{harvtxt|Gabow|2017}}; {{harvtxt|Annamalai|2018}}</ref> is an [[algorithm]] that takes as input a [[bipartite graph]] and produces as output a [[maximum cardinality matching]] – a set of as many edges as possible with the property that no two edges share an endpoint. It runs in <math>O(|E|\sqrt{|V|})</math> time in the [[worst case analysis|worst case]], where <math>E</math> is the set of edges in the graph, <math>V</math> is the set of vertices of the graph, and it is assumed that <math>|E|=\Omega(|V|)</math>. In the case of [[dense graph]]s the time bound becomes <math>O(|V|^{2.5})</math>, and for sparse [[random graph]]s it runs in near-linear (in <math>|E|</math>) time{{reference needed|date=April 2020}}.
The algorithm was discovered by {{harvtxt|Hopcroft|Karp|1973}} and independently by {{harvtxt|Karzanov|1973}}.
The Hopcroft–Karp algorithm can be seen as a special case of [[Dinic's algorithm]] for the [[maximum flow problem]].
==Augmenting paths==
Conversely, suppose that a matching <math>M</math> is not optimal, and let <math>P</math> be the symmetric difference <math>M \oplus M^*</math> where <math>M^*</math> is an optimal matching. Because <math>M</math> and <math>M^*</math> are both matchings, every vertex has degree at most 2 in <math>P</math>. So <math>P</math> must form a collection of disjoint cycles, of paths with an equal number of matched and unmatched edges in <math>M</math>, of augmenting paths for <math>M</math>, and of augmenting paths for <math>M^*</math>; but the latter is impossible because <math>M^*</math> is optimal. Now, the cycles and the paths with equal numbers of matched and unmatched edges do not contribute to the difference in size between <math>M</math> and <math>M^*</math>, so this difference is equal to the number of augmenting paths for <math>M</math> in <math>P</math>. Thus, whenever there exists a matching <math>M^*</math> larger than the current matching <math>M</math>, there must also exist an augmenting path. If no augmenting path can be found, an algorithm may safely terminate, since in this case <math>M</math> must be optimal.
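As a small illustration of the augmentation step, the following Python sketch (the set-of-edges representation and the names used are assumptions made for illustration, not part of the algorithm's specification) replaces a matching by its symmetric difference with an augmenting path, producing a matching with one more edge:

<syntaxhighlight lang="python">
def augment(matching, path):
    """Return the symmetric difference of a matching with an augmenting path.

    matching is a set of edges, each stored as a frozenset of its two endpoints;
    path is the vertex sequence of an augmenting path, so its first and last
    vertices are free and its edges alternate between unmatched and matched.
    """
    path_edges = {frozenset(edge) for edge in zip(path, path[1:])}
    return matching ^ path_edges  # removes the matched edges of the path, adds the unmatched ones


# Example: matching {b-c} and augmenting path a-b-c-d give the larger matching {a-b, c-d}.
matching = {frozenset({"b", "c"})}
print(augment(matching, ["a", "b", "c", "d"]))
</syntaxhighlight>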
An augmenting path in a matching problem is closely related to the [[augmenting path]]s arising in [[maximum flow problem]]s, paths along which one may increase the amount of flow between the terminals of the flow. It is possible to transform the bipartite matching problem into a maximum flow instance, such that the alternating paths of the matching problem become augmenting paths of the flow problem. It suffices to insert two vertices, source and sink, and insert edges of unit capacity from the source to each vertex in <math>U</math>, and from each vertex in <math>V</math> to the sink; and let edges from <math>U</math> to <math>V</math> have unit capacity.<ref>{{harvtxt|Ahuja|Magnanti|Orlin|1993}}, section 12.3, bipartite cardinality matching problem, pp. 469–470.</ref> A generalization of the technique used in the Hopcroft–Karp algorithm to find maximum flow in an arbitrary network is known as [[Dinic's algorithm]].
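The construction can be stated compactly in code. The following Python sketch (the labels <code>source</code> and <code>sink</code> and the dictionary representation of capacities are assumptions made for illustration) builds the unit-capacity network whose maximum flow value equals the size of a maximum matching:

<syntaxhighlight lang="python">
def matching_flow_network(U, V, edges):
    """Build the unit-capacity flow network for a bipartite matching instance.

    U and V are the two sides of the bipartition and edges contains pairs (u, v)
    with u in U and v in V.  Returns a source, a sink and a capacity dictionary
    keyed by directed edges (a hypothetical representation; any maximum-flow
    routine could then be run on it).
    """
    source, sink = "source", "sink"   # assumed not to collide with vertex names
    capacity = {}
    for u in U:
        capacity[(source, u)] = 1     # unit capacity from the source to every vertex of U
    for u, v in edges:
        capacity[(u, v)] = 1          # original edges, directed from U to V, with unit capacity
    for v in V:
        capacity[(v, sink)] = 1       # unit capacity from every vertex of V to the sink
    return source, sink, capacity
</syntaxhighlight>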
==Algorithm==
* A [[breadth-first search]] partitions the vertices of the graph into layers. The free vertices in <math>U</math> are used as the starting vertices of this search and form the first layer of the partitioning. At the first level of the search, there are only unmatched edges, since the free vertices in <math>U</math> are by definition not adjacent to any matched edges. At subsequent levels of the search, the traversed edges are required to alternate between matched and unmatched. That is, when searching for successors from a vertex in <math>U</math>, only unmatched edges may be traversed, while from a vertex in <math>V</math> only matched edges may be traversed. The search terminates at the first layer <math>k</math> where one or more free vertices in <math>V</math> are reached (see the sketch after this list).
* All free vertices in <math>V</math> at layer <math>k</math> are collected into a set <math>F</math>. That is, a vertex <math>v</math> is put into <math>F</math> if and only if it ends a shortest augmenting path.
* The algorithm finds a maximal set of ''vertex disjoint'' augmenting paths of length <math>k</math>. (''Maximal'' means that no more such paths can be added. This is different from finding the ''maximum'' number of such paths, which would be harder to do. Fortunately, it is sufficient here to find a maximal set of paths.) This set may be computed by a [[depth-first search]] that follows the layering constructed by the breadth-first search and removes the vertices of each augmenting path from further consideration as soon as the path is found, so that the paths remain vertex disjoint.
* Every one of the paths found in this way is used to enlarge <math>M</math>.
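A minimal sketch of the layering step is shown below, written in Python; the adjacency-list representation, the use of <code>None</code> for free vertices, and the function name are assumptions made for illustration rather than part of the algorithm's original formulation. The returned layer numbers play the role of the distances computed by the breadth-first search described above.

<syntaxhighlight lang="python">
import collections


def bfs_layers(adj, pair_u, pair_v):
    """Layer the vertices of U for one phase (illustrative sketch).

    adj maps each vertex of U to its neighbours in V; pair_u and pair_v hold the
    current matching, with None marking free vertices.  Returns the layer number
    of each reached vertex of U together with the number of layers in a shortest
    augmenting path, or None if no augmenting path exists.
    """
    layer = {u: 0 for u in adj if pair_u[u] is None}   # free vertices of U form the first layer
    queue = collections.deque(layer)
    shortest = None
    while queue:
        u = queue.popleft()
        if shortest is not None and layer[u] >= shortest:
            break                                      # the layer ending at free vertices of V is complete
        for v in adj[u]:                               # traverse an unmatched edge from U to V
            if pair_v[v] is None:
                if shortest is None:
                    shortest = layer[u] + 1            # a free vertex of V ends a shortest augmenting path
            elif pair_v[v] not in layer:
                layer[pair_v[v]] = layer[u] + 1        # traverse the matched edge back to U
                queue.append(pair_v[v])
    return layer, shortest
</syntaxhighlight>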
==Analysis==
Each phase consists of a single breadth-first search and a single depth-first search, so a single phase may be implemented in <math>O(|E|)</math> time.
Therefore, the first <math>\sqrt{|V|}</math> phases, in a graph with <math>|V|</math> vertices and <math>|E|</math> edges, take time <math>O(|E|\sqrt{|V|})</math>.
After the first <math>\sqrt{|V|}</math> phases, every augmenting path for the current matching has length at least <math>\sqrt{|V|}</math>. The symmetric difference of the current matching with a maximum matching decomposes into vertex-disjoint augmenting paths of this length, so it contains at most <math>\sqrt{|V|}</math> of them, and at most <math>\sqrt{|V|}</math> additional phases are needed, each enlarging the matching by at least one edge. Since the algorithm performs a total of at most <math>2\sqrt{|V|}</math> phases, it takes a total time of <math>O(|E|\sqrt{|V|})</math> in the worst case.
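Written as a single calculation, the worst-case bound is simply the number of phases multiplied by the cost of one phase:
<math display="block">\underbrace{2\sqrt{|V|}}_{\text{phases}} \cdot \underbrace{O(|E|)}_{\text{time per phase}} = O(|E|\sqrt{|V|}).</math>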
In many instances, however, the time taken by the algorithm may be even faster than this worst case analysis indicates. For instance, in the [[average case analysis|average case]] for [[sparse graph|sparse]] bipartite [[random graph]]s, {{harvtxt|Bast|Mehlhorn|Schäfer|Tamaki|2006}} showed that, with high probability, all non-maximum matchings have augmenting paths of logarithmic length; as a consequence, on these graphs the Hopcroft–Karp algorithm terminates within <math>O(\log |V|)</math> phases and <math>O(|E|\log |V|)</math> time.
==Comparison with other bipartite matching algorithms==
=== Explanation ===
Let the vertices of our graph be partitioned into <code>U</code> and <code>V</code>, and consider a partial matching, as indicated by the <code>Pair_U</code> and <code>Pair_V</code> tables that contain the one vertex to which each vertex of <code>U</code> and of <code>V</code> is matched, or <code>NIL</code> for unmatched vertices. The key idea is to add two dummy vertices, one on each side of the graph: <code>uDummy</code> connected to all unmatched vertices in <code>U</code> and <code>vDummy</code> connected to all unmatched vertices in <code>V</code>. Now, if we run a [[breadth-first search]] (BFS) from <code>uDummy</code> to <code>vDummy</code>, then we can get the paths of minimal length that connect currently unmatched vertices in <code>U</code> to currently unmatched vertices in <code>V</code>. Note that, as the graph is bipartite, these paths always alternate between vertices in <code>U</code> and vertices in <code>V</code>, and we require in our BFS that when going from <code>V</code> to <code>U</code>, we always select a matched edge. If we reach an unmatched vertex of <code>V</code>, then we end at <code>vDummy</code> and the search for paths in the BFS terminates. To summarize, the BFS starts at unmatched vertices in <code>U</code>, goes to all their neighbors in <code>V</code>; if all are matched, then it goes back to the vertices in <code>U</code> to which all these vertices are matched (and which were not visited before), then it goes to all the neighbors of these vertices, and so on, until one of the vertices reached in <code>V</code> is unmatched.
Observe in particular that BFS marks the unmatched nodes of <code>U</code> with distance 0, then increments the distance every time it comes back to <code>U</code>. This guarantees that the paths considered in the BFS are of minimal length to connect unmatched vertices of <code>U</code> to unmatched vertices of <code>V</code> while always going back from <code>V</code> to <code>U</code> on edges that are currently part of the matching. In particular, the special <code>NIL</code> vertex, which corresponds to <code>vDummy</code>, then gets assigned a finite distance, so the BFS function returns true if and only if some path has been found. If no path has been found, then there are no augmenting paths left and the matching is maximum.
If BFS returns true, then we can go ahead and update the pairing for vertices on the minimal-length paths found from <code>U</code> to <code>V</code>: we do so using a [[depth-first search]] (DFS). Note that each vertex in <code>V</code> on such a path, except for the last one, is currently matched, so we can explore with the DFS while making sure that the paths that we follow correspond to the distances computed in the BFS. We update along every such path by removing from the matching all edges of the path that are currently in the matching, and adding to the matching all edges of the path that are currently not in the matching: since this is an augmenting path (the first and last edges of the path were not part of the matching, and the path alternates between matched and unmatched edges), this increases the number of edges in the matching. This is the same as replacing the current matching by the symmetric difference between the current matching and the entire path.
Note that the code ensures that all augmenting paths that we consider are vertex disjoint. Indeed, after taking the symmetric difference for a path, none of its vertices can be considered again in the DFS, because <code>Dist[Pair_V[v]]</code> will no longer be equal to <code>Dist[u] + 1</code> (it will be exactly <code>Dist[u]</code>).
Also observe that the DFS does not visit the same vertex multiple times. This is thanks to the following lines:
 Dist[u] = ∞
 return false
When no shortest augmenting path can be found from a vertex <code>u</code>, the DFS marks <code>u</code> by setting <code>Dist[u]</code> to infinity, so that it is not visited again.
One last observation is that we actually don't need <code>uDummy</code>: its role is simply to put all unmatched vertices of <code>U</code> in the queue when we start the BFS. As for <code>vDummy</code>, it is denoted as <code>NIL</code> in the pseudocode above.
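For concreteness, here is a Python sketch that follows the structure just described, using dictionaries <code>pair_u</code>, <code>pair_v</code> and <code>dist</code> that correspond to the <code>Pair_U</code>, <code>Pair_V</code> and <code>Dist</code> tables, with <code>None</code> playing the role of <code>NIL</code>; the adjacency-list representation, the function names and the small example graph are assumptions made for illustration rather than part of the article's pseudocode.

<syntaxhighlight lang="python">
import collections

INF = float("inf")


def hopcroft_karp(adj):
    """Maximum bipartite matching; adj maps each vertex of U to its neighbours in V."""
    pair_u = {u: None for u in adj}                            # corresponds to Pair_U; None plays the role of NIL
    pair_v = {v: None for nbrs in adj.values() for v in nbrs}  # corresponds to Pair_V
    dist = {}                                                  # corresponds to Dist

    def bfs():
        # Layer the vertices of U; dist[None] records the length of a shortest augmenting path.
        queue = collections.deque()
        for u in adj:
            if pair_u[u] is None:                    # free vertices of U form layer 0
                dist[u] = 0
                queue.append(u)
            else:
                dist[u] = INF
        dist[None] = INF
        while queue:
            u = queue.popleft()
            if dist[u] < dist[None]:
                for v in adj[u]:
                    w = pair_v[v]
                    if w is None:                    # reached a free vertex of V
                        dist[None] = min(dist[None], dist[u] + 1)
                    elif dist[w] == INF:             # follow the matched edge back to U
                        dist[w] = dist[u] + 1
                        queue.append(w)
        return dist[None] != INF                     # true iff an augmenting path exists

    def dfs(u):
        # Try to extend a shortest augmenting path from u, respecting the BFS layers.
        for v in adj[u]:
            w = pair_v[v]
            if (w is None and dist[u] + 1 == dist[None]) or \
               (w is not None and dist[w] == dist[u] + 1 and dfs(w)):
                pair_u[u], pair_v[v] = v, u          # flip the edges of the path found so far
                return True
        dist[u] = INF                                # no shortest path through u: never revisit it
        return False

    matching = 0
    while bfs():                                     # one phase per iteration of this loop
        for u in adj:
            if pair_u[u] is None and dfs(u):
                matching += 1
    return matching, pair_u, pair_v


# Example: a small bipartite graph that has a perfect matching of size 3.
graph = {"u1": ["v1", "v2"], "u2": ["v1"], "u3": ["v2", "v3"]}
print(hopcroft_karp(graph)[0])  # prints 3
</syntaxhighlight>

The assignment <code>dist[u] = INF</code> at the end of <code>dfs</code> is exactly the marking discussed above, and the <code>dist[u] < dist[None]</code> test in <code>bfs</code> stops the search at the first layer containing a free vertex of <code>V</code>.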
== See also ==
==References==
{{refbegin}}
*{{citation|first1=Ravindra K.|last1=Ahuja|author1-link=Ravindra K. Ahuja|first2=Thomas L.|last2=Magnanti|author2-link=Thomas L. Magnanti|first3=James B.|last3=Orlin|author3-link=James B. Orlin|title=Network Flows: Theory, Algorithms and Applications|publisher=Prentice-Hall|year=1993}}.
*{{citation|first1=H.|last1=Alt|first2=N.|last2=Blum|first3=K.|last3=Mehlhorn|author3-link=Kurt Mehlhorn|first4=M.|last4=Paul|title=Computing a maximum cardinality matching in a bipartite graph in time <math>\scriptstyle O\left(n^{1.5}\sqrt{\frac{m}{\log n}}\right)</math>|journal=Information Processing Letters|volume=37|issue=4|pages=237–240|year=1991|doi=10.1016/0020-0190(91)90195-N}}.
*{{citation|last=Annamalai|first=Chidambaram|doi=10.1007/s00493-017-3567-2|issue=6|journal=Combinatorica|mr=3910876|pages=1285–1307|title=Finding perfect matchings in bipartite hypergraphs|volume=38|year=2018|arxiv=1509.07007|s2cid=1997334}}
*{{citation
| last1 = Bast | first1 = Holger
| last2 = Mehlhorn | first2 = Kurt
| last3 = Schäfer | first3 = Guido
| last4 = Tamaki | first4 = Hisao
| doi = 10.1007/s00224-005-1254-y
| issue = 1
| journal = Theory of Computing Systems
| mr = 2189556
| pages = 3–14
| title = Matching algorithms are fast in sparse random graphs
| volume = 39
| year = 2006| s2cid = 9321036
| citeseerx = 10.1.1.395.6643
}}
*{{citation|first1=S. Frank|last1=Chang|first2=S. Thomas|last2=McCormick|title=A faster implementation of a bipartite cardinality matching algorithm|publisher=Tech. Rep. 90-MSC-005, Faculty of Commerce and Business Administration, Univ. of British Columbia|year=1990}}. As cited by {{harvtxt|Setubal|1996}}.
*{{citation|first=Kenneth|last=Darby-Dowman|title=The exploitation of sparsity in large scale linear programming problems – Data structures and restructuring algorithms|publisher=Ph.D. thesis, Brunel University |year=1980}}. As cited by {{harvtxt|Setubal|1996}}.
*{{citation|last=Dinitz|first=Yefim|editor1-last=Goldreich|editor1-first=Oded|editor1-link=Oded Goldreich |editor2-last=Rosenberg|editor2-first=Arnold L.|editor2-link=Arnold L. Rosenberg|editor3-last=Selman |editor3-first=Alan L. |editor3-link=Alan Selman|contribution=Dinitz' Algorithm: The Original Version and Even's Version|url=https://www.cs.bgu.ac.il/~dinitz/Papers/Dinitz_alg.pdf|doi=10.1007/11685654_10|___location=Berlin and Heidelberg|pages=218–240|publisher=Springer |series=Lecture Notes in Computer Science |title=Theoretical Computer Science: Essays in Memory of Shimon Even|volume=3895|year=2006|isbn=978-3-540-32880-3 }}.
*{{citation|last=Gabow|first=Harold N.|author-link=Harold N. Gabow|doi=10.3233/FI-2017-1555|issue=1–4|journal=Fundamenta Informaticae|mr=3690573|pages=109–130|title=The weighted matching approach to maximum cardinality matching|volume=154|year=2017|arxiv=1703.03998|s2cid=386509}}
*{{citation|first1=Harold N.|last1=Gabow|author1-link=Harold N. Gabow|first2=Robert E.|last2=Tarjan|author2-link=Robert Tarjan|title=Faster scaling algorithms for general graph matching problems|journal=Journal of the ACM|volume=38|issue=4|year=1991|pages=815–853|doi=10.1145/115234.115366|s2cid=18350108|doi-access=free}}.
*{{citation|first1=John E.|last1=Hopcroft|author1-link=John Hopcroft|first2=Richard M.|last2=Karp|author2-link=Richard Karp|title=An ''n''<sup>5/2</sup> algorithm for maximum matchings in bipartite graphs|journal=SIAM Journal on Computing|volume=2|issue=4|pages=225–231|year=1973|doi=10.1137/0202019}}. Previously announced at the 12th Annual Symposium on Switching and Automata Theory, 1971.
*{{citation|first=A. V.|last=Karzanov|authorlink=Alexander V. Karzanov|title=An exact estimate of an algorithm for finding a maximum flow, applied to the problem on representatives|year=1973}}.
*{{citation|first=Rajeev|last=Motwani|authorlink=Rajeev Motwani|title=Average-case analysis of algorithms for matchings and related problems|year=1994|journal=Journal of the ACM|volume=41|issue=6|pages=1329–1356|doi=10.1145/195613.195663|s2cid=2968208|doi-access=free}}.
*{{citation|last=Setubal|first=João C.|contribution=New experimental results for bipartite matching |title=Proc. Netflow93|publisher=Dept. of Informatics, Univ. of Pisa|pages=211–216|year=1993}}. As cited by {{harvtxt|Setubal|1996}}.
*{{citation|last=Setubal|first=João C.|title=Sequential and parallel experimental results with bipartite matching algorithms|publisher=Tech. Rep. IC-96-09, Inst. of Computing, Univ. of Campinas|year=1996 |citeseerx=10.1.1.48.3539}}.
*{{Cite book|last=Tarjan|first=Robert Endre|title=Data Structures and Network Algorithms|year=1983 |publisher=Society for Industrial and Applied Mathematics|isbn=978-0-89871-187-5|series=CBMS-NSF Regional Conference Series in Applied Mathematics|doi=10.1137/1.9781611970265}}
*{{citation|last=Vazirani|first=Vijay|title=An Improved Definition of Blossoms and a Simpler Proof of the MV Matching Algorithm|publisher= CoRR abs/1210.4594|year=2012|arxiv=1210.4594|bibcode=2012arXiv1210.4594V}}.
{{refend}}
{{DEFAULTSORT:Hopcroft-Karp algorithm}}