{{More citations needed|date=June 2019}}
In [[constraint satisfaction]], a '''decomposition method''' translates a [[constraint satisfaction problem]] into another constraint satisfaction problem that is binary and [[directed acyclic graph|acyclic]]. Decomposition methods work by grouping variables into sets, and solving a subproblem for each set. These translations are done because solving binary acyclic problems is a [[tractable problem]].
By definition, a decomposition method produces a binary acyclic problem; such problems can be solved in time polynomial in their size. As a result, the original problem can be solved by first translating it and then solving the resulting problem; however, this algorithm is polynomial-time only if the decomposition does not increase the size superpolynomially. The ''width'' of a decomposition method is a measure of the size of the problem it produces. Originally, the width was defined as the maximal cardinality of the sets of original variables; one method, the hypertree decomposition, uses a different measure. Either way, the width of a decomposition is defined so that decompositions of width bounded by a constant do not produce excessively large problems. Instances having a decomposition of fixed width can be translated by decomposition into instances of size bounded by a polynomial in the size of the original instance.
The width of a problem is the width of its minimal-width decomposition. While decompositions of fixed width can be used to efficiently solve a problem, a bound on the width of instances does not necessarily produce a [[tractable problem|tractable]] [[structural restriction]]. Indeed, a fixed-width problem has a decomposition of fixed width, but finding it may not be polynomial. In order for a problem of fixed width to be efficiently solved by decomposition, one of its decompositions of low width has to be found efficiently. For this reason, decomposition methods and their associated width are defined in such a way that not only is solving the problem polynomial-time given a fixed-width decomposition of it, but also finding a fixed-width decomposition of a fixed-width problem is polynomial-time.
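The tractability of binary acyclic problems can be sketched as follows (a minimal illustration with hypothetical names, not any specific published algorithm): one bottom-up pass of arc consistency over the tree, pruning each parent's ___domain against its children, decides satisfiability in polynomial time.

```python
# Sketch: deciding a binary acyclic CSP by a single leaves-to-root pass.
# All names are hypothetical; domains are sets of values, the tree is
# given as a child -> parent map, and each tree edge carries a relation.

def children_first_order(parent_of):
    """Order variables so that every child precedes its parent."""
    def depth(n):
        d = 0
        while parent_of.get(n) is not None:
            n, d = parent_of[n], d + 1
        return d
    return sorted(parent_of, key=depth, reverse=True)

def solve_acyclic(domains, parent_of, allowed):
    """domains: {var: set of values}; parent_of: {var: parent or None};
    allowed: {(child, parent): set of permitted (child_val, parent_val)}."""
    for child in children_first_order(parent_of):
        parent = parent_of.get(child)
        if parent is None:
            continue
        rel = allowed[(child, parent)]
        # keep only parent values supported by some value of the child
        domains[parent] = {pv for pv in domains[parent]
                           if any((cv, pv) in rel for cv in domains[child])}
        if not domains[parent]:
            return False  # a ___domain emptied: unsatisfiable
    return True  # all domains nonempty: a solution exists
```

After this pass, a solution can be read off by one top-down pass that assigns each variable a value supported by its parent's assignment.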
==Decomposition methods==
Decomposition methods create a problem that is easy to solve from an arbitrary one. Each variable of this new problem is associated with a set of original variables; its ___domain contains tuples of values for the variables in the associated set; in particular, these are the tuples that satisfy a set of constraints over these variables. The constraints of the new problem bound the values of two new variables to be tuples that agree on the shared original variables. Three further conditions ensure that the new problem is equivalent to the old one and can be solved efficiently.
In order for the new problem to be solvable efficiently, the [[primal constraint graph|primal graph]] of the new problem is required to be acyclic. In other words, viewing the variables as vertices and the (binary) constraints as edges, the resulting graph is required to be a [[Tree (graph theory)|tree]] or a [[Forest (graph theory)|forest]].
In order for the new problem to be equivalent to the old one, each original constraint is enforced as part of the definition of the ___domain of at least one new variable. This requires that, for each constraint of the old problem, there exists a variable of the new problem such that its associated set of original variables includes the scope of the constraint, and all tuples in its ___domain satisfy the constraint.
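This covering condition can be checked mechanically. The following sketch (hypothetical names and representation) tests whether every constraint scope is contained in the variable set of some node:

```python
# Sketch: the equivalence condition requires each original constraint to
# be "covered" by a node whose set of original variables contains the
# constraint's scope. Representation is assumed, not from the article.

def covers_all_constraints(node_vars, scopes):
    """node_vars: iterable of sets of original variables, one per node;
    scopes: iterable of constraint scopes (tuples of variables)."""
    return all(any(set(scope) <= set(vs) for vs in node_vars)
               for scope in scopes)
```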
{| cellpadding=10 style="border: gray thin solid;"
|-
| [[Image:Tree-decomposition-1-corrected.svg]]
| An example constraint satisfaction problem; this problem is binary, and the constraints are represented by edges of this graph.
|-
|-
| [[Image:Tree-decomposition-4.svg]]
| Each node of the tree is made a variable. Its ___domain is the set of solutions of the partial problem, which is a set of tuples. The constraints of the new problem allow only the tuples that contain equal values of the original variables. In the figure, equality of <math>y</math> is shown: the corresponding constraint is only met by the first tuple of <math>(x,y,z)</math> and the first tuple of <math>(y,w)</math>, and by the second tuple of <math>(x,y,z)</math> and the second tuple of <math>(y,w)</math>. Moreover, the second tuple of <math>(x,y,z)</math> cannot find a compatible tuple in its left child (<math>(u,x,z)</math>). Thus, the second tuple of <math>(x,y,z)</math> should be removed.
|}
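The agreement constraints between tuple domains can be sketched as follows (the tuple values here are assumed for illustration, not read from the figure):

```python
# Sketch: domains of new variables are lists of dicts mapping original
# variables to values; two tuples are compatible exactly when they agree
# on every original variable they share.

def agrees(t1, t2):
    return all(t1[v] == t2[v] for v in set(t1) & set(t2))

xyz = [{'x': 0, 'y': 0, 'z': 1}, {'x': 1, 'y': 1, 'z': 0}]  # ___domain of (x,y,z)
yw = [{'y': 0, 'w': 1}, {'y': 1, 'w': 0}]                   # ___domain of (y,w)

# only pairs with equal y satisfy the binary constraint between the nodes
compatible = [(i, j) for i, t1 in enumerate(xyz)
              for j, t2 in enumerate(yw) if agrees(t1, t2)]
```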
[[Image:Cutset-4.svg|right|frame|Choosing the node b as the root, this is the tree similar to the ones created by the other decomposition methods]]
Such a decomposition does not fit in the scheme of the other decompositions because the result of the decomposition is not a tree, but rather a set of variables (those of the cutset) and a tree (formed by the variables not in the cutset). However, a tree like those generated by the other decomposition methods can be obtained from the tree that results from removing the cutset; this is done by choosing a root, adding all variables of the cutset to all its nodes, and the variables of each node to all its children. This results in a tree whose maximal number of variables associated with a node is equal to the size of the cutset plus two. Apart from the addition of two, this is the width of the decomposition, which is defined as the number of variables in the considered cutset.
Unfortunately, determining a minimum-size cutset is [[Feedback vertex set|an NP-hard problem]].
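The transformation described above can be sketched directly (hypothetical representation): every node of the remaining tree receives all cutset variables, and each node additionally receives its parent's variable, so no node holds more than the cutset size plus two variables.

```python
# Sketch: turning a cutset decomposition into a tree of variable sets.
# parent_of describes the tree left after removing the cutset, as a
# variable -> parent map (None for the root).

def cutset_to_decomposition(parent_of, cutset):
    node_sets = {}
    for v, p in parent_of.items():
        s = {v} | set(cutset)   # own variable plus the whole cutset
        if p is not None:
            s.add(p)            # each child also receives its parent's variable
        node_sets[v] = s
    return node_sets
```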
===Tree decomposition===
[[Tree decomposition]] is a well-known concept from graph theory. Reformulated in terms of binary constraints, a tree decomposition is a tree whose nodes are associated with sets of variables; the scope of each constraint is contained in the set of variables of some node, and the subtree of nodes associated with each variable is connected. This is the most general form of decomposition for binary constraints that follows the scheme outlined above, as the conditions imposed on the tree are only the ones necessary to guarantee the equivalence of the original and new problems.
The width of such a decomposition is the maximal number of variables associated to the same node minus one. The [[treewidth]] of a problem is the minimal width of its tree decompositions.
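These two definitions translate directly into code (a trivial sketch; the minimization is over whatever candidate decompositions are supplied, since computing the true treewidth is hard in general):

```python
# Sketch: width of a tree decomposition, and the best width among a
# given collection of decompositions of the same problem.

def decomposition_width(node_vars):
    """node_vars: iterable of sets of variables, one per tree node."""
    return max(len(vs) for vs in node_vars) - 1

def best_width(decompositions):
    """Minimal width over the supplied candidate decompositions."""
    return min(decomposition_width(d) for d in decompositions)
```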
===Biconnected components===
The biconnected decomposition of an arbitrary constraint satisfaction problem is the biconnected decomposition of its primal graph.
===Tree decomposition===
A tree decomposition of an arbitrary constraint satisfaction problem is a tree decomposition of its primal graph.
===Cycle hypercutset===
This is the same
===Hinge decomposition===
The definition of hinge is as follows. Let <math>H</math> be a set of hyperedges. A path
Hinges are defined for reduced hypergraphs, which are hypergraphs where no hyperedge is contained in another. A set of at least two edges <math>H</math> is a hinge if, for every connected component <math>F</math>
The maximal number of constraints of a node is the same for all hinge decompositions of the same problem. This number is called the ''degree of cyclicity'' of the problem or its hingewidth.
|}
This translation requires the maximal cliques of a chordal graph to be identified. However, this can be done in polynomial time.
The problem that results from this process has a join tree, and such a join tree can be obtained by using the same ordering of variables again. Proceeding from the last node to the first, every constraint is connected with the preceding constraint that shares more variables with it. Join-tree clustering can be seen as a decomposition method in which:
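The attachment rule above can be sketched as follows (hypothetical representation; each clique is attached to an earlier clique sharing the largest number of variables with it):

```python
# Sketch: building a join tree from a list of maximal cliques in the
# ordering produced by join-tree clustering.

def build_join_tree(cliques):
    """cliques: list of sets of variables; returns {index: parent index},
    with None for the first clique."""
    parent = {0: None}
    for i in range(1, len(cliques)):
        # attach clique i to the earlier clique with the largest overlap
        parent[i] = max(range(i), key=lambda j: len(cliques[i] & cliques[j]))
    return parent
```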
===Query decomposition===
Query decomposition associates a set of variables and a set of constraints to each node of a tree; each constraint is associated to some node, and the subtree induced by the nodes associated to a given variable or constraint is connected. More precisely, for each variable, the subtree of nodes associated to this variable or with a constraint having this variable in its scope is connected. The width of a decomposition is the maximal combined number of variables and constraints associated with a node.
Associating constraints with nodes possibly reduces the width of decompositions and of instances. On the other hand, this definition of width still allows problems of fixed width to be solved in polynomial time if the decomposition is given. In this case, the ___domain of a new variable is obtained by solving a subproblem which can be polynomially large but has a fixed number of constraints. As a result, this ___domain is guaranteed to be of polynomial size; the constraints of the new problem, being equalities of two domains, are polynomial in size as well.
In a tree, every edge breaks the graph in two parts. The constraint passed along an edge tells how the part of the originating end of the edge affects the variables of the destination node. In other words, a constraint passed from node <math>i</math> to node <math>j</math> tells how the nodes "on the side of <math>i</math>" constrain the variables of node <math>j</math>.
If the variables of these two nodes are <math>X_i</math> and <math>X_j</math>, the nodes on the
The algorithm proceeds from the leaves of the tree. In each node, the summaries of its children (if any) are collected. These summaries and the constraint of the node are used to generate the summary of the node for its parent. When the root is reached, the process is inverted: the summary of each node for each of its children is generated and sent. When all leaves are reached, the algorithm stops.
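One step of this pass can be sketched as follows (assumed representation: node domains are lists of dicts). The summary sent along an edge is the projection of the sender's surviving tuples onto the shared variables, and the receiver keeps only the tuples consistent with it:

```python
# Sketch of a single message step in the two-pass tree algorithm.

def summarize(___domain, shared):
    # project surviving tuples onto the variables shared with the neighbour
    shared = sorted(shared)
    return {tuple(t[v] for v in shared) for t in ___domain}

def apply_summary(___domain, summary, shared):
    # keep only tuples whose projection appears in the received summary
    shared = sorted(shared)
    return [t for t in ___domain if tuple(t[v] for v in shared) in summary]

# Upward step: the leaf (y,w) summarizes onto its shared variable y for
# its parent (x,y,z); the downward pass later sends the reverse summary.
leaf = [{'y': 0, 'w': 1}]
root = [{'x': 0, 'y': 0, 'z': 1}, {'x': 1, 'y': 1, 'z': 0}]
root = apply_summary(root, summarize(leaf, {'y'}), {'y'})
```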
A problem with this algorithm is that the constraints passed between nodes can be of size exponential in the size of the separator. The memory required for storing these constraints can be decreased by using a tree decomposition with small separators. Such decomposition trees may however have width (number of variables in each node) larger than optimal.
For a given decomposition tree, a fixed maximal allowed separator size can be enforced by joining all pairs of nodes whose separator is larger than this size. Merging two nodes usually produces a node with an associated set of variables larger than those of the two nodes. This may increase the width of the tree. However, this merging does not change the separators of the tree other than removing the separator between the two merged nodes.
The latter is a consequence of acyclicity: two joined nodes cannot be joined to the same other node. If <math>n_1</math> and <math>n_2</math> are two nodes to be merged and <math>N_1</math> and <math>N_2</math> are the sets of nodes joined to them, then <math>N_1 \cap N_2=\emptyset</math>, as otherwise there would be a cycle in the tree. The node obtained by merging <math>n_1</math> and <math>n_2</math> will be joined to each of the nodes of <math>N_1 \cup N_2</math>. Consequently, the separators of this merged node are exactly the separators of the two original nodes.
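This merging step can be sketched as follows (hypothetical representation): the merged node takes the union of the two variable sets and inherits every other edge, and by acyclicity the two nodes share no other neighbour, so all remaining separators are unchanged.

```python
# Sketch: merging two adjacent nodes of a decomposition tree to remove
# the (over-large) separator between them.

def merge_adjacent(node_vars, edges, a, b):
    """node_vars: {node: set of variables}; edges: set of frozensets of
    node pairs. Node a absorbs node b."""
    node_vars[a] = node_vars[a] | node_vars.pop(b)
    merged_edges = {frozenset(a if n == b else n for n in e)
                    for e in edges if e != frozenset({a, b})}
    return node_vars, merged_edges
```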
An early structural restriction (that later evolved into that based on induced width) is based on the width of the primal graph of the problem. Given an ordering of the nodes of the graph, the width of a node is the number of nodes that join it and precede it in the order. However, restricting only the width does not lead to a tractable restriction: even restricting this width to 4, establishing satisfiability remains [[NP-complete]]. Tractability is obtained by restricting the relations; in particular, if a problem has width <math>k</math> and is strongly <math>k</math>-consistent, it is efficiently solvable. This is a restriction that is neither structural nor relational, as it depends on both the scopes and the relations of the constraints.
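The width of an ordering, as just defined, is easy to compute (a direct sketch of the definition):

```python
# Sketch: the width of a node under an ordering is the number of its
# neighbours that precede it; the width of the ordering is the maximum
# of this quantity over all nodes.

def ordering_width(order, neighbours):
    """order: list of nodes; neighbours: {node: iterable of adjacent nodes}."""
    pos = {v: i for i, v in enumerate(order)}
    return max(sum(1 for u in neighbours[v] if pos[u] < pos[v])
               for v in order)
```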
==See also==
*[[Tree decomposition]] in graph theory
*[[Junction tree algorithm]], used in machine learning to perform marginalization in general graphs
==Online Resources==
Here are some links to online resources for tree/hypertree decomposition in general.
# [http://www.cs.uu.nl/
# [https://web.archive.org/web/20060512022851/http://www.ics.uci.edu/~vgogate/ A C++ implementation] used in the paper "A Complete [[Anytime algorithm|Anytime Algorithm]] for Treewidth" by Vibhav Gogate and Rina Dechter, UAI 2004. [https://web.archive.org/web/20060512022851/http://www.ics.uci.edu/~vgogate/ The link] is to the author's homepage, where both Linux source and a Windows executable are distributed.
# [http://www.dbai.tuwien.ac.at/proj/hypertree/downloads.html An implementation of Hypertree Decomposition], using several heuristics.
# [http://carlit.toulouse.inra.fr/cgi-bin/awki.cgi/ToolBarIntro Toolbar tool has implementation of some tree decomposition heuristics]
==References==
*{{cite book
| first=Rina
| last=Dechter
|authorlink = Rina Dechter
| title=Constraint Processing
| publisher=Morgan Kaufmann
| year=2003
| url=http://www.ics.uci.edu/~dechter/books/index.html
}}
*{{cite book
| first=Rod
| last=Downey
|author1link = Rod Downey
|first2=Michael
|last2 = Fellows
| title=Parameterized complexity
| publisher=Springer
| year=1997
}}
*{{cite conference
| first=Georg
| last=Gottlob
|author1link = Georg Gottlob
|first2=Nicola|last2 = Leone
|author2link = Nicola Leone
|first3=Francesco| last3 = Scarcello
| title=Hypertree Decompositions: A Survey
| book-title=[[International Symposium on Mathematical Foundations of Computer Science|MFCS]] 2001
| pages=37–57
| year=2001
| url=http://www.springerlink.com/(rqc54x55rqwetq55eco03ymp)/app/home/contribution.asp?referrer=parent&backto=issue,5,61;journal,1765,3346;linkingpublicationresults,1:105633,1
}}{{dead link|date=January 2025|bot=medic}}{{cbignore|bot=medic}}
*{{cite journal
| first=Georg
| last=Gottlob
|first2=Nicola |last2=Leone |first3=Francesco |last3=Scarcello
| title=A comparison of structural CSP decomposition methods
| journal=[[Artificial Intelligence (journal)|Artificial Intelligence]]
| volume=124
| issue=2
| pages=243–282
| year=2000
| doi=10.1016/S0004-3702(00)00078-3
}}
[[Category:Constraint programming]]