The '''dual problem''' is a reformulation of a [[constraint satisfaction problem]] expressing each constraint of the original problem as a variable. Dual problems only contain [[binary constraint]]s, and are therefore solvable by [[algorithm]]s tailored for such problems. The '''join graphs''' and '''join trees''' of a constraint satisfaction problem are [[Graph (graph theory)|graph]]s representing its dual problem or a problem obtained from the dual problem by removing some redundant constraints. A '''tree decomposition''' of a constraint satisfaction problem is a translation of a kind that generates problems whose representing graphs are trees.
==The dual problem==
The dual problem of a constraint satisfaction problem contains a variable for each constraint of the original problem. Its domains and constraints are built so as to enforce a sort of equivalence to the original problem. In particular, the ___domain of a variable of the dual problem contains one element for each [[tuple]] satisfying the corresponding original constraint. This way, a dual variable can take a value if and only if the corresponding original constraint is satisfied by the corresponding tuple.
The constraints of the dual problem forbid two dual variables to take values that correspond to two incompatible tuples. Without these constraints, one dual variable may take the value corresponding to the tuple <math>x=1,y=2</math> while another dual variable takes the value corresponding to <math>y=3,z=1</math>, which assigns a different value to <math>y</math>.
In the dual problem, all constraints are binary: each of them forces two values, which are tuples, to agree on one or more original variables.
The ''dual graph'' is a representation of how variables are constrained in the dual problem. More precisely, the [[dual graph]] contains a node for each dual variable and an edge for every constraint between two dual variables. In addition, the edge between two dual variables is labeled by the original variables that are enforced to be equal between these two dual variables.
The dual graph can be built directly from the original problem: it contains a vertex for each constraint, and an edge between every two constraints sharing variables; such an edge is labeled by these shared variables.
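The construction of the dual problem and its dual graph can be illustrated with a small Python sketch. The representation of a constraint as a pair of a scope and a relation is an assumption made only for illustration:

<syntaxhighlight lang="python">
from itertools import combinations

# Assumed representation: each original constraint is a pair (scope, relation),
# where scope is a tuple of variable names and relation is the set of value
# tuples that satisfy the constraint.
constraints = {
    "C1": (("x", "y"), {(1, 2), (2, 2)}),
    "C2": (("y", "z"), {(2, 1), (3, 1)}),
    "C3": (("x", "z"), {(1, 1), (2, 1)}),
}

# Dual variables: one per original constraint; the ___domain of a dual variable
# is the set of tuples satisfying the corresponding constraint.
dual_domains = {name: relation for name, (scope, relation) in constraints.items()}

# A dual (binary) constraint between two dual variables is satisfied exactly
# when their two tuples agree on the shared original variables.
def agree(scope1, tuple1, scope2, tuple2, shared):
    return all(tuple1[scope1.index(v)] == tuple2[scope2.index(v)] for v in shared)

# Dual graph: an edge between every two constraints sharing original variables,
# labelled by the shared variables.
dual_edges = {}
for (n1, (s1, _)), (n2, (s2, _)) in combinations(constraints.items(), 2):
    shared = sorted(set(s1) & set(s2))
    if shared:
        dual_edges[(n1, n2)] = shared

print(dual_edges)  # {('C1', 'C2'): ['y'], ('C1', 'C3'): ['x'], ('C2', 'C3'): ['z']}
print(agree(("x", "y"), (1, 2), ("y", "z"), (2, 1), ["y"]))  # True: both assign y=2
</syntaxhighlight>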
==Join graphs and join trees==
In the dual graph, some constraints may be unnecessary. Indeed, dual constraints enforce equality of original variables, and some constraints may be redundant because of the transitivity of equality. For example, if <math>C_2</math> and <math>C_1</math> are joined by an edge whose label contains <math>y</math>, and so are <math>C_1</math> and <math>C_3</math>, equality of <math>y</math> in all three dual variables is guaranteed. As a result, a dual constraint between <math>C_2</math> and <math>C_3</math> enforcing equality of <math>y</math> is not necessary, and could be removed if present.
A graph obtained from the dual graph by removing some redundant edges is called a ''join graph''. If it is a tree, it is called a ''join tree''. The dual problem can be solved from a join graph, since all removed edges are redundant. In turn, the problem can be solved efficiently if that join graph is a tree, using algorithms tailored for acyclic constraint satisfaction problems.
Finding a join tree, if one exists, can be done by exploiting the following property: if a dual graph has a join tree, then the maximal-weight [[spanning tree (mathematics)|spanning tree]]s of the graph are all join trees, where edges are weighted by the number of variables the corresponding constraints enforce to be equal. An algorithm for finding a join tree, if any, proceeds as follows. In the first step, edges are assigned weights: if two nodes represent constraints that share <math>n</math> variables, the edge joining them is assigned weight <math>n</math>. In the second step, a maximal-weight spanning tree is searched for. Once one is found, it is checked whether it enforces the required equality of variables. If this is the case, this spanning tree is a join tree.
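A minimal Python sketch of this procedure follows; the dictionary mapping each constraint name to its set of variables is a hypothetical input, and Kruskal's algorithm on edges of decreasing weight is used to obtain a maximal-weight spanning tree:

<syntaxhighlight lang="python">
# Hypothetical input: the set of original variables in the scope of each constraint.
scopes = {"C1": {"x", "y"}, "C2": {"y", "z"}, "C3": {"x", "y", "z"}}

def join_tree(scopes):
    """Return a join tree as a set of edges if one exists, otherwise None."""
    names = list(scopes)
    # Step 1: weight each edge of the dual graph by the number of shared variables.
    edges = sorted(((len(scopes[a] & scopes[b]), a, b)
                    for i, a in enumerate(names) for b in names[i + 1:]),
                   reverse=True)
    # Step 2: Kruskal's algorithm on edges of decreasing weight yields a
    # maximal-weight spanning tree.
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    tree = set()
    for weight, a, b in edges:
        if weight > 0 and find(a) != find(b):
            parent[find(a)] = find(b)
            tree.add((a, b))
    # Step 3: check that the tree enforces the required equalities, i.e. that
    # the constraints containing any given variable form a connected subtree.
    for v in set().union(*scopes.values()):
        holders = {n for n in names if v in scopes[n]}
        start = next(iter(holders))
        reached, frontier = {start}, [start]
        while frontier:
            n = frontier.pop()
            for a, b in tree:
                other = b if a == n else a if b == n else None
                if other in holders and other not in reached:
                    reached.add(other)
                    frontier.append(other)
        if reached != holders:
            return None
    return tree

print(join_tree(scopes))  # a tree with the two edges ('C2', 'C3') and ('C1', 'C3')
</syntaxhighlight>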
Another method for finding out whether a constraint satisfaction problem has a join tree uses the primal graph of the problem, rather than the dual graph. The ''primal graph'' of a constraint satisfaction problem is a graph whose nodes are problem variables and whose edges represent the presence of two variables in the same constraint. A join tree for the problem exists if and only if:
# the primal graph is [[Chordal graph|chordal]];
# the variables of every [[maximal clique]] of the primal graph are the scope of a constraint and vice versa; this property is called ''conformality''.
In turn, chordality can be checked using a [[max-cardinality ordering]] of the variables. Such an ordering can also be used, if the two conditions above are met, for finding a join tree of the problem. Ordering constraints by their highest variable according to the ordering, an algorithm for producing a join tree proceeds from the last to the first constraint; at each step, a constraint is connected to the constraint that shares a maximal number of variables with it among the constraints that precede it in the ordering.
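The chordality test can be sketched in Python as follows. The adjacency-set representation of the primal graph is an assumption; the test relies on the fact that a graph is chordal exactly when, in a max-cardinality ordering, the earlier neighbours of every vertex form a clique:

<syntaxhighlight lang="python">
# Assumed input: the primal graph as a dictionary mapping each variable to the
# set of variables adjacent to it.
primal = {"x": {"y", "z"}, "y": {"x", "z"}, "z": {"x", "y", "w"}, "w": {"z"}}

def max_cardinality_ordering(graph):
    """Repeatedly pick the vertex with the most already-ordered neighbours."""
    order, chosen = [], set()
    while len(order) < len(graph):
        v = max((u for u in graph if u not in chosen),
                key=lambda u: len(graph[u] & chosen))
        order.append(v)
        chosen.add(v)
    return order

def is_chordal(graph):
    order = max_cardinality_ordering(graph)
    position = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = [u for u in graph[v] if position[u] < position[v]]
        # the earlier neighbours of v must be pairwise adjacent
        if any(u not in graph[w] for u in earlier for w in earlier if u != w):
            return False
    return True

print(max_cardinality_ordering(primal))  # ['x', 'y', 'z', 'w']
print(is_chordal(primal))                # True
</syntaxhighlight>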
==Join-tree clustering==
Not all constraint satisfaction problems have a join tree. However, problems can be modified to acquire one. ''Join-tree clustering'' is a specific method, based on merging constraints, to make problems acquire a join tree. Merging constraints typically increases the size of the problem; however, solving the resulting problem is easy, as it is for all problems that have a join tree.
Since a join tree exists for a problem if and only if the primal graph of the problem is [[Chordal graph|chordal]] and conformal with the problem, the existence of a join tree can be obtained by modifying the problem so that it satisfies these two conditions. This is what join-tree clustering does: chordality is enforced by adding new edges to the primal graph, and conformality is obtained by replacing all constraints with new constraints.
In particular, chordality is enforced by adding some "fake" binary constraints to the problem. These are binary constraints satisfied by every pair of values, and are used only to add edges to the primal graph of the problem. More precisely, chordality is obtained by adding the edges that produce the [[Ordered graph|induced graph]] of the primal graph according to an arbitrary ordering. Indeed, the induced graph is always chordal and is obtained by adding edges to the original graph.
Conformality requires that the maximal cliques of the primal graph are exactly the scopes of the constraints. While the scope of every original constraint is a clique of the primal graph, this clique is not necessarily maximal. Moreover, even if it initially was maximal, enforcing chordality may create a larger clique.
Conformality is enforced by replacing all initial constraints with new ones. In particular, for every maximal clique of the graph resulting from enforcing chordality, a new constraint with the clique as scope is created. The satisfying values of this new constraint are the ones satisfying all original constraints whose scope is contained in the clique.
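A minimal sketch of this replacement for a single maximal clique follows; the representation of domains and constraints is a hypothetical one chosen only for illustration:

<syntaxhighlight lang="python">
from itertools import product

# Hypothetical data: variable domains and original constraints given as
# (scope, relation) pairs.
domains = {"x": {1, 2}, "y": {1, 2}, "z": {1, 2}}
constraints = [(("x", "y"), {(1, 2), (2, 1)}),
               (("y", "z"), {(1, 1), (2, 1)})]

def clique_constraint(clique, domains, constraints):
    """New constraint whose scope is the clique: its satisfying tuples are the
    assignments satisfying every original constraint with scope in the clique."""
    scope = tuple(sorted(clique))
    relevant = [(s, rel) for s, rel in constraints if set(s) <= set(scope)]
    satisfying = set()
    for values in product(*(domains[v] for v in scope)):
        assignment = dict(zip(scope, values))
        if all(tuple(assignment[v] for v in s) in rel for s, rel in relevant):
            satisfying.add(values)
    return scope, satisfying

print(clique_constraint({"x", "y", "z"}, domains, constraints))
# (('x', 'y', 'z'), {(1, 2, 1), (2, 1, 1)})
</syntaxhighlight>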
By this transformation, every original constraint is "included" in at least one new constraint. Indeed, the scope of every original constraint is a clique of the primal graph. This clique remains a clique even after enforcing chordality, as this process only introduces new edges. As a result, this clique either is maximal or is contained in a maximal clique.
This translation requires the maximal cliques of a chordal graph to be identified. However, this can be done easily using the same ordering used for enforcing chordality. Since the parents of each node are all connected to each other, the maximal cliques are composed of a node (the maximal node of the clique in a max-cardinality ordering) and all of its parents. As a result, these maximal cliques can be detected by considering each node together with its parents.
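This detection can be sketched as follows; the chordal graph and its max-cardinality ordering are assumed to be given, each node together with its earlier neighbours is a clique, and the maximal cliques are the candidates not contained in another candidate:

<syntaxhighlight lang="python">
def maximal_cliques(chordal_graph, order):
    position = {v: i for i, v in enumerate(order)}
    # each node together with its parents (earlier neighbours) forms a clique
    candidates = [frozenset({v} | {u for u in chordal_graph[v]
                                   if position[u] < position[v]})
                  for v in order]
    # keep only the candidates that are not strictly contained in another one
    return [c for c in candidates if not any(c < other for other in candidates)]

graph = {"x": {"y", "z"}, "y": {"x", "z"}, "z": {"x", "y", "w"}, "w": {"z"}}
print(maximal_cliques(graph, ["x", "y", "z", "w"]))
# the two maximal cliques {x, y, z} and {z, w}
</syntaxhighlight>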
The problem that results from this process has a join tree. A join tree can be obtained by using the same ordering of variables again. Proceeding from the last node to the first, every constraint is connected to the preceding constraint that shares the most variables with it.
==Tree decomposition==
A ''tree decomposition'' of a constraint satisfaction problem is a translation of it into a binary problem whose primal graph (the graph in which edges represent constraints) is a tree. Tree decomposition can be seen as an extension of the translation from a problem to its dual. In a dual problem, every variable represents a constraint of the original problem. This fact can be reformulated in terms of variables: each variable of the dual problem represents a set of variables of the original problem that is the scope of a constraint. Tree decomposition extends this idea by allowing sets of variables that are not the scope of any constraint.
A tree decomposition of a problem is based on a cover of the set of the original variables. If <math>X</math> is the set of the original variables, one of its covers <math>G=\{X_1,\ldots,X_m\}</math> is needed. Each <math>X_i</math> will be a variable of the new problem. The following are required for the new problem to represent the original one:
* <math>G</math> is a cover of <math>X</math>, that is, <math>\cup G=X</math>; it is not required that <math>G</math> is a partition; on the contrary, most partitions do not satisfy the conditions below;
* every original constraint is "represented" somewhere in the new problem; for this to be possible, the scope of every constraint must be contained in at least one element of <math>G</math>;
* equality of the original variables is enforced; in other words, two elements of <math>G</math> sharing a variable must be connected by a path in the new problem, and this path enforces equality of that variable; equivalently, all elements in this path contain the shared variable.
The above are sufficient conditions for building a new problem representing the original one. A tree decomposition is a translation building a problem whose graph is a tree.
Formally, a tree decomposition is a translation in which every element of a cover <math>G</math> of the variables is assigned a node of a tree, every constraint is assigned at least one node of that tree, and, if <math>X_i</math> and <math>X_j</math> share a variable, the path between them is composed of nodes that all contain that variable.
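These conditions can be checked mechanically; the following sketch uses a hypothetical cover, tree and set of constraint scopes chosen only for illustration:

<syntaxhighlight lang="python">
# Hypothetical instance: a cover of the variables, the edges of the tree over
# the elements of the cover, and the scopes of the original constraints.
cover = {1: {"x", "y"}, 2: {"y", "z"}, 3: {"z", "w"}}
tree_edges = {(1, 2), (2, 3)}
scopes = [{"x", "y"}, {"y", "z"}, {"z", "w"}]
variables = {"x", "y", "z", "w"}

def connected(nodes, edges):
    """Check that the given tree nodes induce a connected subtree."""
    nodes = set(nodes)
    if not nodes:
        return True
    reached, frontier = set(), [next(iter(nodes))]
    while frontier:
        n = frontier.pop()
        reached.add(n)
        frontier += [b for a, b in edges if a == n and b in nodes - reached]
        frontier += [a for a, b in edges if b == n and a in nodes - reached]
    return reached == nodes

# 1. the sets of the cover contain every original variable
assert set().union(*cover.values()) == variables
# 2. the scope of every constraint is contained in some element of the cover
assert all(any(s <= c for c in cover.values()) for s in scopes)
# 3. the nodes containing any given variable form a connected subtree
assert all(connected({i for i, c in cover.items() if v in c}, tree_edges)
           for v in variables)
print("valid tree decomposition")
</syntaxhighlight>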
The main feature of tree-decomposing a problem is that efficient algorithms exist for solving problems whose graph is a tree. However, the problem obtained by tree decomposition has a size that depends on the sizes of the sets <math>X_i</math>. Indeed, the ___domain of <math>X_i</math> in the new problem is the set of all tuples satisfying all constraints associated with <math>X_i</math>.
The maximal number of variables in an element of the cover used by a tree decomposition is a measure of the efficiency of solving the translated problem. The minimum of this measure, conventionally decreased by one, over all possible tree decompositions of a problem is a parameter called the ''treewidth'' of that problem.
===Solving from a tree decomposition===
A constraint satisfaction problem can be solved using one of its tree decompositions. An algorithm doing this proceeds by creating constraints that are passed along the edges of the tree, from the leaves to the root and back.
These constraints represent the "interface" of a node towards another node. The rationale is that two nodes associated with variables <math>X_i</math> and <math>X_j</math> represent two constraints sharing the variables <math>X_i \cap X_j</math>. The set of tuples over <math>X_i \cap X_j</math> that can be extended to <math>X_i</math> in such a way that the first constraint is satisfied can be seen as a "summary" of how the constraint affects the shared variables.
The algorithm proceeds from the leaves of the tree. At each node, the summaries of its children (if any) are collected. These summaries and the constraint of the node are used to generate the summary of the node for its parent. When the root is reached, the process is inverted: the summary of each node for each of its children is generated and sent. When all leaves are reached, the algorithm stops.
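The following Python sketch illustrates the two passes on a small, hypothetical tree decomposition (node names, scopes and relations are assumptions made for illustration). The summary sent across an edge is the projection of the sender's current relation onto the shared variables, and the receiver keeps only the tuples consistent with it:

<syntaxhighlight lang="python">
# Hypothetical tree decomposition: each node stores its scope (a tuple of
# variables) and a relation over that scope; node 1 is taken as the root.
nodes = {
    1: [("x", "y"), {(1, 2), (2, 2), (2, 3)}],
    2: [("y", "z"), {(2, 1), (5, 5)}],
    3: [("y", "w"), {(2, 7), (3, 7)}],
}
children = {1: [2, 3], 2: [], 3: []}

def shared(a, b):
    return [v for v in nodes[a][0] if v in nodes[b][0]]

def summary(sender, receiver):
    """Projection of the sender's relation onto the variables shared with the receiver."""
    scope, relation = nodes[sender]
    indices = [scope.index(v) for v in shared(sender, receiver)]
    return {tuple(t[i] for i in indices) for t in relation}

def receive(receiver, sender, message):
    """Keep only the receiver's tuples that are consistent with the summary."""
    scope, relation = nodes[receiver]
    indices = [scope.index(v) for v in shared(receiver, sender)]
    nodes[receiver][1] = {t for t in relation if tuple(t[i] for i in indices) in message}

def up(n):      # pass summaries from the leaves towards the root
    for c in children[n]:
        up(c)
        receive(n, c, summary(c, n))

def down(n):    # pass summaries back from the root towards the leaves
    for c in children[n]:
        receive(c, n, summary(n, c))
        down(c)

up(1)
down(1)
print(nodes)  # node 1: {(1, 2), (2, 2)}, node 2: {(2, 1)}, node 3: {(2, 7)}
</syntaxhighlight>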
===Reformulation===
Join-tree clustering is a particular form of tree decomposition in which:
* the elements of the cover are the cliques of the graph resulting from enforcing chordality;
* the tree is the join tree;
* every constraint is assigned to all nodes of the tree whose sets of variables contain the scope of the constraint.
==Extensions==
[[Bucket elimination]] can also be reformulated as an algorithm working on a particular tree decomposition.
[[Decomposition method (constraint satisfaction)|Decomposition method]]s generalize join-tree clustering by grouping variables in such a way that the resulting problem has a join tree. Decomposition methods directly associate a tree with problems; the nodes of this tree are associated with variables and/or constraints of the original problem. By merging constraints based on this tree, one can produce a problem that has a join tree, and this join tree can be easily derived from the decomposition tree. Alternatively, one can build a binary acyclic problem directly from the decomposition tree.
==See also==
*[[Tree decomposition]] and treewidth of a graph
*[[Hidden transformation]]
==References==
*{{cite book
| first=Rina
| last=Dechter
| title=Constraint Processing
| publisher=Morgan Kaufmann
| year=2003
| url=http://www.ics.uci.edu/~dechter/books/index.html
| isbn=978-1-55860-890-0
}}
*{{cite book
| author1=R. Downey
| author2=M. Fellows
| title=Parameterized complexity
| publisher=Springer
| url=https://www.springer.com/sgw/cda/frontpage/0,11855,5-0-22-1519914-0,00.html?referer=www.springer.de%2Fcgi-bin%2Fsearch_book.pl%3Fisbn%3D0-387-94883-X
| isbn=978-0-387-94883-6
}}
*{{cite conference
|author1=Georg Gottlob |author2=Nicola Leone |author3=Francesco Scarcello | title=Hypertree Decompositions: A Survey
| book-title=MFCS 2001
| pages=37–57
| year=2001
| url=http://www.springerlink.com/(rqc54x55rqwetq55eco03ymp)/app/home/contribution.asp?referrer=parent&backto=issue,5,61;journal,1765,3346;linkingpublicationresults,1:105633,1
}}{{dead link|date=January 2025|bot=medic}}{{cbignore|bot=medic}}
[[Category:Constraint programming|satisfaction dual problem]]