Constraint satisfaction dual problem

The '''dual problem''' is a reformulation of a [[constraint satisfaction problem]] expressing each constraint of the original problem as a variable. Dual problems only contain [[binary constraint]]s, and are therefore solvable by [[algorithm]]s tailored for such problems. The '''join graphs''' and '''join trees''' of a constraint satisfaction problem are [[Graph (graph theory)|graph]]s representing its dual problem or a problem obtained from the dual problem by removing some redundant constraints. A '''tree decomposition''' of a constraint satisfaction problem is a translation of a kind that generates problems whose representing graphs are trees.
 
==The dual problem==
 
The dual problem of a constraint satisfaction problem contains a variable for each constraint of the original problem. Its domains and constraints are built so as to enforce a sort of equivalence to the original problem. In particular, the ___domain of a variable of the dual problem contains one element for each [[tuple]] satisfying the corresponding original constraint. This way, a dual variable can take a value if and only if the corresponding original constraint is satisfied by the corresponding tuple.
 
The constraints of the dual problem forbid two dual variables to take values that correspond to two incompatible tuples. Without these constraints, one dual variable may take the value corresponding to the tuple <math>x=1,y=2</math> while another dual variable takes the value corresponding to <math>y=3,z=1</math>, which assigns a different value to <math>y</math>.
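To make the construction concrete, here is a small Python sketch; the scope/allowed-tuple representation and all names are hypothetical, not from any standard library:

```python
from itertools import combinations

# A toy CSP: each constraint is a (scope, allowed tuples) pair.
# Hypothetical representation; x, y, z range over small integers.
constraints = {
    "c1": (("x", "y"), [(1, 2), (2, 3)]),
    "c2": (("y", "z"), [(2, 1), (3, 1)]),
}

# Dual variable domains: one value per satisfying tuple of each constraint.
dual_domains = {name: tuples for name, (scope, tuples) in constraints.items()}

# Two dual values are compatible iff they agree on every shared variable.
def compatible(c1, t1, c2, t2):
    s1, s2 = constraints[c1][0], constraints[c2][0]
    return all(t1[s1.index(v)] == t2[s2.index(v)] for v in set(s1) & set(s2))

# Binary dual constraints: the allowed pairs of dual values, for each
# pair of constraints whose scopes share at least one original variable.
dual_constraints = {
    (a, b): [(t1, t2)
             for t1 in dual_domains[a]
             for t2 in dual_domains[b]
             if compatible(a, t1, b, t2)]
    for a, b in combinations(constraints, 2)
    if set(constraints[a][0]) & set(constraints[b][0])
}
```

As in the example above, the pair of dual values corresponding to <math>x=1,y=2</math> and <math>y=3,z=1</math> is excluded, since the two tuples disagree on <math>y</math>.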
In the dual problem, all constraints are binary. They all enforce two values, which are tuples, to agree on one or more original variables.
 
The ''dual graph'' is a representation of how variables are constrained in the dual problem. More precisely, the [[dual graph]] contains a node for each dual variable and an edge for every constraint between them. In addition, the edge between two variables is labeled by the original variables that are enforced equal between these two dual variables.
 
The dual graph can be built directly from the original problem: it contains a vertex for each constraint, and an edge between every two constraints sharing variables; such an edge is labeled by these shared variables.
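Under the same hypothetical representation of scopes as Python sets, this direct construction can be sketched as a dictionary mapping edges to their labels:

```python
from itertools import combinations

# Hypothetical scopes of four constraints of the original problem.
scopes = {
    "c1": {"x", "y"},
    "c2": {"y", "z"},
    "c3": {"z", "w"},
    "c4": {"x", "w"},
}

# Dual graph: one vertex per constraint; an edge between every two
# constraints sharing variables, labeled by those shared variables.
dual_graph = {
    frozenset({a, b}): scopes[a] & scopes[b]
    for a, b in combinations(scopes, 2)
    if scopes[a] & scopes[b]
}
```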
A graph obtained from the dual graph by removing some redundant edges is called a ''join graph''. If it is a tree, it is called a ''join tree''. The dual problem can be solved from a join graph since all removed edges are redundant. In turn, the problem can be solved efficiently if that join graph is a tree, using algorithms tailored for acyclic constraint satisfaction problems.
 
Finding a join tree, if any, can be done exploiting the following property: if a dual graph has a join tree, then the maximal-weight [[spanning tree (mathematics)|spanning tree]]s of the graph are all join trees, if edges are weighted by the number of variables the corresponding constraints enforce to be equal. An algorithm for finding a join tree, if any, proceeds as follows. In the first step, edges are assigned weights: if two nodes represent constraints that share <math>n</math> variables, the edge joining them is assigned weight <math>n</math>. In the second step, a maximal-weight spanning tree is searched for. Once one is found, it is checked whether it enforces the required equality of variables. If this is the case, this spanning tree is a join tree.
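The algorithm can be sketched as follows; this is a hypothetical implementation using Kruskal's algorithm for the maximal-weight spanning tree, and the final check verifies that, for every original variable, the nodes whose scopes contain it induce a connected subtree (which is exactly the required equality enforcement):

```python
from itertools import combinations

def join_tree(scopes):
    """Return the edges of a join tree for the given constraint scopes
    (a dict mapping constraint names to sets of variables), or None if
    no join tree exists.  Hypothetical sketch, not library code."""
    nodes = list(scopes)
    # Step 1: weight each dual-graph edge by the number of shared variables.
    edges = sorted(
        ((len(scopes[a] & scopes[b]), a, b)
         for a, b in combinations(nodes, 2) if scopes[a] & scopes[b]),
        reverse=True)

    # Step 2: Kruskal's algorithm on descending weights yields a
    # maximal-weight spanning tree of the dual graph.
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    tree = []
    for _, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            tree.append((a, b))

    # Step 3: the spanning tree is a join tree iff, for every variable,
    # the nodes whose scopes contain it induce a connected subtree.
    for var in set().union(*scopes.values()):
        holders = {n for n in nodes if var in scopes[n]}
        comp = {n: n for n in holders}
        def cfind(n):
            while comp[n] != n:
                n = comp[n]
            return n
        for a, b in tree:
            if a in holders and b in holders:
                comp[cfind(a)] = cfind(b)
        if len({cfind(n) for n in holders}) > 1:
            return None
    return tree
```

For example, three constraints with scopes {x,y}, {y,z}, {z,x} admit no join tree (any spanning tree of their dual graph drops one of the three shared variables), and the sketch returns None on them.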
 
Another method for finding out whether a constraint satisfaction problem has a join tree uses the primal graph of the problem, rather than the dual graph. The ''primal graph'' of a constraint satisfaction problem is a graph whose nodes are problem variables and whose edges represent the presence of two variables in the same constraint. A join tree for the problem exists if:
 
# the primal graph is [[Chordal graph|chordal]];
# the variables of every [[maximal clique]] of the primal graph are the scope of a constraint and vice versa; this property is called ''conformality''.
 
In turn, chordality can be checked using a [[max-cardinality ordering]] of the variables. Such an ordering can also be used, if the two conditions above are met, for finding a join tree of the problem. Ordering constraints by their highest variable according to the ordering, an algorithm for producing a join tree proceeds from the last to the first constraint; at each step, a constraint is connected to the constraint that shares a maximal number of variables with it among the constraints that precede it in the ordering.
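A sketch of the max-cardinality ordering and the associated chordality check (the Tarjan–Yannakakis criterion: with a max-cardinality ordering, the graph is chordal iff, for every node, its earlier neighbors minus the latest of them are all adjacent to that latest one). Function names are hypothetical; the graph is an adjacency dictionary of sets:

```python
def mcs_order(adj):
    """Max-cardinality ordering: repeatedly number the vertex with the
    largest number of already-numbered neighbors."""
    order, numbered = [], set()
    while len(order) < len(adj):
        v = max((u for u in adj if u not in numbered),
                key=lambda u: len(adj[u] & numbered))
        order.append(v)
        numbered.add(v)
    return order

def is_chordal(adj):
    """Tarjan-Yannakakis test on the max-cardinality ordering."""
    order = mcs_order(adj)
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        if earlier:
            u = max(earlier, key=pos.get)      # latest earlier neighbor
            if not (earlier - {u}) <= adj[u]:
                return False
    return True
```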
 
==Join-tree clustering==
 
Not all constraint satisfaction problems have a join tree. However, problems can be modified to acquire one. ''[[Join-tree clustering]]'' is a specific method to modify problems in such a way that they acquire a join tree. This is done by merging constraints, which typically increases the size of the problem; however, solving the resulting problem is easy, as it is for all problems that have a join tree.
 
[[Decomposition method (constraint satisfaction)|Decomposition method]]s generalize join-tree clustering by grouping variables in such a way the resulting problem has a join tree. Decomposition methods directly associate a tree with problems; the nodes of this tree are associated variables and/or constraints of the original problem. By merging constraints based on this tree, one can produce a problem that has a join tree, and this join tree can be easily derived from the decomposition tree. Alternatively, one can build a binary acyclic problem directly from the decomposition tree.
Since a join tree exists for a problem if and only if the primal graph of the problem is [[Chordal graph|chordal]] and conformal with the problem, the existence of a join tree can be obtained by modifying the problem in such a way that it satisfies these two conditions. This is what join-tree clustering does. Chordality is enforced by adding new edges to the primal graph. Conformality is obtained by replacing all constraints with new constraints.
 
In particular, chordality is enforced by adding some "fake" binary constraints to the problem. These are binary constraints satisfied by any pair of values, and are used only to add edges to the primal graph of the problem. Specifically, chordality is obtained by adding the edges that produce the [[Ordered graph|induced graph]] of the primal graph, using an arbitrary ordering. Indeed, the induced graph is always chordal and is obtained by adding edges to the original graph.
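A minimal sketch of computing the induced graph, assuming the graph is given as an adjacency dictionary and the ordering as a list (the function name is hypothetical):

```python
def induce(adj, order):
    """Induced graph: process nodes from last to first in the given
    ordering, connecting each node's earlier neighbors ("parents").
    The result contains the original edges and is chordal."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    pos = {v: i for i, v in enumerate(order)}
    for v in reversed(order):
        parents = [u for u in adj[v] if pos[u] < pos[v]]
        # Connect every pair of parents (edges added so far count too,
        # since those parents are processed later in this loop).
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                adj[parents[i]].add(parents[j])
                adj[parents[j]].add(parents[i])
    return adj
```

On a cycle of four nodes, for example, this adds the single chord between the two non-adjacent parents of the last node, making the graph chordal.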
 
Conformality requires that the maximal cliques of the primal graph are exactly the scopes of the constraints. While the scope of every original constraint is a clique of the primal graph, this clique is not necessarily maximal. Moreover, even if it initially was maximal, enforcing chordality may create a larger clique.
 
Conformality is enforced by replacing all initial constraints with new ones. In particular, for every maximal clique of the graph resulting from enforcing chordality, a new constraint with the clique as scope is created. The satisfying values of this new constraint are the ones satisfying all original constraints whose scope is contained in the clique.
 
{| cellpadding=20 style="border: thin gray solid;"
|-
| [[Image:Join-tree-clustering-1.svg]]
| [[Image:Join-tree-clustering-2.svg]]
| [[Image:Join-tree-clustering-3.svg]]
|-
| An example: a binary constraint satisfaction problem (join-tree clustering can also be applied to non-binary constraints). This graph is not chordal (x3, x4, x5, x6 form a cycle of four nodes having no chord).
| The graph is made chordal. The algorithm analyzes the nodes from x6 to x1. The red edge is the only edge added because x6 is the only node having two non-joined parents. It represents a constraint between x4 and x5 that is satisfied by every pair of values.
| The maximal cliques of the resulting graph are identified. Join-tree clustering replaces the constraints on {x3, x4, x5, x6} with two equivalent constraints, one on {x3, x4, x5} and one on {x4, x5, x6}.
|}
 
By this transformation, every original constraint is "included" in at least one new constraint. Indeed, the scope of every original constraint is a clique of the primal graph. This clique remains a clique even after enforcing chordality, as this process only introduces new edges. As a result, this clique either is maximal or is contained in a maximal clique.
 
This translation requires the maximal cliques of a chordal graph to be identified. However, this can be done easily using the same ordering used for enforcing chordality. Since the parents of each node are all connected to each other, the maximal cliques are composed of a node (the maximal node of the clique in a max-cardinality ordering) and all its parents. As a result, these maximal cliques can be detected by considering each node with its parents.
 
The problem that results from this process has a join tree. A join tree can be obtained by using the same ordering of variables again. Proceeding from the last node to the first, every constraint is connected with the preceding constraint that shares the most variables with it.
 
==Tree decomposition==
 
A ''tree decomposition'' of a constraint satisfaction problem is a translation of it into a binary problem whose primal graph (the graph in which edges represent constraints) is a tree. Tree decomposition can be seen as an extension of the translation from a problem to its dual. In a dual problem, every variable represents a constraint of the original problem. This fact can be reformulated in terms of variables: each variable of the dual problem represents a set of variables of the original problem that is the scope of a constraint. Tree decomposition extends this idea by allowing sets of variables that are not the scope of any constraint.
 
A tree decomposition of a problem is based on a cover of the set of the original variables. If <math>X</math> is the set of the original variables, and <math>G=\{X_1,\ldots,X_m\}</math> is one of its covers, each <math>X_i</math> will be a variable of the new problem. Since this is not necessarily a partition, a variable of the original problem may occur in more than one <math>X_i</math>. A new problem on these variables can represent the old one if:
 
* <math>G</math> is a cover of <math>X</math>, that is, <math>\cup G=X</math>; it is not required that <math>G</math> is a partition; on the contrary, most partitions do not satisfy the conditions below;
* every original constraint can be "enforced" somewhere in the new problem; for this to be possible, the scope of every constraint must be contained in at least one element of <math>G</math>;
* all "copies" of the original variables are constrained to be equal; in other words, two elements of <math>G</math> sharing a variable must be connected by a path in the new problem, and this path enforces equality of that variable; equivalently, all elements in this path contain the shared variable.
 
The above are sufficient conditions for building a problem, similar to the dual problem, that represents the original one. This problem has <math>G</math> as variables, and all constraints are binary and enforce equality of the original variables. If the primal graph of this problem is a tree, this translation is called a ''tree decomposition''.
 
Formally, a tree decomposition is a translation in which every element of a cover <math>G</math> of variables is assigned a node of a tree, every constraint is assigned at least one node of that tree, and if <math>X_i</math> and <math>X_j</math> share a variable, the path between them is composed of nodes that all contain that variable. The latter condition is necessary to guarantee equality on all copies of the original variable.
 
The main feature of tree-decomposing a problem is that efficient algorithms for solving problems whose graph is a tree exist. However, the problem obtained by tree decomposition has size that depends on the size of the sets <math>X_i</math>. Indeed, the ___domain of <math>X_i</math> in the new problem is the set of all tuples satisfying all constraints associated to <math>X_i</math>.
 
The maximal number of variables of the cover used by a tree decomposition is a measure of the efficiency of solving the translated problem. The minimal such measure over all possible tree decompositions of a problem is a parameter called the ''treewidth'' of that problem.
 
===Solving from a tree decomposition===
 
A constraint satisfaction problem can be solved using one of its tree decompositions. An algorithm doing this proceeds by creating constraints that are passed along the edges of the tree, from the leaves to the root and back.
 
[[Image:Passing-constraint.svg|thumb|220px|The constraint passed from node i to node j summarizes the effects of the nodes on the "side" of i to the variables of j.]]
In a tree, every edge breaks the graph in two parts. The constraint passed along an edge tells how the part of the originating end of the edge affects the variables of the destination node. In other words, a constraint passed from node <math>i</math> to node <math>j</math> tells how the nodes "on the side of <math>i</math>" constrain the variables of node <math>j</math>.
 
If the variables of these two nodes are <math>X_i</math> and <math>X_j</math>, the nodes on the side of <math>i</math> do not affect all variables <math>X_j</math> but only the shared variables <math>X_i \cap X_j</math>. As a result, the influence on <math>X_j</math> of the nodes on the side of <math>i</math> can be represented as a constraint on the variables <math>X_i \cap X_j</math>. Such a constraint can be seen as a "summary" of how a set of nodes affects another node.
 
The algorithm proceeds from the leaves of the tree. In each node, the summaries of its children (if any) are collected. These summaries and the constraints of the node are used to generate the summary of the node for its parent. When the root is reached, the process is inverted: the summary of each node for each child is generated and sent. When all leaves are reached, the algorithm stops.
 
{| cellpadding=10 style="border: gray thin solid;"
|-
| [[Image:Solving-tree-decomposition-1.svg]]
| [[Image:Solving-tree-decomposition-2.svg]]
| [[Image:Solving-tree-decomposition-3.svg]]
| [[Image:Solving-tree-decomposition-4.svg]]
|-
| A tree decomposition with associated constraints. All variables have ___domain {0,..,10} in this example.
| The leftmost node contains the constraint a&lt;b and shares the variable b with its parent. This constraint allows b to take only values greater than or equal to 1. The constraint b&gt;0 is sent to its parent.
| The left child of the root receives the constraint b>0 and combines it with its constraint b<c. It shares c with its parent, and the two constraints enforce c>1. This constraint is sent to its parent.
| When the root has received constraints for all its children, it combines them and sends constraints back to them. The process ends when all leaves are reached. At this point, the allowed values of variables are explicit.
|}
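The upward (leaves-to-root) half of this process can be sketched on a chain-shaped decomposition similar to the one in the figure; the node names, predicates, and chain structure below are hypothetical:

```python
from itertools import product

# Hypothetical chain-shaped tree decomposition: each node has a variable
# tuple and a local constraint (a predicate over those variables).
# Domains are {0,...,10}, as in the example above.
nodes = {
    "n1": (("a", "b"), lambda a, b: a < b),   # leaf
    "n2": (("b", "c"), lambda b, c: b < c),
    "n3": (("c", "d"), lambda c, d: c < d),   # root
}
tree = {"n1": "n2", "n2": "n3"}               # child -> parent
___domain = range(11)

def relation(node, msgs_in=()):
    """Tuples over the node's variables satisfying its own constraint
    and every received summary (a set of allowed sub-tuples)."""
    vars_, pred = nodes[node]
    tuples = {t for t in product(___domain, repeat=len(vars_)) if pred(*t)}
    for shared, allowed in msgs_in:
        idx = [vars_.index(v) for v in shared]
        tuples = {t for t in tuples if tuple(t[i] for i in idx) in allowed}
    return tuples

# Upward pass: each summary is the projection of the sender's relation
# onto the variables shared with its parent.
msgs = {}
for child in ("n1", "n2"):                    # leaves-to-root order
    parent = tree[child]
    cvars = nodes[child][0]
    shared = tuple(v for v in cvars if v in nodes[parent][0])
    rel = relation(child, [msgs[child]] if child in msgs else [])
    msgs[parent] = (shared,
                    {tuple(t[cvars.index(v)] for v in shared) for t in rel})
```

At the root, combining the received summary with the local constraint leaves only tuples consistent with the whole chain (here c must be at least 2 and d at least 3); the downward pass would then send summaries back toward the leaves.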
 
The set of variables shared between two nodes is called their ''separator''. Since the separator is the intersection between two sets associated with nodes, its size cannot be larger than the induced width of the graph.
 
While the width of the graph affects the time required for solving the subproblems in each node, the size of the separator affects the size of the constraints that are passed between nodes. Indeed, these constraints have the separators as scope. As a result, a constraint over a separator of size <math>n</math> may require size <math>d^n</math> to be stored, if all variables have ___domain of size <math>d</math>.
 
 
 
===Separable components===
 
In the special case of maximal separator size 1, the tree decomposition of a problem corresponds to breaking the graph of the problem into its separable components. A [[separation node]] is a node whose removal from the graph disconnects two nodes that were otherwise connected by some path. In other words, a separation node is a node crossed by ''all'' paths between a given pair of nodes.
 
A graph with no separation node is called nonseparable. There exists an algorithm, linear in the number of edges, that identifies the components of a graph that are nonseparable. Two such components share at most one node. Furthermore, viewing each component as a node and the presence of a shared variable as an edge, the nonseparable components of a graph form a tree. This tree allows for a tree decomposition where all separators have size 1. The algorithm passing constraints between nodes therefore only takes linear space, as all passed constraints have only one variable.
 
===Reformulation===
 
[[Bucket elimination]] can be reformulated as an algorithm working on a particular tree decomposition. In particular, given an ordering of the variables, every variable is associated with a bucket containing all constraints such that the variable is the greatest in their scope. Bucket elimination corresponds to the tree decomposition that has a node for each bucket. This node is associated with all of its constraints, and corresponds to the set of all variables of these constraints. The parent of the node associated to the bucket of <math>x_i</math> is the node associated to the bucket of <math>x_j</math>, where <math>x_j</math> is the greatest variable that is in a constraint with <math>x_i</math> and precedes <math>x_i</math> in the ordering.
 
==See also==
 
*[[Tree decomposition]] and treewidth of a graph
*[[Decomposition method]]
*[[Hidden transformation]]
 
==References==
*{{cite book
| first=Rina
| last=Dechter
| title=Constraint Processing
| publisher=Morgan Kaufmann
| year=2003
| url=http://www.ics.uci.edu/~dechter/books/index.html
}} {{ISBN|978-1-55860-890-0}}
*{{cite book
| first=Rod
| last=Downey
| author2=M. Fellows
| title=Parameterized complexity
| publisher=Springer
| year=1997
| url=https://www.springer.com/sgw/cda/frontpage/0,11855,5-0-22-1519914-0,00.html?referer=www.springer.de%2Fcgi-bin%2Fsearch_book.pl%3Fisbn%3D0-387-94883-X
}} {{ISBN|978-0-387-94883-6}}
*{{cite conference
| author1=Georg Gottlob |author2=Nicola Leone |author3=Francesco Scarcello
| title=Hypertree Decompositions: A Survey
| book-title=MFCS 2001
| pages=37–57
| year=2001
| url=http://www.springerlink.com/(rqc54x55rqwetq55eco03ymp)/app/home/contribution.asp?referrer=parent&backto=issue,5,61;journal,1765,3346;linkingpublicationresults,1:105633,1
}}{{dead link|date=January 2025|bot=medic}}{{cbignore|bot=medic}}
 
 
[[Category:Constraint programming|satisfaction dual problem]]