Constraint learning

In [[constraint satisfaction problem|constraint satisfaction]] [[backtracking]] [[algorithm]]s, '''constraint learning''' is a technique for improving efficiency. It works by recording new constraints whenever an inconsistency is found. This new constraint may reduce the [[Candidate solution|search space]], as future partial evaluations may be found inconsistent without further search. '''Clause learning''' is the name of this technique when applied to [[propositional satisfiability]].
 
==Definition==
 
Backtracking algorithms work by choosing an unassigned variable and [[recursively]] solving the problems obtained by assigning a value to this variable. Whenever the current partial solution is found inconsistent, the algorithm goes back to the previously assigned variable, as expected by recursion. A constraint learning algorithm differs because it tries to record some information, before backtracking, in the form of a new constraint. This can reduce further search, because the subsequent search may encounter another partial solution that is inconsistent with this new constraint. If the algorithm has learned the new constraint, it will backtrack from this solution, while the original backtracking algorithm would continue searching.
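
The following minimal Python sketch illustrates where the learning step fits into a backtracking search. It is only an illustration: the problem representation (constraints as <code>(scope, predicate)</code> pairs, learned constraints as "nogoods", i.e. sets of variable–value pairs) and all function names are assumptions of the sketch, not a standard interface.

<syntaxhighlight lang="python">
# Illustrative sketch of backtracking with constraint learning.
# A "nogood" is a set of (variable, value) pairs that cannot all hold together.

def consistent(assignment, constraints, nogoods):
    """Check a partial assignment against the original and learned constraints."""
    for scope, predicate in constraints:
        if all(v in assignment for v in scope):
            if not predicate(*(assignment[v] for v in scope)):
                return False
    # A learned nogood is violated when all of its pairs occur in the assignment.
    return not any(ng <= set(assignment.items()) for ng in nogoods)

def inconsistent_subset(assignment):
    # Trivial placeholder: the whole partial solution.  Actual learning
    # schemes (graph-based, jumpback) extract a smaller inconsistent subset.
    return set(assignment.items())

def solve(assignment, variables, domains, constraints, nogoods):
    if len(assignment) == len(variables):
        return dict(assignment)                      # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment, constraints, nogoods):
            result = solve(assignment, variables, domains, constraints, nogoods)
            if result is not None:
                return result
        else:
            # Learning step: before undoing the assignment, record an
            # inconsistent subset of it as a new constraint.
            nogoods.append(inconsistent_subset(assignment))
        del assignment[var]
    return None
</syntaxhighlight>

For example, <code>solve({}, ["x", "y"], {"x": [0, 1], "y": [0, 1]}, [(("x", "y"), lambda a, b: a != b)], [])</code> returns an assignment in which the two variables take different values.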
 
If the partial solution <math>x_1=a_1,\ldots,x_k=a_k</math> is inconsistent, the problem instance implies the constraint stating that <math>x_i=a_i</math> cannot be true for all <math>i \in [1,k]</math> at the same time. However, recording this constraint is not useful, as this partial solution will not be encountered again due to the way backtracking proceeds.
 
On the other hand, if a subset of this evaluation is inconsistent, the corresponding constraint may be useful in the subsequent search, as the same subset of the partial evaluation may occur again in the search. For example, the algorithm may encounter an evaluation extending the subset <math>x_2=a_2, x_5=a_5, x_{k-1}=a_{k-1}</math> of the previous partial evaluation. If this subset is inconsistent and the algorithm has stored this fact in the form of a constraint, no further search is needed to conclude that the new partial evaluation cannot be extended to form a solution.
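
Under the illustrative nogood representation sketched above (variable names hypothetical), this pruning is a simple containment test:

<syntaxhighlight lang="python">
# A learned nogood: the subset x2=0, x5=1, x8=0 was found inconsistent.
nogood = {("x2", 0), ("x5", 1), ("x8", 0)}

# A later partial evaluation extending this subset is rejected at once,
# with no search below it.
partial = {"x1": 1, "x2": 0, "x5": 1, "x7": 0, "x8": 0}
assert nogood <= set(partial.items())  # nogood contained: prune this branch
</syntaxhighlight>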
 
 
==Efficiency of constraint learning==
The efficiency gain of constraint learning is balanced between two factors. On one hand, the more often a recorded constraint is violated, the more often backtracking avoids useless searching. Small inconsistent subsets of the current partial solution are usually better than large ones, as they correspond to constraints that are easier to violate. On the other hand, finding a small inconsistent subset of the current partial evaluation may require time, and the benefit may not be balanced by the subsequent reduction of the search time.
 
Size is however not the only feature of learned constraints to take into account. Indeed, a small constraint may be useless in a particular state of the search space because the values that violate it will not be encountered again. A larger constraint whose violating values are more similar to the current partial assignment may be preferred in such cases.
 
Various constraint learning techniques exist, differing in strictness of recorded constraints and cost of finding them.
 
==Graph-based learning==
If the algorithm proves all values of <math>x_{k+1}</math> inconsistent with the partial evaluation <math>x_1=a_1,\ldots,x_k=a_k</math>, the cause of the inconsistency lies in constraints that involve <math>x_{k+1}</math> together with some of the already assigned variables.
As a result, an inconsistent evaluation is the restriction of the truth evaluation of <math>x_1,\ldots,x_k</math> to variables that are in a constraint with <math>x_{k+1}</math>, provided that this constraint contains no unassigned variable.
 
Learning constraints representing these partial evaluations is called graph-based learning. It uses the same rationale as [[graph-based backjumping]]. These methods are called "graph-based" because they are based on the pairs of variables that appear in the same constraint, which can be read off the graph associated with the constraint satisfaction problem.
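
A sketch of the graph-based choice of subset, under the same illustrative <code>(scope, predicate)</code> representation of constraints as above: only the assigned variables that occur in some fully assigned constraint together with the failed variable are kept.

<syntaxhighlight lang="python">
def graph_based_nogood(assignment, failed_var, constraints):
    """Restrict the current evaluation to the variables that share a
    constraint with failed_var, ignoring constraints that still contain
    unassigned variables (graph-based learning)."""
    relevant = set()
    for scope, _ in constraints:
        if failed_var in scope and all(v == failed_var or v in assignment
                                       for v in scope):
            relevant.update(v for v in scope if v != failed_var)
    return {(v, assignment[v]) for v in relevant}
</syntaxhighlight>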
 
==Jumpback learning==
 
Jumpback learning is based on storing as constraints the inconsistent assignments that would be found by [[conflict-based backjumping]]. Whenever a partial assignment is found inconsistent, this algorithm selects the violated constraint that is minimal according to an ordering based on the order of instantiation of variables. The evaluation restricted to the variables that are in this constraint is inconsistent and is usually shorter than the complete evaluation. Jumpback learning stores this fact as a new constraint.
 
The ordering on constraints is based on the order of assignment of variables. In particular, the lesser of two constraints is the one whose latest non-common variable has been instantiated first. When an inconsistent assignment is reached, jumpback learning selects the violated constraint that is minimal according to this ordering, and restricts the current assignment to its variables. The constraint expressing the inconsistency of this assignment is stored.
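
The following sketch of the ordering and of the constraint it selects assumes the instantiation order is available as a list of variables; all names are illustrative.

<syntaxhighlight lang="python">
def lesser_constraint(scope_a, scope_b, order):
    """Jumpback ordering on violated constraints: the lesser of the two is
    the one whose latest non-common variable was instantiated first."""
    pos = {v: i for i, v in enumerate(order)}  # instantiation time per variable
    common = set(scope_a) & set(scope_b)
    latest_a = max((pos[v] for v in scope_a if v not in common), default=-1)
    latest_b = max((pos[v] for v in scope_b if v not in common), default=-1)
    return scope_a if latest_a < latest_b else scope_b

def jumpback_nogood(assignment, violated_scopes, order):
    """Restrict the current assignment to the variables of the minimal
    violated constraint; this restriction is stored as the new constraint."""
    minimal = violated_scopes[0]
    for scope in violated_scopes[1:]:
        minimal = lesser_constraint(minimal, scope, order)
    return {(v, assignment[v]) for v in minimal}
</syntaxhighlight>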
 
==Constraint maintenance==
 
Constraint learning algorithms differ not only in the choice of constraint corresponding to a given inconsistent partial evaluation, but also in the choice of which constraints they retain and which ones they discard.
 
In general, learning all inconsistencies in the form of constraints and keeping them indefinitely may exhaust the available memory and increase the cost of checking consistency of partial evaluations. These problems can be solved either by storing only some learned constraints or by occasionally discarding constraints.
''Bounded learning'' only stores constraints if the inconsistent partial evaluation they represent is smaller than a given constant number. ''Relevance-bounded learning'' discards constraints (or does not store them at all) that are considered no longer relevant given the current point of the search space; in particular, it discards or does not store all constraints that represent inconsistent partial evaluations that differ from the current partial evaluation on more than a given fixed number of variables.
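
Both policies can be expressed as simple predicates on a candidate constraint; in the sketch below, a nogood is again a set of variable–value pairs, and the bounds <code>i</code> and <code>k</code> are illustrative parameter names.

<syntaxhighlight lang="python">
def keep_bounded(nogood, i):
    """Bounded learning of order i: store a nogood only if it involves
    at most i variables."""
    return len(nogood) <= i

def keep_relevant(nogood, assignment, k):
    """Relevance-bounded learning of degree k: keep a nogood only while it
    differs from the current partial evaluation on at most k variables."""
    differing = sum(1 for var, value in nogood
                    if assignment.get(var) != value)
    return differing <= k
</syntaxhighlight>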
 
==See also==
*[[Backjumping]]
 
==References==
 
*{{cite book
 |first=Rina
 |last=Dechter
 |authorlink=Rina Dechter
 |title=Constraint Processing
 |publisher=Morgan Kaufmann
 |year=2003
 |url=http://www.ics.uci.edu/~dechter/books/index.html
 |isbn=1-55860-890-7
}}
 
[[Category:Constraint programming]]