{{mi|{{Nf|date=November 2024}}{{1r|date=November 2024}}}}
In [[constraint satisfaction problem|constraint satisfaction]] [[backtracking]] [[algorithm]]s, '''constraint learning''' is a technique for improving efficiency. It works by recording new constraints whenever an inconsistency is found. Such a new constraint may reduce the search space, as future partial evaluations may be found inconsistent without further search. ''Clause learning'' is the name of this technique when applied to [[Boolean satisfiability problem|propositional satisfiability]].
==Definition==
Backtracking algorithms work by choosing an unassigned variable and recursively solving the problems obtained by assigning a value to it. Whenever the current partial solution is found inconsistent, the algorithm goes back to the previously assigned variable, as prescribed by the recursion. A constraint learning algorithm differs because, before backtracking, it tries to record some information in the form of a new constraint. This can reduce further search, because a later branch of the search may encounter another partial solution that is inconsistent with this new constraint; having learned the constraint, the algorithm can backtrack from that solution immediately, while plain backtracking would keep searching.
If the partial solution <math>x_1=a_1,\ldots,x_k=a_k</math> is inconsistent, the problem instance implies the constraint stating that <math>x_i=a_i</math> cannot hold for all <math>i \in [1,k]</math> at the same time. However, recording this constraint is not useful, as this partial solution will not be encountered again due to the way backtracking proceeds.
On the other hand, if a subset of this evaluation is inconsistent, the corresponding constraint may be useful in the subsequent search, as the same subset of the partial evaluation may occur again in the search. For example, the algorithm may encounter an evaluation extending the subset <math>x_2=a_2, x_5=a_5, x_{k-1}=a_{k-1}</math> of the previous partial evaluation. If this subset is inconsistent and the algorithm has stored this fact in the form of a constraint, no further search is needed to conclude that the new partial evaluation cannot be extended to form a solution.
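
The following is a minimal sketch, in Python, of how a backtracking solver can record and reuse such constraints ("nogoods"). The problem representation and the <code>extract_nogood</code> placeholder, which stands for one of the learning schemes described in the sections below, are illustrative assumptions, not a standard interface.

<syntaxhighlight lang="python">
def consistent(assignment, constraints):
    # Check every original constraint whose variables are all assigned.
    for scope, predicate in constraints:
        if all(v in assignment for v in scope):
            if not predicate(*(assignment[v] for v in scope)):
                return False
    return True

def violates_learned(assignment, learned):
    # A partial evaluation can be rejected as soon as it contains a stored
    # nogood (a set of variable=value pairs known to be inconsistent).
    items = set(assignment.items())
    return any(nogood <= items for nogood in learned)

def solve(variables, domains, constraints, extract_nogood,
          assignment=None, learned=None):
    assignment = {} if assignment is None else assignment
    learned = set() if learned is None else learned
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment, constraints) \
                and not violates_learned(assignment, learned):
            result = solve(variables, domains, constraints,
                           extract_nogood, assignment, learned)
            if result is not None:
                return result
        del assignment[var]
    # Dead end: record an inconsistent subset of the current evaluation
    # before backtracking (the choice of subset is the learning scheme).
    learned.add(extract_nogood(assignment, var))
    return None
</syntaxhighlight>

Passing <code>lambda a, v: frozenset(a.items())</code> as <code>extract_nogood</code> reproduces the naive scheme discussed above, which records the whole prefix; the schemes in the following sections extract smaller, reusable subsets.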
==Efficiency of constraint learning==
The efficiency gain of constraint learning is balanced between two factors. On one hand, the more often a recorded constraint is violated, the more often backtracking avoids doing useless search. Small inconsistent subsets of the current partial solution are usually better than large ones, as they correspond to constraints that are easier to violate. On the other hand, finding a small inconsistent subset of the current partial evaluation may take time, and the benefit may not be balanced by the subsequent reduction of search time.
Size is however not the only feature of learned constraints to take into account. Indeed, a small constraint may be useless in a particular state of the search space because the values that violate it will not be encountered again. A larger constraint whose violating values are more similar to the current partial assignment may be preferred in such cases.
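
As an illustration of this trade-off, a solver might rank stored nogoods with a heuristic that rewards overlap with the current evaluation and penalizes size. This is a hypothetical sketch; the function name and weighting are assumptions, not an established method.

<syntaxhighlight lang="python">
def relevance_score(nogood, assignment, size_weight=1.0):
    # Count how many variable=value pairs of the nogood agree with the
    # current partial assignment: near-matching nogoods are likely to be
    # violated again soon.
    agreement = sum(1 for var, val in nogood if assignment.get(var) == val)
    # Smaller nogoods are easier to violate, so size is penalized.
    return agreement - size_weight * len(nogood)
</syntaxhighlight>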
==Graph-based learning==
If the algorithm proves that no value of a new variable <math>x_{k+1}</math> is consistent with the partial evaluation <math>x_1=a_1,\ldots,x_k=a_k</math>, this inconsistency depends only on the constraints involving <math>x_{k+1}</math>: the values of variables not occurring in any constraint with <math>x_{k+1}</math> play no role in it.
As a result, an inconsistent evaluation is the restriction of the truth evaluation of <math>x_1,\ldots,x_k</math> to variables that are in a constraint with <math>x_{k+1}</math>, provided that this constraint contains no unassigned variable.
Learning constraints representing these partial evaluations is called graph-based learning. It uses the same rationale as [[graph-based backjumping]]. These methods are called "graph-based" because they are based on pairs of variables that appear in the same constraint, which can be read off the graph associated with the constraint satisfaction problem.
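
A sketch of the corresponding extraction step, assuming <code>neighbors</code> maps each variable to the set of variables sharing a constraint with it (illustrative names, following the nogood representation used above); it can be plugged into the earlier sketch as <code>lambda a, v: graph_based_nogood(a, neighbors, v)</code>.

<syntaxhighlight lang="python">
def graph_based_nogood(assignment, neighbors, deadend_var):
    # Restrict the current evaluation to the assigned variables that share
    # a constraint with the dead-end variable; by the argument above, the
    # inconsistency depends only on these.
    return frozenset((var, val) for var, val in assignment.items()
                     if var in neighbors[deadend_var])
</syntaxhighlight>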
==Jumpback learning==
Jumpback learning is based on storing as constraints the inconsistent assignments that would be found by [[conflict-based backjumping]]. Whenever a partial assignment is found inconsistent, this algorithm selects the violated constraint that is minimal according to an ordering based on the order of instantiation of the variables. The evaluation restricted to the variables of this constraint is inconsistent and is usually shorter than the complete evaluation; jumpback learning stores it as a new constraint.
The ordering on constraints is based on the order of assignment of the variables: two constraints are compared starting from their most recently assigned variables, and the constraint whose latest variable was assigned earlier is the smaller one, with ties broken on the next most recently assigned variables.
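
The selection of the minimal violated constraint can be sketched as follows, assuming each constraint is identified by its scope (the tuple of its variables) and <code>order</code> records the instantiation index of each variable (illustrative names):

<syntaxhighlight lang="python">
def jumpback_nogood(assignment, violated_scopes, order):
    # Sort each scope's instantiation indices from most recent to earliest;
    # comparing these lists lexicographically makes the constraint whose
    # latest variable was assigned earliest the minimal one.
    def key(scope):
        return sorted((order[v] for v in scope), reverse=True)
    minimal = min(violated_scopes, key=key)
    # The evaluation restricted to the variables of this constraint is
    # inconsistent and becomes the learned constraint.
    return frozenset((v, assignment[v]) for v in minimal)
</syntaxhighlight>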
==Constraint maintenance==
Constraint learning algorithms differ not only on the choice of constraint corresponding to a given inconsistent partial evaluation, but also on the choice of which constraints to keep and which ones to discard.
In general, learning all inconsistencies in the form of constraints and keeping them indefinitely may exhaust the available memory and increase the cost of checking consistency of partial evaluations. This problem can be solved either by storing only some learned constraints or by occasionally discarding constraints.
''Bounded learning'' only stores constraints if the inconsistent partial evaluation they represent is smaller than a given constant number. ''Relevance-bounded learning'' discards (or does not store in the first place) constraints that are considered no longer relevant at the current point of the search; in particular, it discards or does not store all constraints that differ from the current partial evaluation on more than a given fixed number of variables.
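
Both policies can be sketched as follows; the bounds and function names are hypothetical parameters chosen for illustration:

<syntaxhighlight lang="python">
def maintain(learned, new_nogood, assignment, size_bound=3, relevance_bound=2):
    # Bounded learning: store a nogood only if it is small enough.
    if len(new_nogood) <= size_bound:
        learned.add(new_nogood)
    # Relevance-bounded learning: discard nogoods that differ from the
    # current partial evaluation (unassigned variables count as differing)
    # on more than relevance_bound variables.
    def differences(nogood):
        return sum(1 for var, val in nogood if assignment.get(var) != val)
    learned -= {ng for ng in learned if differences(ng) > relevance_bound}
    return learned
</syntaxhighlight>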
==See also==
*[[Backjumping]]
==References==
*{{cite book
| first=Rina
| last=Dechter
|authorlink = Rina Dechter
| title=Constraint Processing
| publisher=Morgan Kaufmann
| year=2003
| url=http://www.ics.uci.edu/~dechter/books/index.html
| isbn=1-55860-890-7
}}
[[Category:Constraint programming]]