==Graph-based learning==
 
If the algorithm proves all values of <math>x_{k+1}</math> to be inconsistent with <math>x_1=a_1,\ldots,x_k=a_k</math>, then this evaluation was consistent, as otherwise the algorithm would not have evaluated <math>x_{k+1}</math> at all; as a result, the constraints violated by a value of <math>x_{k+1}</math> together with <math>x_1=a_1,\ldots,x_k=a_k</math> all contain <math>x_{k+1}</math>.
 
As a result, an inconsistent partial evaluation is the restriction of the truth evaluation of <math>x_1,\ldots,x_k</math> to variables that are in a constraint with <math>x_{k+1}</math>, provided that this constraint contains no unassigned variable.
 
Learning constraints representing these partial evaluations is called graph-based learning. It uses the same rationale as [[graph-based backjumping]]. These methods are called "graph-based" because they are based on which pairs of variables appear in the same constraint, which can be determined from the graph associated with the constraint satisfaction problem.
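
The restriction step can be sketched in Python as follows. This is illustrative only: the representation of a constraint as a pair of a variable tuple and a relation, and all names, are assumptions, not part of any standard solver API.

<syntaxhighlight lang="python">
def graph_based_learned_constraint(assignment, dead_end_var, constraints):
    """Restrict an inconsistent partial evaluation to the variables that
    share a fully assigned constraint with the dead-end variable.

    assignment   -- dict mapping x_1..x_k to a_1..a_k
    dead_end_var -- x_{k+1}, all of whose values proved inconsistent
    constraints  -- iterable of (scope, relation) pairs, where scope is
                    a tuple of variables
    """
    relevant = set()
    for scope, _relation in constraints:
        # Only constraints mentioning x_{k+1} can be violated by its values.
        if dead_end_var not in scope:
            continue
        others = [v for v in scope if v != dead_end_var]
        # Use a constraint only if it contains no unassigned variable.
        if all(v in assignment for v in others):
            relevant.update(others)
    # The restriction of the evaluation to these variables is inconsistent;
    # its negation can be stored as a learned constraint.
    return {v: assignment[v] for v in relevant}
</syntaxhighlight>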
 
==Jumpback learning==
 
Jumpback learning is based on storing as constraints the inconsistent assignments that would be found by [[conflict-based backjumping]]. Whenever a partial assignment is found inconsistent, this algorithm selects the violated constraint that is minimal according to an ordering based on the order of instantiation of variables. The evaluation restricted to the variables that are in this constraint is inconsistent and is usually shorter than the complete evaluation. Jumpback learning stores this fact as a new constraint.
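
A minimal Python sketch, assuming the violated constraints are given as variable tuples and the instantiation order as a list; all names are hypothetical, and the particular ordering used to pick the minimal constraint is one possible choice, not the only one.

<syntaxhighlight lang="python">
def jumpback_learned_constraint(assignment, order, violated_scopes):
    """Restrict the current evaluation to the variables of the violated
    constraint that is minimal with respect to the instantiation order.

    assignment      -- dict of currently assigned variables to values
    order           -- list of variables in order of instantiation
    violated_scopes -- variable tuples of the violated constraints
    """
    rank = {v: i for i, v in enumerate(order)}
    # One possible ordering: compare scopes by the instantiation ranks of
    # their variables, latest-instantiated first, and take the minimum.
    def scope_key(scope):
        return sorted((rank[v] for v in scope), reverse=True)
    minimal = min(violated_scopes, key=scope_key)
    # This restricted evaluation is inconsistent and usually shorter than
    # the complete evaluation; store it as a new constraint.
    return {v: assignment[v] for v in minimal}
</syntaxhighlight>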
 
==Meta-techniques==
 
For a given style of learning, some choices about the use of the learned constraints arise. In general, learning all inconsistencies as constraints and keeping them indefinitely may cause memory problems. This problem can be overcome either by not learning all constraints or by discarding them when they are no longer considered useful. ''Bounded learning'' only stores constraints if the inconsistent partial evaluation they represent is smaller than a given constant number. ''Relevance-bounded learning'' discards constraints that are no longer considered relevant; in particular, it discards all constraints that represent inconsistent partial evaluations that differ from the current partial evaluation on more than a given fixed number of variables.
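
Both policies can be phrased as simple predicates over the stored partial evaluations, as in the following sketch. The names are hypothetical, and learned constraints are represented here by the inconsistent partial evaluations they forbid.

<syntaxhighlight lang="python">
def should_store(learned_evaluation, size_bound):
    """Bounded learning: store a constraint only if the inconsistent
    partial evaluation it represents is small enough."""
    return len(learned_evaluation) <= size_bound

def prune_irrelevant(learned_evaluations, current_assignment, bound):
    """Relevance-bounded learning: discard constraints whose partial
    evaluation differs from the current one on more than `bound`
    variables (an unassigned variable counts as differing)."""
    return [
        stored for stored in learned_evaluations
        if sum(1 for v, a in stored.items()
               if current_assignment.get(v) != a) <= bound
    ]
</syntaxhighlight>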
 
==See also==