The ordering on constraints is based on the order in which variables are assigned. In particular, the lesser of two constraints is the one whose latest non-common variable was instantiated first. When an inconsistent assignment is reached, jumpback learning selects the violated constraint that is minimal according to this ordering, and restricts the current assignment to its variables. The constraint expressing the inconsistency of this restricted assignment is stored.
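The following sketch illustrates this selection and storage step in Python. The representation of a constraint (a scope of variables plus a predicate), the helper names, and the no-good format are assumptions made for illustration only, not part of any particular solver.

<syntaxhighlight lang="python">
from collections import namedtuple
from functools import cmp_to_key

# Assumed representation: a constraint has a scope (tuple of variables)
# and a predicate over the values assigned to that scope.
Constraint = namedtuple("Constraint", ["scope", "predicate"])

def _cmp(c1, c2, depth):
    # A constraint is lesser than another if its latest non-common variable
    # was instantiated earlier; depth maps variable -> instantiation position.
    common = set(c1.scope) & set(c2.scope)
    latest1 = max((depth[v] for v in c1.scope if v not in common), default=-1)
    latest2 = max((depth[v] for v in c2.scope if v not in common), default=-1)
    return latest1 - latest2

def jumpback_learn(violated, assignment, order, learned_nogoods):
    """Select the minimal violated constraint, restrict the current
    assignment to its variables, and store the restriction as a no-good."""
    depth = {v: i for i, v in enumerate(order)}
    chosen = min(violated, key=cmp_to_key(lambda a, b: _cmp(a, b, depth)))
    # All variables in the scope of a violated constraint are assigned.
    nogood = {v: assignment[v] for v in chosen.scope}
    learned_nogoods.append(nogood)  # forbid this partial assignment later
    return nogood
</syntaxhighlight>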
==Constraint maintenance==
Constraint learning algorithms differ not only in the choice of constraint corresponding to a given inconsistent partial evaluation, but also in the choice of which constraints they maintain and which they discard.
For a given style of learning, some choices about the use of the learned constraints arise. In general, learning all inconsistencies in the form of constraints and keeping them indefinitely may exhaust the available memory and increase the cost of checking consistency of partial evaluations. These problems can be solved either by not storing all learned constraints or by occasionally discarding constraints that are considered no longer useful.

''Bounded learning'' only stores constraints if the inconsistent partial evaluation they represent is smaller than a given constant number. ''Relevance-bounded learning'' discards constraints that are considered not relevant; in particular, it discards all constraints that represent inconsistent partial evaluations differing from the current partial evaluation on more than a given fixed number of variables.
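The two policies can be sketched as simple filters over a store of learned no-goods. The sketch below assumes a no-good and the current partial evaluation are both represented as dictionaries mapping variables to values; the function and parameter names are illustrative.

<syntaxhighlight lang="python">
def keep_bounded(nogood, size_bound):
    """Bounded learning: store a no-good only if the inconsistent partial
    evaluation it represents involves at most size_bound variables."""
    return len(nogood) <= size_bound

def still_relevant(nogood, current, relevance_bound):
    """Relevance-bounded learning: keep a no-good only while it differs from
    the current partial evaluation on at most relevance_bound variables."""
    differing = sum(
        1 for var, val in nogood.items()
        if var not in current or current[var] != val
    )
    return differing <= relevance_bound

def prune_store(learned, current, relevance_bound):
    """Discard no-goods no longer relevant to the current search state."""
    return [ng for ng in learned if still_relevant(ng, current, relevance_bound)]
</syntaxhighlight>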
==See also==