==Graph-based learning==
==Jumpback learning==
Jumpback learning is based on storing as constraints the inconsistent assignments that would be found by [[conflict-based backjumping]]. Whenever a partial assignment is found inconsistent, this algorithm selects the violated constraint that is minimal according to an ordering based on the order of instantiation of variables. The evaluation restricted to the variables of this constraint is inconsistent, and is usually shorter than the complete evaluation. Jumpback learning stores this fact as a new constraint.
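The selection step above can be sketched as follows. This is a minimal illustration, assuming constraints are represented as (scope, predicate) pairs and variables are instantiated in a known order; the function names and data representation are assumptions for the example, not part of any standard library.

```python
# Illustrative sketch of jumpback learning (all names are assumptions).
# A constraint is a pair (scope, predicate); a partial assignment is a
# dict mapping each instantiated variable to its value.

def violated_constraints(constraints, assignment):
    """Constraints whose scope is fully assigned and whose predicate fails."""
    return [(scope, pred) for scope, pred in constraints
            if all(v in assignment for v in scope) and not pred(assignment)]

def learn_jumpback_nogood(constraints, assignment, order):
    """Select the violated constraint that is minimal with respect to the
    instantiation order, and return the restriction of the assignment to
    its variables as a new learned constraint (nogood)."""
    violated = violated_constraints(constraints, assignment)
    if not violated:
        return None
    # Rank a constraint by the instantiation positions of its variables,
    # latest-instantiated variable first, and take the minimal one.
    def rank(constraint):
        scope, _ = constraint
        return sorted((order.index(v) for v in scope), reverse=True)
    scope, _ = min(violated, key=rank)
    # The restricted evaluation is inconsistent: store it as a nogood.
    return {v: assignment[v] for v in scope}
```

For instance, with constraints x1 &ne; x2 and x2 &ne; x3 and the inconsistent partial assignment x1 = 1, x2 = 1, x3 = 2, the sketch returns the shorter nogood {x1: 1, x2: 1} rather than the full three-variable evaluation.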
==Meta-techniques==
For a given style of learning, some choices arise about how the learned constraints are used. In general, learning all inconsistencies in the form of constraints and keeping them indefinitely may cause memory problems. This problem can be overcome either by not learning all constraints or by discarding constraints when they are no longer considered useful. ''Bounded learning'' only stores a constraint if the inconsistent partial evaluation it represents is smaller than a given constant number. ''Relevance-bounded learning'' discards constraints that are considered no longer relevant; in particular, it discards all constraints representing inconsistent partial evaluations that differ from the current partial evaluation on more than a given fixed number of variables.
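The two policies can be sketched as simple filters over a store of learned constraints (nogoods). This is an illustrative sketch under the assumption that a nogood is a dict mapping variables to values; the function names and the bound <code>k</code> are assumptions for the example.

```python
# Illustrative sketch of bounded and relevance-bounded learning
# (all names are assumptions). A nogood is a dict var -> value
# representing an inconsistent partial evaluation.

def bounded_store(nogoods, new_nogood, k):
    """Bounded learning: store a nogood only if it involves
    at most k variables."""
    if len(new_nogood) <= k:
        nogoods.append(new_nogood)
    return nogoods

def relevance_prune(nogoods, current, k):
    """Relevance-bounded learning: keep only the nogoods that differ
    from the current partial evaluation on at most k variables.
    A variable differs if it is unassigned in `current` or assigned
    a different value."""
    def differing(nogood):
        return sum(1 for v, val in nogood.items() if current.get(v) != val)
    return [ng for ng in nogoods if differing(ng) <= k]
```

With bound k = 1, for example, a stored nogood {x1: 1, x2: 1} survives pruning under the current evaluation {x1: 1, x2: 2} (one differing variable), while a nogood mentioning two variables that are unassigned or assigned differently is discarded.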
==See also==