Constraint learning
The efficiency of a constraint learning algorithm is a trade-off between two factors. On the one hand, the more often a recorded constraint is violated, the more often backtracking avoids useless search. As a result, algorithms search for small inconsistent subsets of the current partial solution, as the constraints corresponding to them are easier to violate. On the other hand, finding a small inconsistent subset of the current partial evaluation may take time, and the benefit may not be balanced by the subsequent reduction of the search time.
 
Various constraint learning techniques exist, differing in the strictness of the recorded constraints and in the cost of finding them.
 
==Graph-based learning==
 
Graph-based learning uses the same rationale as graph-based [[backjumping]]: if the algorithm proves all values of <math>x_{k+1}</math> to be inconsistent with <math>x_1,\ldots,x_k</math>, then <math>x_1,\ldots,x_k</math> was consistent (otherwise the algorithm would not have evaluated <math>x_{k+1}</math>), but each value of <math>x_{k+1}</math> violates some constraint. A smaller inconsistent partial evaluation can be found by restricting <math>x_1,\ldots,x_k</math> to the variables that share a constraint with <math>x_{k+1}</math>, provided that this constraint contains no unassigned variable.
 
This method is called "graph-based" because the information on which pairs of variables appear in the same constraint can be read off the graph associated with the constraint satisfaction problem.
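The restriction step above can be sketched in code. The following is a minimal, hypothetical illustration (the representation of constraints as (scope, predicate) pairs and the function name <code>graph_based_nogood</code> are assumptions for the example, not a standard API): given a partial assignment under which every value of a variable fails, it collects the assigned variables that share a fully-assigned constraint with that variable, yielding the smaller inconsistent partial evaluation to be recorded as a new constraint.

```python
def graph_based_nogood(assignment, constraints, dead_var):
    """Return the restriction of `assignment` (a dict var -> value) to the
    variables that share a constraint with `dead_var`, considering only
    constraints whose other variables are all assigned.

    `constraints` is a list of (scope, predicate) pairs, where `scope` is a
    tuple of variable names; only the scopes matter here, mirroring the fact
    that graph-based learning uses the constraint graph alone.
    """
    relevant = set()
    for scope, _predicate in constraints:
        if dead_var in scope:
            others = [v for v in scope if v != dead_var]
            # Only constraints with no unassigned variable contribute.
            if all(v in assignment for v in others):
                relevant.update(others)
    # This sub-assignment is inconsistent and can be recorded as a constraint.
    return {v: assignment[v] for v in relevant}
```

For instance, if <code>x3</code> has no consistent value and shares constraints with <code>x1</code>, <code>x2</code>, and the unassigned <code>x4</code>, the recorded constraint forbids only the current values of <code>x1</code> and <code>x2</code>, regardless of any other assigned variables.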
 
==See also==