Constraint learning
 
On the other hand, if a subset of this evaluation is inconsistent, the corresponding constraint may be useful in the subsequent search, as the same subset of the partial evaluation may occur again. For example, the algorithm may encounter an evaluation extending the subset <math>x_2=a_2, x_5=a_5, x_{k-1}=a_{k-1}</math> of the previous partial evaluation. If this subset is inconsistent and the algorithm has stored this fact in the form of a constraint, no further search is needed to conclude that the new partial evaluation cannot be extended to form a solution.
 
{| cellpadding=20
|-
| [[Image:Constraint-learning-1.svg]]
| [[Image:Constraint-learning-2.svg]]
| [[Image:Constraint-learning-3.svg]]
|-
| Search has reached a dead end
| Inconsistency may be caused by the values of <math>x_1</math> and <math>x_4</math> only
| If this fact is stored in a constraint, when reaching the same values of <math>x_1</math> and <math>x_4</math>, no further search is needed from this point
|}
 
The efficiency of a constraint learning algorithm is a balance between two factors. On the one hand, the more often a recorded constraint is violated, the more often it blocks backtracking from doing useless search. As a result, algorithms search for small inconsistent subsets of the current partial evaluation, as the corresponding constraints are easier to violate. On the other hand, finding a small inconsistent subset of the current partial evaluation may itself take time, and the benefit may not be balanced by the subsequent reduction of search time.
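One way this idea is realized is graph-based learning: when every value of a variable directly violates some constraint, only the current assignments to that variable's neighbours in the constraint graph can be responsible, so that small partial evaluation is recorded as a new constraint. The sketch below illustrates this on an assumed toy instance; all names (<code>make_nogood</code>, <code>backtrack</code>, etc.) are illustrative, not taken from a particular solver.

```python
# A minimal sketch of graph-based constraint learning in backtracking
# search (illustrative toy code, not a production solver).
# Constraints are (scope, predicate) pairs; a learned nogood is stored
# in the same form, so it participates in later consistency checks.

def make_nogood(items):
    """Turn a forbidden partial evaluation into an ordinary constraint."""
    scope = tuple(v for v, _ in items)
    forbidden = tuple(a for _, a in items)
    return (scope, lambda *vals: vals != forbidden)

def violated(assign, constraints):
    """True if some fully instantiated constraint is violated by assign."""
    return any(all(v in assign for v in scope)
               and not pred(*(assign[v] for v in scope))
               for scope, pred in constraints)

def assigned_neighbours(var, constraints, assign):
    """Assigned variables sharing a constraint with var."""
    return {v for scope, _ in constraints if var in scope
            for v in scope if v != var and v in assign}

def backtrack(order, domains, constraints, assign, learned):
    """Depth-first search that records nogoods at dead ends."""
    if len(assign) == len(order):
        return dict(assign)
    var = order[len(assign)]
    every_value_failed_directly = True
    for val in domains[var]:
        assign[var] = val
        if not violated(assign, constraints + learned):
            every_value_failed_directly = False
            sol = backtrack(order, domains, constraints, assign, learned)
            if sol is not None:
                return sol
        del assign[var]
    if every_value_failed_directly:
        # Dead end caused only by constraints involving var: the values of
        # var's neighbours in the constraint graph suffice to explain it,
        # so that (small) partial evaluation becomes a new constraint.
        culprits = assigned_neighbours(var, constraints + learned, assign)
        learned.append(make_nogood(sorted((v, assign[v]) for v in culprits)))
    return None
```

On an unsatisfiable triangle (three mutually unequal variables over a two-value domain), the search fails and records nogoods over pairs of variables; in larger problems, such recorded constraints prune later branches that repeat the same partial evaluation.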