Backtracking algorithms work by choosing an unassigned variable and recursively solving the problems obtained by assigning a value to this variable. Whenever the current partial solution is found to be inconsistent, the algorithm goes back to the previously assigned variable, as expected by recursion. A constraint learning algorithm differs in that, before backtracking, it tries to record some information in the form of a new constraint. This can reduce the subsequent search: if the search later reaches another partial solution that is inconsistent with this new constraint, the algorithm can backtrack from it immediately, whereas the original backtracking algorithm would continue searching below it.
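As a rough illustration of this scheme, the following Python sketch models constraints as Boolean tests over a partial assignment and learned constraints ("nogoods") as sets of variable/value pairs; the function names and problem encoding are illustrative rather than taken from any particular solver.

<syntaxhighlight lang="python">
# Illustrative sketch of backtracking search extended with constraint learning
# ("nogood recording"). Constraints are predicates over a partial assignment;
# a learned constraint (nogood) is a set of variable/value pairs that cannot
# all hold in any solution.

def consistent(assignment, constraints):
    """A partial assignment is consistent if no constraint rejects it."""
    return all(check(assignment) for check in constraints)

def violates_nogood(assignment, nogoods):
    """True if the partial assignment contains some learned nogood."""
    items = set(assignment.items())
    return any(nogood <= items for nogood in nogoods)

def solve(variables, domains, constraints, assignment=None, nogoods=None):
    if assignment is None:
        assignment = {}
    if nogoods is None:
        nogoods = []          # learned constraints, kept for the whole search

    if len(assignment) == len(variables):
        return dict(assignment)           # every variable assigned: solution

    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Reject the extended assignment if it violates an original
        # constraint or any constraint learned earlier in the search.
        if consistent(assignment, constraints) and not violates_nogood(assignment, nogoods):
            result = solve(variables, domains, constraints, assignment, nogoods)
            if result is not None:
                return result
        del assignment[var]

    # Every value of `var` failed: before backtracking, record that the
    # current partial assignment cannot be extended to a solution. Storing
    # the whole prefix is the simplest choice and is used here only for
    # illustration; as noted below, such a constraint is of limited use
    # on its own, since the same prefix does not recur during the search.
    if assignment:
        nogoods.append(frozenset(assignment.items()))
    return None

# Tiny usage example: three variables with domain {0, 1} that must be
# pairwise different, which is unsatisfiable.
def neq(x, y):
    return lambda a: x not in a or y not in a or a[x] != a[y]

variables = ["x1", "x2", "x3"]
domains = {v: [0, 1] for v in variables}
constraints = [neq("x1", "x2"), neq("x1", "x3"), neq("x2", "x3")]
print(solve(variables, domains, constraints))   # prints None (no solution)
</syntaxhighlight>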
If the partial solution <math>x_1=a_1,\ldots,x_k=a_k</math> is inconsistent, the problem instance implies the constraint stating that <math>x_i=a_i</math> cannot hold for all <math>i \in [1,k]</math> at the same time. However, recording this constraint as it stands is not useful, since this exact partial solution will never be encountered again due to the way backtracking proceeds.
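For example, if the partial solution <math>x_1=1, x_2=2</math> is found inconsistent, the implied constraint states that <math>x_1=1</math> and <math>x_2=2</math> cannot hold together, which can equivalently be written as the clause <math>x_1 \neq 1 \vee x_2 \neq 2</math>; a backtracking search never re-creates this exact pair of assignments, so the stored constraint would never be applicable again.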