Constraint logic programming

This is an old revision of this page, as edited by Tizio at 12:35, 6 March 2006 (made precise that para is the overview and the following are details). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Constraint logic programming is a variant of logic programming that incorporates constraints as used in constraint satisfaction.

A constraint logic program is a logic program that includes constraints in the bodies of its clauses. A clause can be used to prove a goal if its constraints are satisfied and its literals can be proved; for example, the clause A(X,Y) :- X+Y>0, B(X), C(Y) can be used to prove A(X,Y) if X+Y>0 holds and both B(X) and C(Y) can be proved. More precisely, the set of constraints collected from the clauses used in a derivation must be satisfiable for the derivation to be valid.

When the interpreter scans the body of a clause, it backtracks if a constraint is not satisfied or a literal cannot be proved. Constraints and literals are handled differently: literals are proved by recursively evaluating other clauses, while constraints are checked by placing them in a set called the constraint store, which is required to remain satisfiable. The constraint store thus contains all constraints assumed satisfiable so far during execution.

If the constraint store becomes unsatisfiable, the interpreter backtracks, as the constraints accumulated along the current derivation cannot be jointly satisfied. In practice, some form of local consistency is used as an efficient approximation of satisfiability. As a consequence, the goal is truly proved only if the constraint store is actually satisfiable.

Formally, constraint logic programs are like regular logic programs, but the body of a clause can contain:

  1. logic programming literals (the regular literals of logic programming)
  2. constraints
  3. labeling literals

During evaluation, a pair is maintained. The first element is the current goal; the second element is the constraint store. The current goal contains the literals the interpreter is trying to prove; the constraint store contains all constraints the interpreter has assumed satisfiable so far.

Initially, the current goal is the goal to prove and the constraint store is empty. The algorithm proceeds by iteratively removing the first element of the goal and analyzing it. This analysis may result in a failure (causing backtracking), and may place new literals in front of the goal or add new constraints to the constraint store.

More precisely, each step of the algorithm is as follows. The first element of the goal is considered and removed from the current goal. If it is a constraint, it is added to the constraint store. If it is a regular literal, it is treated as in ordinary logic programming: a clause whose head has the same top-level predicate as the literal is chosen; the body of this clause is placed in front of the current goal; and equality between the literal and the head of the clause is added to the constraint store.
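The step above can be sketched in Python. This is a toy illustration, not a real CLP system: variables, unification, and the equality constraint between literal and clause head are omitted, constraints are plain Python predicates, and the store is checked by fully evaluating every constraint rather than by local consistency. All names are illustrative.

```python
# Toy sketch of one derivation step of a constraint logic program interpreter.
# A goal item is either ('c', check) for a constraint (check is a 0-argument
# callable) or ('l', name) for a regular literal.

def step(goal, store, clauses):
    """Process the first element of the goal.

    goal    : list of goal items (constraints and literals)
    store   : list of constraints assumed satisfiable so far
    clauses : dict mapping a literal name to the clause body proving it
    Returns the new (goal, store) pair, or None to signal backtracking.
    """
    kind, item = goal[0]
    rest = goal[1:]
    if kind == 'c':                           # a constraint: move it to the store
        new_store = store + [item]
        if not all(c() for c in new_store):   # store check (here: full evaluation)
            return None                       # unsatisfiable store -> backtrack
        return rest, new_store
    else:                                     # a regular literal: expand via a clause
        if item not in clauses:
            return None                       # no clause applies -> backtrack
        return clauses[item] + rest, store    # clause body goes in front of the goal

# Example: p is proved by a trivially true constraint followed by q,
# and q is proved by a clause with an empty body.
clauses = {'p': [('c', lambda: 1 > 0), ('l', 'q')], 'q': []}
state = ([('l', 'p')], [])
while state is not None and state[0]:         # iterate until success or failure
    state = step(state[0], state[1], clauses)
```

After the loop, an empty goal with a consistent store means the derivation succeeded; a None state would have meant backtracking.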

Some checks are performed during these operations. In particular, the constraint store is checked for consistency every time a new constraint is added to it. In principle, the algorithm should backtrack whenever the constraint store is unsatisfiable; however, checking full satisfiability at each step would be inefficient. For this reason, a form of local consistency is checked instead.
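A minimal sketch of the idea, using node consistency (the simplest form of local consistency) on unary constraints: pruning domains constraint-by-constraint is cheap, and an emptied domain proves unsatisfiability early. Note the converse does not hold: nonempty domains after pruning do not guarantee that the store is satisfiable. The representation below is an assumption for illustration.

```python
# Node consistency as a cheap approximation of satisfiability:
# drop from each variable's domain the values violating its unary constraints.

def prune(domains, unary_constraints):
    """Enforce node consistency on the given domains.

    domains           : dict mapping a variable to a set of candidate values
    unary_constraints : list of (variable, predicate) pairs
    Returns the pruned domains, or None if some domain becomes empty
    (the store is certainly unsatisfiable, so the interpreter backtracks).
    """
    pruned = {v: set(d) for v, d in domains.items()}
    for var, ok in unary_constraints:
        pruned[var] = {x for x in pruned[var] if ok(x)}
        if not pruned[var]:
            return None          # local inconsistency detected: backtrack
    return pruned

doms = {'X': {0, 1, 2, 3}, 'Y': {0, 1, 2}}
cons = [('X', lambda x: x > 1), ('Y', lambda y: y != 1)]
result = prune(doms, cons)       # X is narrowed to {2, 3}, Y to {0, 2}
```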

When the current goal is empty, a regular logic programming interpreter stops and outputs the current substitution. Under the same conditions, a constraint logic programming interpreter also stops, and its output may be the current domains as reduced via the local consistency conditions on the constraint store. Actual satisfiability, and the search for a solution, is enforced via labeling literals: whenever the interpreter encounters a labeling literal during the evaluation of a clause, it runs a satisfiability checker on the current constraint store to try to find a satisfying assignment.
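What a labeling literal triggers can be sketched as a backtracking search over the current domains, under the simplifying assumption that constraints are Python predicates over a complete assignment. The function and variable names are illustrative, not part of any real CLP system.

```python
# Sketch of labeling: search the domains for one assignment
# satisfying every constraint in the store.

def label(domains, constraints, assignment=None):
    """Backtracking search for a satisfying assignment.

    domains     : dict mapping a variable to an iterable of candidate values
    constraints : list of predicates taking a complete assignment dict
    Returns a satisfying assignment, or None if the store is unsatisfiable.
    """
    assignment = dict(assignment or {})
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:                    # every variable has a value:
        return assignment if all(c(assignment) for c in constraints) else None
    var = unassigned[0]
    for value in domains[var]:            # try each candidate value in turn
        found = label(domains, constraints, {**assignment, var: value})
        if found is not None:
            return found
    return None                           # no value works: backtrack

doms = {'X': [0, 1, 2], 'Y': [0, 1, 2]}
cons = [lambda a: a['X'] + a['Y'] == 3, lambda a: a['X'] > a['Y']]
solution = label(doms, cons)              # finds {'X': 2, 'Y': 1}
```

This checks constraints only at complete assignments for simplicity; practical solvers interleave the value choices with constraint propagation to cut the search space.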
