{{Programming paradigms}}
 
'''Constraint logic programming''' is a form of [[constraint programming]], in which [[logic programming]] is extended to include concepts from [[constraint satisfaction]]. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is {{code|2=prolog|A(X,Y) :- X+Y>0, B(X), C(Y)}}. In this clause, {{code|2=prolog|X+Y>0}} is a constraint; <code>A(X,Y)</code>, <code>B(X)</code>, and <code>C(Y)</code> are [[Literal (mathematical logic)|literals]] as in regular logic programming. This clause states one condition under which the statement <code>A(X,Y)</code> holds: <code>X+Y</code> is greater than zero and both <code>B(X)</code> and <code>C(Y)</code> are true.
 
As in regular logic programming, programs are queried about the provability of a goal, which may contain constraints in addition to literals. A proof for a goal is composed of clauses whose bodies are satisfiable constraints and literals that can in turn be proved using other clauses. Execution is performed by an interpreter, which starts from the goal and [[Recursion|recursively]] scans the clauses trying to prove the goal. Constraints encountered during this scan are placed in a set called the '''constraint store'''. If this set is found to be unsatisfiable, the interpreter [[Backtracking|backtracks]], trying to use other clauses for proving the goal. In practice, satisfiability of the constraint store may be checked using an incomplete algorithm, which does not always detect inconsistency.
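The store-and-backtrack behaviour can be sketched in Python. This is an illustrative toy, not part of any CLP system: the store holds strict bounds on a single integer variable, and simple interval reasoning stands in for the satisfiability test.
<source lang="python">
# Toy constraint store over one integer variable X. Constraints are
# (op, bound) pairs with op in {'>', '<'}; satisfiability is decided by
# interval reasoning. All names here are illustrative.

def satisfiable(store):
    """True iff some integer X meets every constraint in the store."""
    lo = max((b for op, b in store if op == '>'), default=float('-inf'))
    hi = min((b for op, b in store if op == '<'), default=float('inf'))
    return lo + 1 < hi  # an integer strictly between the bounds exists

store = [('>', 0)]        # e.g. from a clause body containing X>0
assert satisfiable(store)
store.append(('<', 0))    # a later constraint X<0 makes the store inconsistent
assert not satisfiable(store)   # at this point the interpreter backtracks
</source>
The incomplete checks used in practice err in the other direction: they may report a store as possibly satisfiable when it is not, but never reject a satisfiable one.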
 
Formally, constraint logic programs are like regular logic programs, but the body of clauses can contain constraints, in addition to the regular logic programming literals. As an example, <code>X>0</code> is a constraint, and is included in the last clause of the following constraint logic program.
<source lang="prolog">
B(X,1):- X<0.
B(X,Y):- X=1, Y>0.
A(X,Y):- X>0, B(X,Y).
</source>
 
Like in regular logic programming, evaluating a goal such as <code>A(X,1)</code> requires evaluating the body of the last clause with <code>Y=1</code>. Like in regular logic programming, this in turn requires proving the goal <code>B(X,1)</code>. In contrast to regular logic programming, this also requires a constraint to be satisfied: <code>X>0</code>, the constraint in the body of the last clause (in regular logic programming, <code>X>0</code> cannot be proved unless <code>X</code> is bound to a fully ground term, and execution of the program fails if that is not the case).
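This evaluation can be traced by hand in Python. The sketch below is illustrative only: the constraint store is kept as integer bounds <code>(lo, hi)</code> on <code>X</code>, meaning <code>lo < X < hi</code>, with <code>None</code> standing for an absent bound.
<source lang="python">
# Hand-tracing the goal A(X,1) over the three clauses above.

def consistent(lo, hi):
    """Some integer lies strictly between the bounds."""
    return lo is None or hi is None or lo + 1 < hi

def solve_A(y):
    solutions = []
    lo, hi = 0, None                 # clause A(X,Y):-X>0,B(X,Y) adds X>0
    # B clause 1: B(X,1):-X<0 would add X<0, clashing with X>0 -> backtrack
    if consistent(lo, 0):
        solutions.append('X<0')
    # B clause 2: B(X,Y):-X=1,Y>0 binds X=1 (allowed by X>0) if y>0
    if y > 0 and consistent(lo, None):
        solutions.append('X=1')
    return solutions

assert solve_A(1) == ['X=1']   # only the second clause for B succeeds
assert solve_A(0) == []        # Y>0 fails: no proof of A(X,0)
</solve_A>
</source>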
 
Formally, the semantics of constraint logic programming is defined in terms of ''derivations''. A transition is a pair of goal/store pairs, denoted <math>\langle G,S \rangle \rightarrow \langle G',S' \rangle</math>. Such a pair states the possibility of going from state <math>\langle G,S \rangle</math> to state <math>\langle G',S' \rangle</math>. Such a transition is possible in three cases:
 
* an element of {{mvar|G}} is a constraint {{mvar|C}}, <math>G'=G \backslash \{C\}</math> and <math>S'=S \cup \{C\}</math>; in other words, a constraint can be moved from the goal to the constraint store
* an element of {{mvar|G}} is a literal <math>L(t_1,\ldots,t_n)</math>, there exists a clause that, rewritten using new variables, is <math>L(t_1',\ldots,t_n') :- B</math>, <math>G'</math> is {{mvar|G}} with <math>L(t_1,\ldots,t_n)</math> replaced by <math>t_1=t_1',\ldots,t_n=t_n',B</math>, and <math>S'=S</math>; in other words, a literal can be replaced by the body of a fresh variant of a clause having the same predicate in the head, adding the body of the fresh variant and the above equalities of terms to the goal
* {{mvar|S}} and <math>S'</math> are equivalent according to the specific constraint semantics
 
A sequence of transitions is a derivation. A goal {{mvar|G}} can be proved if there exists a derivation from <math>\langle G, \emptyset \rangle</math> to <math>\langle \emptyset, S \rangle</math> for some satisfiable constraint store {{mvar|S}}. This semantics formalizes the possible evolutions of an interpreter that arbitrarily chooses the literal of the goal to process and the clause to replace literals. In other words, a goal is proved under this semantics if there exists a sequence of choices of literals and clauses, among the possibly many ones, that leads to an empty goal and satisfiable store.
 
Actual interpreters process the goal elements in a [[LIFO (computing)|LIFO]] order: elements are added in the front and processed from the front. They also choose the clause of the second rule according to the order in which they are written, and rewrite the constraint store when it is modified.
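The first two transition rules can be sketched as a small Python loop. This is a deliberately incomplete toy: the clause table is illustrative, fresh variants and the term equalities are omitted, and no satisfiability check or backtracking (the third rule) is performed.
<source lang="python">
# Goal elements are processed from the front; constraints move into the
# store, and a literal is replaced by the body of a clause with a matching
# head. Here clause bodies contain constraints only.

clauses = {'p': [('X', '>', 0)]}     # p :- X>0, as an illustrative clause

def derive(goal, store):
    """Run one derivation to completion; return the final store."""
    goal, store = list(goal), list(store)
    while goal:
        item = goal.pop(0)           # take the front element (LIFO order)
        if isinstance(item, tuple):  # a constraint: move it to the store
            store.append(item)
        else:                        # a literal: expand it with its clause
            goal = clauses[item] + goal
    return store

assert derive([('X', '<', 5), 'p'], []) == [('X', '<', 5), ('X', '>', 0)]
</source>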
==Program reformulations==
 
A given constraint logic program may be reformulated to improve its efficiency. A first rule is that labeling literals should be placed after as many constraints on the labeled variables as possible have been accumulated in the constraint store. While in theory {{code|2=prolog|A(X):-labeling(X),X>0}} is equivalent to {{code|2=prolog|A(X):-X>0,labeling(X)}}, the search that is performed when the interpreter encounters the labeling literal is on a constraint store that does not contain the constraint <code>X>0</code>. As a result, it may generate solutions, such as <code>X=-1</code>, that are later found not to satisfy this constraint. In the second formulation, on the other hand, the search is performed only when the constraint is already in the constraint store. As a result, the search only returns solutions that are consistent with it, taking advantage of the fact that additional constraints reduce the search space.
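The effect of the two orderings can be mimicked in Python, with labeling simulated as enumeration over a small finite domain (illustrative code, not a CLP labeling implementation):
<source lang="python">
# Labeling before vs after the constraint X>0 over a toy domain.

domain = range(-3, 4)

def label_then_constrain():
    tried = []
    for x in domain:            # labeling(X) first: blind enumeration
        tried.append(x)
        if x > 0:               # X>0 checked only after generating x
            return x, tried
    return None, tried

def constrain_then_label():
    tried = []
    for x in domain:
        if x > 0:               # X>0 is in the store before search begins
            tried.append(x)
            return x, tried
    return None, tried

assert label_then_constrain() == (1, [-3, -2, -1, 0, 1])  # 4 wasted tries
assert constrain_then_label() == (1, [1])                 # no wasted tries
</source>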
 
A second reformulation that can increase efficiency is to place constraints before literals in the body of clauses. Again, {{code|2=prolog|A(X):-B(X),X>0}} and {{code|2=prolog|A(X):-X>0,B(X)}} are in principle equivalent. However, the first may require more computation. For example, if the constraint store contains the constraint <code>X<-2</code>, the interpreter recursively evaluates <code>B(X)</code> in the first case; if it succeeds, it then finds out that the constraint store is inconsistent when adding <code>X>0</code>. In the second case, when evaluating that clause, the interpreter first adds <code>X>0</code> to the constraint store and then possibly evaluates <code>B(X)</code>. Since the constraint store after the addition of <code>X>0</code> turns out to be inconsistent, the recursive evaluation of <code>B(X)</code> is not performed at all.
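A toy rendition of the two clause orderings in Python, with the store constraint <code>X<-2</code> represented by the sample value <code>x = -3</code>. The name <code>recursive_B</code> is a hypothetical stand-in for the possibly expensive recursive evaluation of <code>B(X)</code>.
<source lang="python">
# Constraint after vs before the literal B(X).

calls = []

def recursive_B(x):
    calls.append(x)          # record that B(X) was actually evaluated
    return True

def literal_first(x):        # A(X) :- B(X), X>0
    return recursive_B(x) and x > 0

def constraint_first(x):     # A(X) :- X>0, B(X)
    return x > 0 and recursive_B(x)

assert literal_first(-3) is False and calls == [-3]   # B ran for nothing
calls.clear()
assert constraint_first(-3) is False and calls == []  # B never evaluated
</source>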
 
A third reformulation that can increase efficiency is the addition of redundant constraints. If the programmer knows (by whatever means) that the solution of a problem satisfies a specific constraint, they can include that constraint to cause inconsistency of the constraint store as soon as possible. For example, if it is known beforehand that the evaluation of <code>B(X)</code> will result in a positive value for <code>X</code>, the programmer may add <code>X>0</code> before any occurrence of <code>B(X)</code>. As an example, <code>A(X,Y):-B(X),C(X)</code> will fail on the goal <code>A(-2,Z)</code>, but this is only found out during the evaluation of the subgoal <code>B(X)</code>. On the other hand, if the above clause is replaced by {{code|2=prolog|A(X,Y):-X>0,B(X),C(X)}}, the interpreter backtracks as soon as the constraint <code>X>0</code> is added to the constraint store, which happens before the evaluation of <code>B(X)</code> even starts.
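The redundant-constraint example in Python form (illustrative only; <code>B</code> and <code>C</code> merely record that they ran, and <code>B</code> is assumed to hold only for positive arguments):
<source lang="python">
# A(X,Y):-B(X),C(X) versus A(X,Y):-X>0,B(X),C(X) on the goal A(-2,Z).

ran = []

def B(x):
    ran.append('B')
    return x > 0        # B happens to hold only for positive X

def C(x):
    ran.append('C')
    return True

def A_plain(x):                 # A(X,Y) :- B(X), C(X)
    return B(x) and C(x)

def A_redundant(x):             # A(X,Y) :- X>0, B(X), C(X)
    return x > 0 and B(x) and C(x)

assert A_plain(-2) is False and ran == ['B']   # failure found inside B
ran.clear()
assert A_redundant(-2) is False and ran == []  # failure before B starts
</source>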
 
==Constraint handling rules==
 
As described, the bottom-up approach has the advantage of not considering consequences that have already been derived. However, it may still derive consequences that are entailed by those already derived while not being equal to any of them. As an example, the bottom-up evaluation of the following program is infinite:
<source lang="prolog">
A(0).
A(X):-X>0.
A(X):-X=Y+1, A(Y).
</source>
 
The bottom-up evaluation algorithm first derives that <code>A(X)</code> is true for <code>X=0</code> and for <code>X>0</code>. In the second step, the first fact combined with the third clause allows the derivation of <code>A(1)</code>. In the third step, <code>A(2)</code> is derived, etc. However, these facts are already entailed by the fact that <code>A(X)</code> is true for any nonnegative <code>X</code>. This drawback can be overcome by checking facts for [[entailment]] before adding them to the current set of consequences: if a new consequence is already entailed by the set, it is not added. Since facts are stored as clauses, possibly with "local variables", entailment is restricted to the variables of their heads.
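The entailment check can be sketched in Python for this particular program. The encoding is illustrative: a fact is a set of integers written as <code>('eq', v)</code> for <code>A(v)</code> or <code>('gt', v)</code> for <code>A(X):-X>v</code>, and the third clause maps a fact through <code>Y -> Y+1</code>.
<source lang="python">
# One bottom-up step with an entailment check. Without the check, the loop
# would keep adding A(1), A(2), ... forever.

def shift(fact):                 # apply A(X) :- X = Y+1, A(Y) to one fact
    kind, v = fact
    return (kind, v + 1)         # the set {y+1 : y in fact}

def entailed(fact, facts):       # is 'fact' a subset of some stored fact?
    kind, v = fact
    return any(k == 'gt' and ((kind == 'gt' and v >= w) or
                              (kind == 'eq' and v > w))
               for k, w in facts)

facts = [('eq', 0), ('gt', 0)]   # first step: the two base clauses
new = [shift(f) for f in facts]  # proposes A(1) and A(X):-X>1
added = [f for f in new if not entailed(f, facts)]
assert added == []               # both entailed by X>0: fixpoint reached
</source>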