{{Short description|Logic programming with constraint satisfaction}}
{{Programming paradigms}}
'''Constraint logic programming''' is a form of [[constraint programming]], in which [[logic programming]] is extended to include concepts from [[constraint satisfaction]]. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is {{code|2=prolog|A(X,Y) :- X+Y>0, B(X), C(Y)}}. In this clause, {{code|2=prolog|X+Y>0}} is a constraint; <code>A(X,Y)</code>, <code>B(X)</code>, and <code>C(Y)</code> are [[Literal (mathematical logic)|literals]] as in regular logic programming. This clause states one condition under which the statement <code>A(X,Y)</code> holds: <code>X+Y</code> is greater than zero and both <code>B(X)</code> and <code>C(Y)</code> are true.
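For illustration, the clause above can be written as a runnable program in a system offering constraints over the reals, such as SWI-Prolog's <code>library(clpr)</code>; the facts <code>b/1</code> and <code>c/1</code> are made up for this sketch, and predicate names are lowercased to be syntactically valid Prolog:
<syntaxhighlight lang="prolog">
:- use_module(library(clpr)).      % constraint solving over the reals

% a(X,Y) holds when X+Y>0 and both b(X) and c(Y) hold
a(X, Y) :- { X + Y > 0 }, b(X), c(Y).
b(1.0).
c(2.0).

% ?- a(X, 2.0).
% The constraint { X + 2.0 > 0 } is placed in the constraint store before
% b(X) is attempted; the answer X = 1.0 then satisfies it.
</syntaxhighlight>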
* <math>S</math> and <math>S'</math> are equivalent according to the specific constraint semantics
Actual interpreters process the goal elements in a [[LIFO (computing)|LIFO]] order: elements are added in the front and processed from the front. They also choose the clause of the second rule according to the order in which they are written, and rewrite the constraint store when it is modified.
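This processing order can be sketched as a simplified meta-interpreter; the predicates <code>is_constraint/1</code>, <code>add_to_store/3</code>, and <code>body_list/2</code> are placeholders for the interpreter's internal operations and are not predicates of any particular system:
<syntaxhighlight lang="prolog">
% Goals are kept in a list used as a stack: the first element is selected,
% and the body of the chosen clause is pushed back onto the front (LIFO).
solve([], Store, Store).
solve([Goal|Rest], Store0, Store) :-
    (   is_constraint(Goal)                  % placeholder test
    ->  add_to_store(Goal, Store0, Store1),  % placeholder store update
        solve(Rest, Store1, Store)
    ;   clause(Goal, Body),                  % clauses are tried in textual order
        body_list(Body, BodyGoals),          % placeholder body-to-list conversion
        append(BodyGoals, Rest, Goals1),     % new goals go to the front
        solve(Goals1, Store0, Store)
    ).
</syntaxhighlight>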
However, the constraint store may also contain constraints in the form <code>t1!=t2</code>, if disequality <code>!=</code> between terms is allowed. When constraints over reals or finite domains are allowed, the constraint store may also contain ___domain-specific constraints such as <code>X+2=Y/2</code>.
The constraint store extends the concept of current substitution in two ways. First, it does not only contain the constraints derived from equating a literal with the head of a fresh variant of a clause, but also the constraints from the body of clauses. Second, it does not only contain constraints of the form <code>variable=value</code> but also constraints on the considered ___domain.
Domain-specific constraints may come into the constraint store both from the body of clauses and from equating a literal with a clause head: for example, if the interpreter rewrites the literal <code>A(X+2)</code> with a clause whose fresh variant head is <code>A(Y/2)</code>, the constraint <code>X+2=Y/2</code> is added to the constraint store. If a variable appears in a real or finite ___domain expression, it can only take a value in the reals or the finite ___domain. Such a variable cannot take as value a term made of a functor applied to other terms. The constraint store is unsatisfiable if a variable is required to take both a value of the specific ___domain and a functor applied to terms.
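For illustration, with SWI-Prolog's <code>library(clpfd)</code> for constraints over finite (integer) domains:
<syntaxhighlight lang="prolog">
:- use_module(library(clpfd)).

% ?- X #> 2, X = 5.
% X = 5.                  % 5 lies in the ___domain, so the store stays satisfiable

% ?- X #> 2, X = f(1).
% rejected: X is restricted to integer values by X #> 2, so binding it to the
% compound term f(1) makes the store unsatisfiable (whether this surfaces as
% failure or as a type error depends on the system).
</syntaxhighlight>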
The bottom-up evaluation strategy maintains the set of facts proved so far during evaluation. This set is initially empty. At each step, new facts are derived by applying a program clause to the existing facts, and are added to the set. For example, the bottom-up evaluation of the following program requires two steps:
{{sxhl|2=prolog|1=<nowiki/>
A(q).
B(X):-A(X).
}}
The set of consequences is initially empty. At the first step, <code>A(q)</code> is the only clause whose body can be proved (because it is empty), and <code>A(q)</code> is therefore added to the current set of consequences. At the second step, since <code>A(q)</code> is proved, the second clause can be used and <code>B(q)</code> is added to the consequences. Since no other consequence can be proved from <code>{A(q),B(q)}</code>, execution terminates.
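This fixpoint computation can be sketched in Prolog; the <code>rule/2</code> encoding of the program is only an illustrative representation, and the article's <code>A</code> and <code>B</code> are written in lowercase to be syntactically valid:
<syntaxhighlight lang="prolog">
% Illustrative encoding of the program: rule(Head, BodyGoals).
rule(a(q), []).
rule(b(X), [a(X)]).

% One bottom-up step: collect every head whose body goals are all known facts.
step(Known, Derived) :-
    findall(H, (rule(H, Body), satisfied(Body, Known)), Hs),
    sort(Hs, Derived).

satisfied([], _).
satisfied([G|Gs], Known) :- member(G, Known), satisfied(Gs, Known).

% Iterate until no new consequence appears, i.e. a fixpoint is reached.
fixpoint(Known, Fix) :-
    step(Known, Derived),
    union(Derived, Known, Next),
    (   Next == Known
    ->  Fix = Known
    ;   fixpoint(Next, Fix)
    ).

% ?- fixpoint([], Facts).
% Facts = [b(q), a(q)]    % a(q) is derived in the first step, b(q) in the second
</syntaxhighlight>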
The advantage of the bottom-up evaluation over the top-down one is that cycles of derivations do not produce an [[infinite loop]]. This is because adding a consequence to the current set of consequences that already contains it has no effect. As an example, adding a third clause to the above program generates a cycle of derivations in the top-down evaluation:
{{sxhl|2=prolog|1=<nowiki/>
A(q).
B(X):-A(X).
A(X):-B(X).
}}
In particular, while evaluating all answers to the goal <code>A(X)</code>, the top-down strategy would produce the following derivations:
{{sxhl|2=prolog|1=<nowiki/>
A(q)
A(q):-B(q), B(q):-A(q), A(q)
A(q):-B(q), B(q):-A(q), A(q):-B(q), B(q):-A(q), A(q)
}}
In other words, the only consequence <code>A(q)</code> is produced first, but then the algorithm cycles over derivations that do not produce any other answer. More generally, the top-down evaluation strategy may cycle over possible derivations while other derivations, possibly producing further answers, exist.
The bottom-up strategy does not have the same drawback, as re-deriving a consequence that is already in the set has no effect. On the above program, the bottom-up strategy starts by adding <code>A(q)</code> to the set of consequences; in the second step, <code>B(X):-A(X)</code> is used to derive <code>B(q)</code>; in the third step, the only facts that can be derived from the current consequences are <code>A(q)</code> and <code>B(q)</code>, which are however already in the set of consequences. As a result, the algorithm stops.
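With the same sketch as above, the cyclic clause can be added as a further <code>rule/2</code> fact without affecting termination:
<syntaxhighlight lang="prolog">
rule(a(X), [b(X)]).    % the cyclic clause A(X):-B(X) in the illustrative encoding

% ?- fixpoint([], Facts).
% Facts = [b(q), a(q)]  % the next step derives nothing new, so iteration stops
</syntaxhighlight>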
In the above example, the only facts used were ground literals. In general, every clause that only contains constraints in the body is considered a fact. For example, a clause such as <code>A(X):-X>0,X<10</code> counts as a fact, since its body contains only constraints.
As described, the bottom-up approach has the advantage of not considering consequences that have already been derived. However, it may still derive consequences that are entailed by those already derived while not being equal to any of them. As an example, the bottom-up evaluation of the following program is infinite:
==References==
{{reflist}}
{{Programming paradigms navbox}}
{{DEFAULTSORT:Constraint Logic Programming}}