When the interpreter evaluates the goal <code>solve(args)</code>, it places the rewritten body of the first clause in the current goal. Since the first goal is <code>constraints(X')</code>, the second clause is evaluated, and this operation moves all constraints first into the current goal and eventually into the constraint store. The literal <code>labeling(X')</code> is then evaluated, forcing a search for a solution of the constraint store. Since the constraint store contains exactly the constraints of the original constraint satisfaction problem, this operation searches for a solution of the original problem.
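A concrete program in this form can be written, for illustration, with SWI-Prolog's CLP(FD) library; the particular constraints below are only an example, and <code>labeling([], Vars)</code> is the library's form of the schematic <code>labeling(X')</code>:
:- use_module(library(clpfd)).
% Schematic solve/constraints/labeling structure; the constraints are illustrative.
solve([X,Y]) :- constraints([X,Y]), labeling([], [X,Y]).
% Evaluating this clause moves the constraints into the constraint store.
constraints([X,Y]) :- X in 1..10, Y in 1..10, X + Y #= 7, X #< Y.
The goal <code>solve([X,Y])</code> first fills the constraint store and only then starts the search, which enumerates the three solutions <code>X=1,Y=6</code>, <code>X=2,Y=5</code> and <code>X=3,Y=4</code>.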
==Program reformulations==
A given constraint logic program may be reformulated to improve its efficiency. A first rule is that labeling literals should be placed only after as many constraints on the labeled variables as possible have been accumulated in the constraint store. While in theory <code>A(X):-labeling(X),X>0</code> is equivalent to <code>A(X):-X>0,labeling(X)</code>, the search that is performed when the interpreter encounters the labeling literal is on a constraint store that does not contain the constraint <code>X>0</code>. As a result, it may generate solutions, such as <code>X=-1</code>, that are later found not to satisfy this constraint. In the second formulation, on the other hand, the search is performed only when the constraint is already in the constraint store. As a result, the search only returns solutions that are consistent with it, taking advantage of the fact that additional constraints reduce the search space.
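The effect of the ordering can be illustrated with SWI-Prolog's CLP(FD) library (a sketch; the predicate names are not taken from any particular program):
:- use_module(library(clpfd)).
% Poor ordering: label/1 enumerates -10, -9, ... and every negative
% candidate is rejected only afterwards by the constraint X #> 0.
a_poor(X) :- X in -10..10, label([X]), X #> 0.
% Better ordering: X #> 0 is already in the constraint store when the
% search starts, so label/1 only enumerates the values 1..10.
a_good(X) :- X in -10..10, X #> 0, label([X]).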
A third reformulation that can increase efficiency is the addition of redundant constraints. If the programmer knows (by whatever means) that the solution of a problem satisfies a specific constraint, they can include that constraint to cause inconsistency of the constraint store as soon as possible. For example, if it is known beforehand that the evaluation of <code>B(X)</code> will result in a positive value for <code>X</code>, the programmer may add <code>X>0</code> before any occurrence of <code>B(X)</code>. As an example, <code>A(X,Y):-B(X),C(X)</code> will fail on the goal <code>A(-2,Z)</code>, but this is only found out during the evaluation of the subgoal <code>B(X)</code>. On the other hand, if the above clause is replaced by <code>A(X,Y):-X>0,B(X),C(X)</code>, the interpreter backtracks as soon as the constraint <code>X>0</code> is added to the constraint store, which happens before the evaluation of <code>B(X)</code> even starts.
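In the same CLP(FD) notation, a sketch of this reformulation may look as follows; <code>b/1</code> and <code>c/1</code> stand for the hypothetical subgoals of the text (written in lower case, as Prolog requires), and their definitions are placeholders chosen only so that the example runs:
:- use_module(library(clpfd)).
% Placeholder subgoals that happen to require a positive argument.
b(X) :- X #> 1.
c(X) :- X #< 100.
% Without the redundant constraint, the failure of a(-2, Z) is detected
% only inside the subgoal b(X).
a(X, _Y) :- b(X), c(X).
% With the redundant constraint X #> 0 posted first, a_redundant(-2, Z)
% fails before b(X) is even evaluated.
a_redundant(X, _Y) :- X #> 0, b(X), c(X).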
==Bottom-up evaluation==
The standard strategy of evaluation of logic programs is [[top-down]] and [[depth-first]]: from the goal, a number of clauses are identified as being possibly able to prove the goal, and recursion over the literals of their bodies is performed. An alternative strategy is to start from the facts and use clauses to derive new facts; this strategy is called [[bottom-up]]. It is considered better than the top-down one when the aim is to produce all consequences of a given program, rather than to prove a single goal. In particular, finding all consequences of a program in the standard top-down and depth-first manner may not terminate, while the bottom-up evaluation strategy terminates.
The bottom-up evaluation strategy maintains the set of facts proved so far during evaluation. This set is initially empty. At each step, new facts are added to it, derived from the facts already in the set by means of a single clause of the program. For example, the bottom-up evaluation of the following program requires two steps:
A(q).
B(X):-A(X).
The set of consequences is initially empty. At the first step, the first clause is the only one whose body can be proved (because it is empty), and its head <code>A(q)</code> is therefore added to the current set of consequences. At the second step, since <code>A(q)</code> is proved, the second clause can be used and <code>B(q)</code> is added to the consequences. Since no other consequence can be proved from <code>{A(q),B(q)}</code>, execution terminates.
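A small executable sketch of this fixpoint computation can be written in plain Prolog; the article's schematic predicates <code>A</code> and <code>B</code> become lower case <code>a</code> and <code>b</code>, and the clause representation <code>rule/2</code> and the predicate names are only one possible choice:
% Clauses represented as rule(Head, Body); a fact has the empty body [].
rule(a(q), []).
rule(b(X), [a(X)]).
% A body is satisfied if each of its literals is among the facts derived so far.
satisfied([], _Facts).
satisfied([L|Ls], Facts) :- member(L, Facts), satisfied(Ls, Facts).
% One bottom-up step: collect every head whose body is satisfied.
step(Facts, Derived) :-
    findall(Head, (rule(Head, Body), satisfied(Body, Facts)), Derived).
% Iterate until no new fact is produced (sort/2 also removes duplicates).
bottom_up(Facts0, Fixpoint) :-
    step(Facts0, Derived),
    append(Facts0, Derived, All),
    sort(All, Facts1),
    (   Facts1 == Facts0
    ->  Fixpoint = Facts0
    ;   bottom_up(Facts1, Fixpoint)
    ).
The query <code>bottom_up([], Consequences)</code> terminates with <code>Consequences = [a(q), b(q)]</code>, mirroring the two steps described above.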
The advantage of the bottom-up evaluation over the top-down one is that cycles of derivations do not produce an [[infinite loop]]. This is because adding a consequence that is already contained in the current set of consequences has no effect. As an example, adding a third clause to the above program generates a cycle of derivations in the top-down evaluation:
A(q).
B(X):-A(X).
A(X):-B(X).
For example, while evaluating all answers to the goal <code>A(X)</code>, the top-down strategy would produce the following derivations:
A(q)
A(q):-B(q), B(q):-A(q), A(q)
A(q):-B(q), B(q):-A(q), A(q):-B(q), B(q):-A(q), A(q)
In other words, the only consequence <code>A(q)</code> is produced first, but the algorithm then cycles over derivations that do not produce any other answer. More generally, the top-down evaluation strategy may cycle over possible derivations, even when other ones exist.
The bottom-up strategy does not have the same drawback, as consequences that have already been derived have no effect when derived again. On the above program, the bottom-up strategy starts by adding <code>A(q)</code> to the set of consequences; in the second step, <code>B(X):-A(X)</code> is used to derive <code>B(q)</code>; in the third step, the only facts that can be derived from the current consequences are <code>A(q)</code> and <code>B(q)</code>, which are however already in the set of consequences. As a result, the algorithm stops.
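With the sketch given above, the cycle can be reproduced by adding the third clause to the rule base:
% The third, cycle-inducing clause of the example program.
rule(a(X), [b(X)]).
The query <code>bottom_up([], Consequences)</code> still answers <code>Consequences = [a(q), b(q)]</code>: the third step derives only facts that are already in the set, so the iteration stops.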
==History==