{{Programming paradigms}}
'''Constraint programming (CP)'''<ref name=":0">{{Cite book|url=https://books.google.com/books?hl=el&lr=&id=Kjap9ZWcKOoC&oi=fnd&pg=PP1&dq=handbook+of+constraint+programming&ots=QDCfyQ_UNi&sig=ecUEH3Z_cjkDCI5LsDlZ00zfNWs|title=Handbook of Constraint Programming|last=Rossi|first=Francesca|last2=Beek|first2=Peter van|last3=Walsh|first3=Toby|date=2006-08-18|publisher=Elsevier|isbn=9780080463803|language=en}}</ref> is a paradigm for solving combinatorial search problems that draws on a wide range of techniques from artificial intelligence, [[computer science]], and [[operations research]]. Constraint programming is "programming" partly in the sense of [[Mathematical optimization|mathematical programming]]: the user declaratively states the constraints on the feasible solutions for a set of decision variables. However, it is also "programming" in the sense of [[computer programming]]: the user must additionally program a search strategy. This typically draws upon standard methods like chronological [[backtracking]] and [[constraint propagation]], but may use customized code like a problem-specific branching [[Heuristic (computer science)|heuristic]].
Constraint programming brings together a wide array of disciplines including [[Algorithm|algorithms]], [[artificial intelligence]], [[combinatorial optimization]], [[Constraint satisfaction problem|constraint satisfaction problems]], [[computational logic]], [[operations research]], and [[Programming language|programming languages]].
Every local consistency condition can be enforced by a transformation that changes the problem without changing its solutions. Such a transformation is called [[Constraint propagation|'''constraint propagation''']]<ref>{{Citation|last=Bessiere|first=Christian|title=Constraint Propagation|date=2006|url=http://dx.doi.org/10.1016/s1574-6526(06)80007-6|work=Handbook of Constraint Programming|pages=29–83|publisher=Elsevier|isbn=9780444527264|access-date=2019-10-04}}</ref>. Constraint propagation works by reducing domains of variables, strengthening constraints, or creating new ones. This leads to a reduction of the search space, making the problem easier to solve by some algorithms. Constraint propagation can also be used as an unsatisfiability checker, incomplete in general but complete in some particular cases.
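As an illustration, the following is a minimal Python sketch of domain-reducing propagation in the style of the AC-3 arc-consistency algorithm; the problem (a single <code>x &lt; y</code> constraint) and all names are hypothetical, chosen only to show how propagation prunes domains without losing solutions.

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency: remove values with no supporting value.

    domains: dict var -> set of candidate values
    constraints: dict (x, y) -> predicate on (value_of_x, value_of_y)
    Returns False if a domain is wiped out (problem is unsatisfiable)."""
    queue = deque(constraints)               # all arcs (x, y)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Prune values of x that no value of y can support.
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False                 # empty domain: no solution
            # x's domain shrank, so arcs pointing at x must be re-checked.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# Example: x < y with x, y in 1..3 prunes x=3 and y=1.
doms = {'x': {1, 2, 3}, 'y': {1, 2, 3}}
cons = {('x', 'y'): lambda a, b: a < b,
        ('y', 'x'): lambda b, a: b > a}
ac3(doms, cons)
print(doms)  # {'x': {1, 2}, 'y': {2, 3}}
```

Note that propagation alone does not assign values here; it only shrinks the search space that a subsequent search must explore.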
There are three main algorithmic techniques for solving constraint satisfaction problems: backtracking search, local search, and dynamic programming<ref name=":0" />.
=== Backtracking search ===
{{Main|Backtracking}}
'''Backtracking search''' is a general [[algorithm]] for finding all (or some) solutions to some [[Computational problem|computational problems]], notably [[Constraint satisfaction problem|constraint satisfaction problems]], that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
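The scheme above can be sketched in Python as follows; the 4-queens instance and all helper names are illustrative, not part of any particular solver.

```python
def backtrack(assignment, variables, domains, consistent):
    """Depth-first search that extends a partial assignment one variable
    at a time, abandoning ("backtracking" from) inconsistent candidates.

    consistent(assignment) checks the constraints over assigned variables."""
    if len(assignment) == len(variables):
        return dict(assignment)              # every variable is assigned
    var = variables[len(assignment)]         # next unassigned variable
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment) and \
           (result := backtrack(assignment, variables, domains, consistent)):
            return result
        del assignment[var]                  # undo and try the next value
    return None

# Example: 4-queens, one queen per column; rows must not clash.
def no_clash(a):
    cols = list(a)
    return all(a[p] != a[q] and abs(a[p] - a[q]) != abs(p - q)
               for i, p in enumerate(cols) for q in cols[i + 1:])

sol = backtrack({}, [0, 1, 2, 3], {c: range(4) for c in range(4)}, no_clash)
print(sol)  # a valid placement, e.g. {0: 1, 1: 3, 2: 0, 3: 2}
```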
=== Local search ===
{{Main|Local search (constraint satisfaction)}}
'''Local search''' is an incomplete method for finding a solution to a [[Constraint satisfaction problem|problem]]. It is based on iteratively improving an assignment of the variables until all constraints are satisfied. In particular, local search algorithms typically modify the value of one variable in the assignment at each step. The new assignment is close to the previous one in the space of assignments, hence the name ''local search''.
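One well-known instance of this idea is the min-conflicts heuristic, sketched below in Python for an 8-queens instance; the function names, step limit, and seed are illustrative choices, and like any local search the sketch is incomplete (it may give up without proving unsatisfiability).

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=10_000, seed=0):
    """Local search: start from a full (possibly inconsistent) assignment
    and repeatedly reassign one conflicted variable to a value that
    minimizes the number of violated constraints.

    conflicts(var, value, assignment) counts violations for that choice."""
    rng = random.Random(seed)
    assignment = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment            # all constraints satisfied
        var = rng.choice(conflicted)     # change one variable: "local" move
        best = min(conflicts(var, val, assignment) for val in domains[var])
        assignment[var] = rng.choice(    # random tie-breaking avoids cycles
            [val for val in domains[var]
             if conflicts(var, val, assignment) == best])
    return None                          # incomplete: give up after max_steps

# Example: 8-queens, one queen per column; count row/diagonal clashes.
def queen_conflicts(col, row, a):
    return sum(1 for c, r in a.items() if c != col
               and (r == row or abs(r - row) == abs(c - col)))

sol = min_conflicts(list(range(8)), {c: list(range(8)) for c in range(8)},
                    queen_conflicts)
```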
=== Dynamic programming ===
{{Main|Dynamic programming}}
'''Dynamic programming''' is both a [[mathematical optimization]] method and a computer programming method. It refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a [[Recursion|recursive]] manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have [[optimal substructure]].
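Optimal substructure can be made concrete with a small Python sketch of the classic 0/1 knapsack problem; the instance data below is hypothetical and chosen only to illustrate the recursive decomposition.

```python
def knapsack(weights, values, capacity):
    """Bottom-up dynamic programming: best[c] is the maximum value
    achievable with capacity c using the items seen so far.

    Optimal substructure: the best packing for capacity c either skips
    item i, or takes it and packs capacity c - weights[i] optimally."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Three items of weight 3, 4, 2 and value 5, 6, 3; capacity 6.
print(knapsack([3, 4, 2], [5, 6, 3], 6))  # 9 (take the last two items)
```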
== Example ==
The syntax for expressing constraints over finite domains depends on the host language. The following is a [[Prolog]] program that solves the classical [[alphametic]] puzzle [[Verbal arithmetic|SEND+MORE=MONEY]] in constraint logic programming:
<source lang="prolog">
send_more_money(Digits) :-
   Digits = [S, E, N, D, M, O, R, Y],   % Create variables
   Digits ins 0..9,                     % Associate domains to variables
   S #\= 0,                             % Constraint: S must be different from 0
   M #\= 0,
   all_different(Digits),               % All the elements must take different values
                1000*S + 100*E + 10*N + D     % Other constraints
   +            1000*M + 100*O + 10*R + E
   #= 10000*M + 1000*O + 100*N + 10*E + Y,
   label(Digits).                       % Start the search
</source>