Declarative programming is an approach to computer programming in which the programmer states a set of conditions that describe the solution space, but leaves the specific steps needed to arrive at a solution to the language implementation. Declarative programming thus takes a different approach from the traditional imperative programming found in Fortran, C++, or Java, which requires the programmer to provide a list of instructions to execute in a specified order.
In other words, declarative programming provides the what but leaves the how to the implementation. Advantages of this approach are that
- it delegates the complex problem solving to the computer,
- it helps avoid reinventing the wheel,
- it allows for re-use and re-interpretation in different contexts (e.g. parallel processing), and
- it centralizes and condenses the problem definition, making programs easier to understand.
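As a small illustration of the what-versus-how distinction described above, the following Haskell sketch computes the sum of the squares of the even numbers in a list in two styles. The explicit recursion spells out how the traversal proceeds step by step, while the second version merely states what the result is; the function names are chosen here for illustration only.

    -- Step-by-step formulation: an explicit accumulator dictates
    -- how the traversal proceeds.
    sumSquaresOfEvensLoop :: [Int] -> Int
    sumSquaresOfEvensLoop = go 0
      where
        go acc []     = acc
        go acc (x:xs)
          | even x    = go (acc + x * x) xs
          | otherwise = go acc xs

    -- Declarative formulation: states what the result is, leaving
    -- the evaluation order to the language implementation.
    sumSquaresOfEvens :: [Int] -> Int
    sumSquaresOfEvens xs = sum [ x * x | x <- xs, even x ]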
Declarative programming includes both functional programming and logic programming.
Declarative programming has also been known as value-oriented programming, but this term has lately fallen out of use.
Declarative languages describe relationships between variables in terms of functions, inference rules, or term-rewriting rules. The language executor (an interpreter or compiler) applies a fixed algorithm to these relations to produce a result.
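For instance, a Haskell function defined by pattern-matching equations reads as a small set of rewrite rules relating inputs to outputs; the evaluator repeatedly rewrites an expression with these equations until only a value remains. The name below is illustrative.

    -- Two equations relating a list to its length; the evaluator
    -- rewrites a call with whichever equation matches.
    len :: [a] -> Int
    len []     = 0
    len (_:xs) = 1 + len xs

    -- len [7,8,9]  rewrites to  1 + len [8,9]
    --             then to       1 + (1 + len [9])
    --             then to       1 + (1 + (1 + len []))  =  3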
Declarative programming languages are extensively used for artificial-intelligence and constraint-satisfaction problems, as well as in more mundane areas such as databases and configuration management.
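A simple taste of the constraint-satisfaction style, sketched in Haskell (the bound and names are chosen for illustration): the comprehension states the constraints a Pythagorean triple must satisfy, and enumerating the solutions is left to the language implementation.

    -- All Pythagorean triples (a, b, c) with a <= b <= c <= n,
    -- stated as constraints rather than as a search procedure.
    triples :: Int -> [(Int, Int, Int)]
    triples n =
      [ (a, b, c)
      | c <- [1 .. n]
      , b <- [1 .. c]
      , a <- [1 .. b]
      , a * a + b * b == c * c
      ]

    -- triples 20 yields (3,4,5), (6,8,10), (5,12,13),
    -- (9,12,15), (8,15,17), and (12,16,20).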
Example languages
Representative examples of declarative programming languages include Prolog, Lisp, and Haskell. Other examples include Miranda and SQL.
Category:Declarative programming languages provides a more extensive list.
See also
- Imperative programming (contrast)
- Programming paradigms