Most approaches to probabilistic logic programming are based on the ''distribution semantics'', which splits a program into a set of probabilistic facts and a logic program. The semantics defines a probability distribution on interpretations of the [[Herbrand structure|Herbrand universe]] of the program.
== Languages ==
Most approaches to probabilistic logic programming are based on the ''distribution semantics'',<ref name=":3">{{Citation |
== Semantics ==
Under the distribution semantics, a probabilistic logic program is interpreted as a set of independent probabilistic facts ([[Ground expression|ground]] [[Atomic formula|atomic formulas]] annotated with a probability) and a [[logic program]] which can use the probabilistic facts in the bodies of its clauses. The probability of any assignment of truth values to the groundings of the probabilistic facts is the product, over those groundings, of the annotated probability if the fact is made true and of one minus that probability if it is made false; this is equivalent to assuming the choices of probabilistic facts to be [[independent random variables]].<ref name=":3" /><ref>{{Cite journal |
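For instance, if the groundings of the probabilistic facts are <math>f_1, \ldots, f_n</math> with annotated probabilities <math>p_1, \ldots, p_n</math> (notation introduced here only for illustration), the assignment that makes exactly the facts in <math>T \subseteq \{f_1, \ldots, f_n\}</math> true has probability

: <math>\prod_{f_i \in T} p_i \, \prod_{f_i \notin T} (1 - p_i).</math>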
=== Stratified programs ===
=== Answer set programs ===
The [[stable model semantics]] underlying [[answer set programming]] gives meaning to unstratified programs by allocating potentially more than one answer set to every truth value assignment of the probabilistic facts. This raises the question of how to distribute the probability mass across the answer sets.<ref name=":1">{{Citation |last=Riguzzi |first=Fabrizio |title=Probabilistic Answer Set Programming |date=2023-05-22 |work=Foundations of Probabilistic Logic Programming |pages=165–173 |url=http://dx.doi.org/10.1201/9781003427421-6 |access-date=2024-02-03 |place=New York |publisher=River Publishers |doi=10.1201/9781003427421-6 |isbn=978-1-003-42742-1}}</ref><ref name=":2">{{Cite journal |
The probabilistic logic programming language P-Log resolves this by dividing the probability mass equally between the answer sets, following the [[principle of indifference]].<ref name=":1" /><ref>{{Cite journal |
Alternatively, probabilistic answer set programming under the credal semantics allocates a ''[[credal set]]'' to every query. Its lower probability bound is defined by only considering those truth value assignments of the probabilistic facts for which the query is true in every answer set of the resulting program (cautious reasoning); its upper probability bound is defined by considering those assignments for which the query is true in some answer set (brave reasoning).<ref name=":1" /><ref name=":2" />
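The difference between the two bounds can be seen on a small example. The following Python sketch (the fact, probability, query names and answer sets are hypothetical, and the answer sets are hard-coded in place of calling an answer set solver) computes the credal interval of a query by cautious and brave reasoning over the total choices of a single probabilistic fact:

<syntaxhighlight lang="python">
# Hypothetical probabilistic answer set program with one probabilistic fact,
# 0.4::rain.  When rain is true, the (unstratified) rules are assumed to yield
# two answer sets; the answer sets below are hard-coded stand-ins for the
# output of an answer set solver.
prob_facts = {"rain": 0.4}
answer_sets = {
    (("rain", True),): [{"rain", "wet"}, {"rain", "umbrella"}],
    (("rain", False),): [set()],
}

def choice_probability(choice):
    """Probability of a total choice (truth-value assignment of the facts)."""
    p = 1.0
    for fact, value in choice:
        p *= prob_facts[fact] if value else 1.0 - prob_facts[fact]
    return p

def credal_bounds(query):
    """Lower (cautious) and upper (brave) probability of an atomic query."""
    lower = upper = 0.0
    for choice, models in answer_sets.items():
        p = choice_probability(choice)
        if all(query in model for model in models):  # true in every answer set
            lower += p
        if any(query in model for model in models):  # true in some answer set
            upper += p
    return lower, upper

print(credal_bounds("wet"))   # (0.0, 0.4)
print(credal_bounds("rain"))  # (0.4, 0.4)
</syntaxhighlight>

Here the query ''wet'' holds in only one of the two answer sets obtained when ''rain'' is chosen true, so its lower probability is 0 while its upper probability is 0.4.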
== Inference ==
Under the distribution semantics, a probabilistic logic program defines a probability distribution over [[Interpretation (logic)|interpretations]] of its predicates on its [[Herbrand Universe|Herbrand universe]]. The probability of a [[Ground expression|ground]] query is then obtained from the [[Joint probability distribution|joint distribution]] of the query and the worlds: it is the sum of the probabilities of the worlds where the query is true.<ref name=":0" /><ref>{{Cite journal |last=Poole |first=David |date=1993 |title=Probabilistic Horn abduction and Bayesian networks |url=http://dx.doi.org/10.1016/0004-3702(93)90061-f |journal=Artificial Intelligence |volume=64 |issue=1 |pages=81–129 |doi=10.1016/0004-3702(93)90061-f |issn=0004-3702}}</ref><ref>{{Citation |last=Sato |first=Taisuke |title=A Statistical Learning Method for Logic Programs with Distribution Semantics |date=1995 |work=Proceedings of the 12th International Conference on Logic Programming |pages=715–730 |url=http://dx.doi.org/10.7551/mitpress/4298.003.0069 |access-date=2023-10-25 |publisher=The MIT Press|doi=10.7551/mitpress/4298.003.0069 |isbn=978-0-262-29143-9 }}</ref>
The problem of computing the probability of queries is called ''(marginal) inference''. Solving it by computing all the worlds and then identifying those that entail the query is impractical as the number of possible worlds is exponential in the number of ground probabilistic facts.<ref name=":0" /> In fact, already for acyclic programs and [[Atomic formula|atomic]] queries, computing the conditional probability of a query given a conjunction of atoms as evidence is [[♯P|#P]]-complete.<ref>{{Cite book |last=Riguzzi |first=Fabrizio |title=Foundations of probabilistic logic programming: Languages, semantics, inference and learning |publisher=[[River Publishers]] |year=2023 |isbn=978-87-7022-719-3 |edition=2nd |___location=Gistrup, Denmark |pages=180}}</ref>
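As an illustration of this naive enumeration (not of the optimized algorithms used by practical systems), the following Python sketch computes the marginal probability of a query for a hypothetical program with probabilistic facts ''burglary'' and ''earthquake'' and a rule deriving ''alarm'' from either of them; all names and numbers are illustrative.

<syntaxhighlight lang="python">
from itertools import product

# Hypothetical program: probabilistic facts 0.1::burglary and 0.2::earthquake,
# with the rules "alarm :- burglary." and "alarm :- earthquake." encoded
# directly as a Python predicate.
prob_facts = {"burglary": 0.1, "earthquake": 0.2}

def entails_alarm(world):
    return world["burglary"] or world["earthquake"]

def marginal(query_holds):
    """Exact marginal inference by enumerating all 2^n worlds."""
    total = 0.0
    for values in product([True, False], repeat=len(prob_facts)):
        world = dict(zip(prob_facts, values))
        # Probability of this world under the distribution semantics.
        p = 1.0
        for fact, prob in prob_facts.items():
            p *= prob if world[fact] else 1.0 - prob
        if query_holds(world):
            total += p
    return total

print(marginal(entails_alarm))  # 1 - 0.9 * 0.8 = 0.28
</syntaxhighlight>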
=== Approximate inference ===
Since the cost of inference may be very high, approximate algorithms have been developed. They either compute subsets of possibly incomplete explanations or use random sampling. In the first approach, a subset of the explanations provides a lower bound on the probability of the query, while the set of partially expanded explanations provides an upper bound. In the second approach, the truth of the query is repeatedly checked in an ordinary [[logic program]] sampled from the probabilistic program, and the probability of the query is estimated by the fraction of successes.<ref name=":0" /><ref>{{Cite journal |
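A minimal sketch of the sampling approach, reusing the hypothetical alarm program from the previous example (the sample size and seed are arbitrary):

<syntaxhighlight lang="python">
import random

# Monte Carlo estimate for the same hypothetical alarm program as above.
prob_facts = {"burglary": 0.1, "earthquake": 0.2}

def entails_alarm(world):
    return world["burglary"] or world["earthquake"]

def monte_carlo(query_holds, n_samples=100_000, seed=0):
    """Estimate the query probability as the fraction of successful samples."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_samples):
        # Sample an ordinary logic program by fixing every probabilistic fact.
        world = {fact: rng.random() < p for fact, p in prob_facts.items()}
        if query_holds(world):
            successes += 1
    return successes / n_samples

print(monte_carlo(entails_alarm))  # close to the exact value 0.28
</syntaxhighlight>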
== Learning ==
== References ==
{{reflist}}
{{dual|date=3 February 2024|source={{Cite journal |
{{Programming paradigms navbox}}