Probabilistic logic programming
 
== Languages ==
The distribution semantics underlies many languages such as Probabilistic Horn Abduction, PRISM, Independent Choice Logic, probabilistic [[Datalog]], Logic Programs with Annotated Disjunctions (Vennekens et al., 2004), [[ProbLog]], P-log, and CP-logic. Although the number of languages is large, they all share the same underlying approach, and transformations of linear complexity exist that translate one language into another.<ref name=":0">{{Cite journal |last=Riguzzi |first=Fabrizio |last2=Bellodi |first2=Elena |last3=Zese |first3=Riccardo |date=2014 |title=A History of Probabilistic Inductive Logic Programming |url=https://www.frontiersin.org/articles/10.3389/frobt.2014.00006 |journal=Frontiers in Robotics and AI |volume=1 |doi=10.3389/frobt.2014.00006 |issn=2296-9144}}</ref>
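As an illustrative sketch (the coin predicates are invented for this example), the following fair-coin clause is written first in the syntax of Logic Programs with Annotated Disjunctions and then in ProbLog syntax; the translation between the two is a direct syntactic rewriting:

<syntaxhighlight lang="prolog">
% LPAD syntax: a probability annotation follows each head atom
heads(C):0.5 ; tails(C):0.5 :- toss(C).

% The same annotated disjunction in ProbLog syntax
0.5::heads(C) ; 0.5::tails(C) :- toss(C).
</syntaxhighlight>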
 
== Recursion and negation ==
 
=== Stratified programs ===
Under the distribution semantics, a probabilistic logic program consists of a set of probabilistic facts and a logic program that can use the probabilistic facts in the bodies of its clauses. If, for every choice of truth values for the probabilistic facts, the resulting logic program is [[Stratified logic program|stratified]], it has a unique minimal [[Herbrand model]], which can be seen as the unique interpretation associated with that choice of truth values.
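A minimal sketch in ProbLog syntax (the predicates and probabilities are invented for illustration): the two probabilistic facts below admit four truth value assignments, and each assignment turns the program into an ordinary logic program with a unique minimal Herbrand model.

<syntaxhighlight lang="prolog">
% Two independent probabilistic facts
0.7::burglary.
0.2::earthquake.

% Non-probabilistic clauses using the facts in their bodies
alarm :- burglary.
alarm :- earthquake.
</syntaxhighlight>

For instance, the assignment making <code>burglary</code> true and <code>earthquake</code> false has probability 0.7 × 0.8 = 0.56, and its minimal Herbrand model contains <code>alarm</code>.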
 
Important subclasses of stratified programs are positive programs, which do not use negation but may be recursive, and acyclic programs, which may use negation but have no recursive dependencies.
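The standard probabilistic graph-reachability program is an example of a positive program: it is recursive but negation-free (edge probabilities invented for illustration):

<syntaxhighlight lang="prolog">
% Probabilistic edges of a small directed graph
0.6::edge(a, b).
0.4::edge(b, c).
0.3::edge(a, c).

% Recursive, negation-free definition of reachability
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).
</syntaxhighlight>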
 
=== Answer set programs ===
The [[stable model semantics]] underlying [[answer set programming]] gives meaning to unstratified programs, but a single truth value assignment of the probabilistic facts may then yield a program with more than one answer set. This raises the question of how to distribute the probability mass across the answer sets.<ref name=":1">{{Citation |last=Riguzzi |first=Fabrizio |title=Probabilistic Answer Set Programming |date=2023-05-22 |work=Foundations of Probabilistic Logic Programming |pages=165–173 |url=http://dx.doi.org/10.1201/9781003427421-6 |access-date=2024-02-03 |place=New York |publisher=River Publishers |isbn=978-1-003-42742-1}}</ref><ref name=":2">{{Cite journal |last=Cozman |first=Fabio Gagliardi |last2=Mauá |first2=Denis Deratani |date=2020 |title=The joy of Probabilistic Answer Set Programming: Semantics, complexity, expressivity, inference |url=https://linkinghub.elsevier.com/retrieve/pii/S0888613X20302012 |journal=International Journal of Approximate Reasoning |language=en |volume=125 |pages=218–239 |doi=10.1016/j.ijar.2020.07.004}}</ref>
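For example (a sketch mixing ProbLog-style probabilistic facts with answer set rules, with invented predicates), the following program is unstratified because of the cycle through negation. When the probabilistic fact <code>rain</code> is true, the resulting program has two answer sets, {rain, wet} and {rain, dry}, so the probability mass 0.5 of that assignment must somehow be distributed between them:

<syntaxhighlight lang="prolog">
0.5::rain.

% Cycle through negation: unstratified
wet :- rain, not dry.
dry :- rain, not wet.
</syntaxhighlight>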
 
The probabilistic logic programming language P-Log resolves this by dividing the probability mass equally between the answer sets, following the [[principle of indifference]].<ref name=":1" /><ref>{{Cite journal |last=Baral |first=Chitta |last2=Gelfond |first2=Michael |last3=Rushton |first3=Nelson |date=2009 |title=Probabilistic reasoning with answer sets |url=https://www.cambridge.org/core/product/identifier/S1471068408003645/type/journal_article |journal=Theory and Practice of Logic Programming |language=en |volume=9 |issue=1 |pages=57–144 |doi=10.1017/S1471068408003645 |issn=1471-0684}}</ref>
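Continuing the sketch above, P-Log splits the probability 0.5 of the assignment making <code>rain</code> true equally between its two answer sets, while the assignment making <code>rain</code> false has a single (empty) answer set that keeps its full mass:

<math>P(\{\text{rain}, \text{wet}\}) = P(\{\text{rain}, \text{dry}\}) = \tfrac{0.5}{2} = 0.25, \qquad P(\emptyset) = 0.5,</math>

so that, for example, <math>P(\text{wet}) = 0.25</math>.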
 
Alternatively, probabilistic answer set programming under the credal semantics allocates a ''[[credal set]]'' to every query. Its lower probability bound is defined by only considering those truth value assignments of the probabilistic facts for which the query is true in every answer set of the resulting program (cautious reasoning); its upper probability bound is defined by considering those assignments for which the query is true in some answer set (brave reasoning).<ref name=":1" /><ref name=":2" />
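In the sketch above, <code>wet</code> is true in some but not all answer sets when <code>rain</code> is true, and in no answer set when <code>rain</code> is false, so the credal semantics assigns the query the interval

<math>\underline{P}(\text{wet}) = 0, \qquad \overline{P}(\text{wet}) = 0.5.</math>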
 
== Inference ==
== Learning ==
[[Probabilistic inductive logic programming]] aims to learn probabilistic logic programs from data. This includes parameter learning, which estimates the probability annotations of a program whose clauses are given by the user, and structure learning, in which the clauses themselves are induced by the probabilistic inductive logic programming system.<ref name=":0" />
 
Common approaches to parameter learning are based on [[Expectation–maximization algorithm|expectation–maximization]] or [[gradient descent]], while structure learning can be performed by searching the space of possible clauses under a variety of heuristics.<ref name=":0" />
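As a sketch of the input to parameter learning (in ProbLog syntax, where the <code>t(·)</code> annotation marks a probability as tunable; the predicates and initial values are invented for illustration), a learner such as an expectation–maximization procedure would re-estimate the annotated probabilities from a set of observed interpretations:

<syntaxhighlight lang="prolog">
% Learnable probabilistic facts; t(0.5) is an initial guess
% to be re-estimated from data
t(0.5)::burglary.
t(0.5)::earthquake.

% Fixed program structure supplied by the user
alarm :- burglary.
alarm :- earthquake.
</syntaxhighlight>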
 
== Applications ==
 
== See also ==
* [[ProbLog]]
* [[Probabilistic database]]
* [[Statistical relational learning]]
* [[Inductive logic programming]]
 
== References ==