=== Learning from entailment ===
{{As of|2022}}, learning from entailment is by far the most popular setting for inductive logic programming.<ref name="setting" /> In this setting, the ''positive'' and ''negative'' examples are given as finite sets <math display="inline">E^+</math> and <math display="inline">E^{-}</math> of positive and negated [[Ground expression|ground]] [[Literal (mathematical logic)|literals]], respectively. A ''correct hypothesis'' ''{{mvar|H}}'' is a set of clauses satisfying the following requirements, where the turnstile symbol <math>\models</math> stands for [[logical entailment]]:<ref name="setting" /><ref>{{cite book |last1=Džeroski |first1=Sašo |title=Advances in Knowledge Discovery and Data Mining |publisher=MIT Press |year=1996 |editor1-last=Fayyad |editor1-first=U.M. |pages=117–152 See §5.2.4 |chapter=Inductive Logic Programming and Knowledge Discovery in Databases |access-date=2021-09-27 |editor2-last=Piatetsky-Shapiro |editor2-first=G. |editor3-last=Smith |editor3-first=P. |editor4-last=Uthurusamy |editor4-first=R. |chapter-url=http://kt.ijs.si/SasoDzeroski/pdfs/1996/Chapters/1996_InductiveLogicProgramming.pdf |archive-url=https://web.archive.org/web/20210927141157/http://kt.ijs.si/SasoDzeroski/pdfs/1996/Chapters/1996_InductiveLogicProgramming.pdf |archive-date=2021-09-27 |url-status=dead}}</ref><ref>{{Cite journal |last=De Raedt |first=Luc |date=1997 |title=Logical settings for concept-learning |url=https://linkinghub.elsevier.com/retrieve/pii/S0004370297000416 |journal=Artificial Intelligence |language=en |volume=95 |issue=1 |pages=187–201 |doi=10.1016/S0004-3702(97)00041-6}}</ref>
<math display="block">\begin{array}{llll}
\text{Completeness:}
& B \cup H
& \models
& E^+
\\
\text{Consistency:}
& B \cup H \cup E^-
& \not\models
& \textit{false}
\end{array}</math>
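For illustration, the background knowledge, examples, and a correct hypothesis might be given as the following [[Prolog]]-style clauses (a minimal sketch; the predicate and constant names are illustrative only and are not drawn from the cited references):
<syntaxhighlight lang="prolog">
% Background knowledge B: ground facts about a small family.
parent(ann, mary).
parent(ann, tom).
parent(tom, eve).
female(ann).
female(mary).
female(eve).

% Positive examples E+ (must be entailed by B together with H):
%   daughter(mary, ann).   daughter(eve, tom).
% Negative examples E- (must not be entailed):
%   daughter(tom, ann).

% A correct hypothesis H: together with B it entails both positive
% examples (completeness), while daughter(tom, ann) remains
% underivable, so B, H, and the negated negative example are
% jointly satisfiable (consistency).
daughter(X, Y) :- female(X), parent(Y, X).
</syntaxhighlight>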
In Muggleton's setting of concept learning,<ref name="setting2">{{cite journal |last1=Muggleton |first1=Stephen |year=1999 |title=Inductive Logic Programming: Issues, Results and the Challenge of Learning Language in Logic |journal=Artificial Intelligence |volume=114 |issue=1–2 |pages=283–296 |doi=10.1016/s0004-3702(99)00067-3}}; here: Sect.2.1</ref> "completeness" is referred to as "sufficiency", and "consistency" as "strong consistency". Two further conditions are added: "''Necessity''", which postulates that ''{{mvar|B}}'' does not entail <math display="inline">E^+</math>, imposes no restriction on ''{{mvar|H}}'' itself, but forbids the generation of a hypothesis as long as the positive facts are explainable without it. "''Weak consistency''", which states that no contradiction can be derived from <math display="inline">B\land H</math>, forbids the generation of any hypothesis ''{{mvar|H}}'' that contradicts the background knowledge ''{{mvar|B}}''. Weak consistency is implied by strong consistency; if no negative examples are given, the two requirements coincide. Weak consistency is particularly important in the case of noisy data, where completeness and strong consistency cannot be guaranteed.<ref name="setting2" />
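Schematically, the four conditions of Muggleton's setting can be summarised as follows (a restatement of the conditions described above in the notation of this section, not taken verbatim from the source):
<math display="block">\begin{array}{llll}
\text{Necessity:}
& B
& \not\models
& E^+
\\
\text{Sufficiency:}
& B \cup H
& \models
& E^+
\\
\text{Weak consistency:}
& B \cup H
& \not\models
& \textit{false}
\\
\text{Strong consistency:}
& B \cup H \cup E^-
& \not\models
& \textit{false}
\end{array}</math>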