Structure learning was pioneered by [[Daphne Koller]] and Avi Pfeffer in 1997,<ref>{{Cite conference |last1=Koller |first1=Daphne |last2=Pfeffer |first2=Avi |date=August 1997 |title=Learning probabilities for noisy first-order rules |url=http://www.robotics.stanford.edu/~koller/Papers/Koller+Pfeffer:IJCAI97.pdf |conference=[[IJCAI]]}}</ref> where the authors learn the structure of [[First-order logic|first-order]] rules with associated probabilistic uncertainty parameters. Their approach involves generating the underlying [[graphical model]] in a preliminary step and then applying expectation-maximisation.<ref name="pilp" />
In 2008, [[Luc De Raedt|De Raedt]] et al. presented an algorithm for performing [[theory compression]] on [[ProbLog]] programs, where theory compression means removing as many clauses as possible from the theory in order to maximize the probability of a given set of positive and negative examples.
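The idea can be illustrated with a minimal sketch (not De Raedt et al.'s actual algorithm): a toy "theory" of probabilistic clauses is greedily compressed by dropping any clause whose removal does not lower the likelihood of the examples. All clause and example names here are hypothetical, and the noisy-or scoring is a simplification of ProbLog's semantics.

```python
# Toy stand-in for a ProbLog theory: each clause has a probability
# and a set of examples it "covers" (all names are hypothetical).
clauses = {
    "c1": (0.8, {"e1", "e2"}),
    "c2": (0.3, {"e2", "e4"}),  # also covers a negative example
    "c3": (0.6, {"e3"}),
}
positives = {"e1", "e2", "e3"}
negatives = {"e4"}

def prob(example, theory):
    """Noisy-or: the example succeeds if any covering clause fires."""
    fail = 1.0
    for p, covered in theory.values():
        if example in covered:
            fail *= (1.0 - p)
    return 1.0 - fail

def score(theory):
    """Likelihood-style score: reward positives, penalize negatives."""
    s = 1.0
    for e in positives:
        s *= prob(e, theory)
    for e in negatives:
        s *= (1.0 - prob(e, theory))
    return s

# Greedy theory compression: repeatedly drop any clause whose
# removal does not decrease the score.
theory = dict(clauses)
changed = True
while changed:
    changed = False
    for name in list(theory):
        trial = {k: v for k, v in theory.items() if k != name}
        if score(trial) >= score(theory):
            del theory[name]
            changed = True

print(sorted(theory))  # c2 is dropped: it mainly explains a negative
```

In this sketch, removing `c2` raises the score because it covers the negative example `e4`, so the compressed theory keeps only `c1` and `c3`.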
In the same year, Wannes Meert et al. introduced a method for learning the parameters and structure of [[Ground term|ground]] probabilistic logic programs by considering the [[Bayesian network]]s equivalent to them and applying techniques for learning Bayesian networks.<ref>{{Citation |last1=Blockeel |first1=Hendrik |title=Towards Learning Non-recursive LPADs by Transforming Them into Bayesian Networks |url=http://dx.doi.org/10.1007/978-3-540-73847-3_16 |work=Inductive Logic Programming |pages=94–108 |access-date=2023-12-09 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |isbn=978-3-540-73846-6 |last2=Meert |first2=Wannes|series=Lecture Notes in Computer Science |date=2007 |volume=4455 |doi=10.1007/978-3-540-73847-3_16 }}</ref><ref name="pilp" />
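The equivalence this method relies on can be sketched for a single ground probabilistic clause (a hypothetical example, not taken from the cited paper): a probabilistic fact becomes a root node of the Bayesian network, and a clause depending on it becomes a child node with a deterministic conditional probability table.

```python
# Hypothetical ground probabilistic logic program:
#   0.7::burglary.
#   alarm :- burglary.
p_burglary = 0.7

# Equivalent Bayesian network: 'burglary' is a root node with
# prior 0.7; 'alarm' is a deterministic child node whose CPT
# encodes the clause: P(alarm | burglary) = 1, P(alarm | not burglary) = 0.
cpt_alarm = {True: 1.0, False: 0.0}

# Marginal P(alarm), obtained by summing over the parent's states,
# matches the probability the program assigns to the query 'alarm'.
p_alarm = sum(
    (p_burglary if b else 1 - p_burglary) * cpt_alarm[b]
    for b in (True, False)
)
print(p_alarm)
```

Because inference in the network reproduces the program's query probabilities, standard Bayesian-network structure- and parameter-learning algorithms can be applied and the result translated back into a ground probabilistic logic program.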