These approaches divided synthesis into two phases: first, input-output examples are transformed into non-recursive programs (traces) using a small set of basic operators; second, regularities in the traces are identified and used to fold them into a recursive program. The main results up to the mid-1980s are surveyed by Smith.<ref>{{cite journal|first1=D.R.|last1=Smith|title=The synthesis of LISP programs from examples: a survey|editor1-first=A.W.|editor1-last=Biermann|editor2-first=G.|editor2-last=Guiho|publisher=Macmillan|journal=Automatic program construction techniques|pages=307–324|year=1984}}</ref> Due to limited progress with respect to the range of programs that could be synthesized, research activity decreased significantly in the next decade.
The advent of logic programming brought a new élan but also a new direction in the early 1980s, especially due to the MIS system of Shapiro,<ref>{{cite book|first1=E.Y.|last1=Shapiro|title=Algorithmic program debugging|publisher=The MIT Press|year=1983}}</ref> which eventually spawned the new field of inductive logic programming (ILP).<ref>{{Cite journal | last1 = Muggleton | first1 = S. | title = Inductive logic programming | doi = 10.1007/BF03037089 | journal = New Generation Computing | volume = 8 | issue = 4 | pages = 295–318 | year = 1991 | pmid = | pmc = }}</ref> The early work of Plotkin,<ref>{{cite journal|first1=Gordon D.|last1=Plotkin|title=A Note on Inductive Generalization|editor1-first=B.|editor1-last=Meltzer|editor2-first=D.|editor2-last=Michie|publisher=Edinburgh University Press|journal=Machine Intelligence|volume=5|pages=153–163|year=1970}}</ref><ref>{{cite journal|first1=Gordon D.|last1=Plotkin|title=A Further Note on Inductive Generalization|editor1-first=B.|editor1-last=Meltzer|editor2-first=D.|editor2-last=Michie|publisher=Edinburgh University Press|journal=Machine Intelligence|volume=6|pages=101–124|year=1971}}</ref> and his "''relative least general generalization (rlgg)''", had an enormous impact on inductive logic programming. Most ILP work addresses a wider class of problems, as the focus is not only on recursive logic programs but on machine learning of symbolic hypotheses from logical representations.
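The generalization operator at the core of the rlgg can be illustrated on plain first-order terms. The following is a minimal sketch in Python, with a tuple encoding of terms and variable names (`_V0`, `_V1`, …) invented for this illustration rather than taken from Plotkin's notation: the least general generalization of two terms keeps matching structure and replaces each distinct mismatching pair of subterms by one shared variable.

```python
# Illustrative sketch of Plotkin's least general generalization (lgg) on
# first-order terms. Terms are encoded as nested tuples ('f', arg1, ...)
# for compound terms and plain strings for constants; generated variables
# are named '_V0', '_V1', ... (an encoding chosen for this sketch).

def lgg(s, t, table=None, counter=None):
    """Return the least general generalization of terms s and t."""
    if table is None:
        table, counter = {}, [0]
    # Identical subterms generalize to themselves.
    if s == t:
        return s
    # Compound terms with the same functor and arity: recurse argument-wise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, table, counter)
                               for a, b in zip(s[1:], t[1:]))
    # Otherwise replace the differing pair by a variable, reusing the same
    # variable for repeated occurrences of the same pair of subterms.
    if (s, t) not in table:
        table[(s, t)] = f'_V{counter[0]}'
        counter[0] += 1
    return table[(s, t)]

# lgg(f(a, a), f(b, b)) yields f(X, X): both mismatches are the same pair,
# so they map to the same variable.
print(lgg(('f', 'a', 'a'), ('f', 'b', 'b')))
```

The ''relative'' lgg extends this idea to clauses relative to background knowledge; the term-level operator shown here is only the core building block.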
However, there were some encouraging results on learning recursive Prolog programs such as quicksort from examples together with suitable background knowledge, for example with GOLEM.<ref>{{cite journal|first1=S.H.|last1=Muggleton|first2=C.|last2=Feng|title=Efficient induction of logic programs|publisher=the Japanese Society for AI|journal=Proceedings of the Workshop on Algorithmic Learning Theory|volume=6|pages=368–381|year=1990}}</ref> But again, after initial successes, the community was disappointed by the limited progress on the induction of recursive programs,<ref>{{cite journal
|first1=J.R.|last1=Quinlan|first2=R.M.|last2=Cameron-Jones
|title=Avoiding Pitfalls When Learning Recursive Theories
|publisher=
|journal=IJCAI
|pages=1050–1057
|year=1993}}
</ref><ref>
{{cite journal
|first1=J.R.|last1=Quinlan|first2=R.M.|last2=Cameron-Jones
|title=Induction of logic programs: FOIL and related systems
|journal=New Generation Computing
|publisher=Springer
|volume=13|issue=3–4|pages=287–312
|year=1995}}
</ref><ref>
{{cite journal
|first1=P.|last1=Flener|first2=S.|last2=Yilmaz
|title=Inductive synthesis of recursive logic programs: Achievements and prospects
|journal=The Journal of Logic Programming
|volume=41 | issue = 2
|pages=141–195
|year=1999|doi=10.1016/s0743-1066(99)00028-x}}
</ref> with ILP focusing less and less on recursive programs and leaning more and more towards a machine learning setting with applications in [[relational data mining]] and knowledge discovery.<ref>{{citation|first1=Sašo|last1=Džeroski|contribution=Inductive Logic Programming and Knowledge Discovery in Databases|pages=117–152|editor1-first=U.M.|editor1-last=Fayyad|editor2-first=G.|editor2-last=Piatetsky-Shapiro|editor3-first=P.|editor3-last=Smith|editor4-first=R.|editor4-last=Uthurusamy|title=Advances in Knowledge Discovery and Data Mining|publisher=MIT Press|year=1996}}</ref>
In parallel to work in ILP, Koza<ref>
{{cite book
|first1=J.R.|last1=Koza
|title=Genetic Programming: vol. 1, On the programming of computers by means of natural selection
|publisher=MIT Press
|year=1992}}
</ref> proposed genetic programming in the early 1990s as a generate-and-test based approach to learning programs. The idea of genetic programming was further developed into the inductive programming system ADATE<ref>
{{cite journal
|first1=J.R.|last1=Olsson
|title=Inductive functional programming using incremental program transformation
|publisher=Elsevier
|journal=Artificial Intelligence
|volume=74 | issue = 1|pages=55–83
|year=1995|doi=10.1016/0004-3702(94)00042-y}}
</ref> and the systematic-search-based system MagicHaskeller.<ref>
{{cite journal
|first1=Susumu|last1=Katayama
|title=Efficient exhaustive generation of functional programs using Monte-Carlo search with iterative deepening
|journal=PRICAI 2008: Trends in Artificial Intelligence
|pages=199–210
|year=2008}}
</ref> In both approaches, functional programs are learned from sets of positive examples together with an output evaluation (fitness) function that specifies the desired input/output behavior of the program to be learned.
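The generate-and-test principle behind such systems can be sketched in a few lines: enumerate candidate programs in order of increasing size and return the first one consistent with all given examples. The following toy sketch in Python uses a deliberately tiny arithmetic hypothesis language invented for this illustration; it is far simpler than the evolutionary search of ADATE or the exhaustive search of MagicHaskeller, and the function names are not from any of those systems.

```python
# Toy generate-and-test synthesis from input/output examples.
# Candidate programs are left-nested arithmetic expressions over one
# input x, the constants 1 and 2, and the operators + and *.

def enumerate_programs():
    """Yield (name, function) pairs for candidate programs of growing size."""
    atoms = [('x', lambda x: x)] + [(str(c), lambda x, c=c: c) for c in (1, 2)]
    ops = [('+', lambda a, b: a + b), ('*', lambda a, b: a * b)]
    level = list(atoms)
    while True:
        for expr in level:
            yield expr
        # Grow the next level by combining each current expression with an atom.
        nxt = []
        for name, f in ops:
            for ln, lf in level:
                for rn, rf in atoms:
                    nxt.append((f'({ln}{name}{rn})',
                                lambda x, lf=lf, rf=rf, f=f: f(lf(x), rf(x))))
        level = nxt

def synthesize(examples, limit=10000):
    """Return the name of the first enumerated program fitting all examples."""
    gen = enumerate_programs()
    for _ in range(limit):
        name, prog = next(gen)
        if all(prog(x) == y for x, y in examples):
            return name
    return None

# Learn a program equivalent to f(x) = 2*x + 1 from three examples.
print(synthesize([(0, 1), (1, 3), (2, 5)]))
```

Real systems replace the blind size-ordered enumeration with evolutionary operators (ADATE) or systematic exhaustive search with iterative deepening (MagicHaskeller), but the test against the example-based specification plays the same role as the fitness function described above.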
The early work in [[grammar induction]] (also known as grammatical inference) is related to inductive programming, as rewriting systems or logic programs can be used to represent production rules. In fact, early work on inductive inference treated grammar induction and Lisp program inference as essentially the same problem.<ref>
{{cite journal
|first1=D.|last1=Angluin|first2=C.H.|last2=Smith
|title=Inductive inference: Theory and methods
|publisher=ACM
|journal=ACM Computing Surveys
|volume=15|pages=237–269
|year=1983|doi=10.1145/356914.356918}}
</ref> The results in terms of learnability were related to classical concepts, such as identification-in-the-limit, as introduced in the seminal work of Gold.<ref>
{{cite journal
|first1=E.M.|last1=Gold
|title=Language identification in the limit
|publisher=Elsevier
|journal=Information and Control
|volume=10|issue=5|pages=447–474
|url=http://www.isrl.uiuc.edu/~amag/langev/paper/gold67limit.html
|year=1967|doi=10.1016/s0019-9958(67)91165-5}}
</ref> More recently, the language learning problem was addressed by the inductive programming community.<ref>{{cite journal|first1=Stephen|last1=Muggleton|title=Inductive Logic Programming: Issues, Results and the Challenge of Learning Language in Logic|journal=Artificial Intelligence|volume=114|pages=283–296|year=1999|doi=10.1016/s0004-3702(99)00067-3}}; here: Sect.2.1</ref><ref>
{{cite journal
|first1=J.R.|last1=Olsson|first2=D.M.W.|last2=Powers
|title=Machine learning of human language through automatic programming
|publisher=University of New South Wales
|journal=Proceedings of the International Conference on Cognitive Science
|pages=507–512
|year=2003}}
</ref>
Other ideas have also been explored with the common characteristic of using declarative languages for the representation of hypotheses. For instance, the use of higher-order features, schemes, or structured distances has been advocated for better handling of recursive data types and structures;<ref>
{{cite journal
|first1=J.W.|last1=Lloyd
|title=Knowledge Representation, Computation, and Learning in Higher-order Logic
|year=2001}}
</ref><ref>
{{cite book
|first1=J.W.|last1=Lloyd
|title=Logic for learning: learning comprehensible theories from structured data
|publisher=Springer
|year=2003}}
</ref><ref>
{{cite journal
|first1=V.|last1=Estruch|first2=C.|last2=Ferri|first3=J.|last3=Hernandez-Orallo|first4=M.J.|last4=Ramirez-Quintana
|title=Bridging the gap between distance and generalization
|publisher=Wiley
|journal=Computational Intelligence
|year=2014}}
</ref> abstraction has also been explored as a more powerful approach to [[cumulative learning]] and function invention.<ref>
{{cite journal
|first1=R.J.|last1=Henderson|first2=S.H.|last2=Muggleton
|title=Automatic invention of functional abstractions
|publisher=Imperial College Press
|journal=Advances in Inductive Logic Programming
|year=2012}}
</ref><ref>
{{cite arXiv
|first1=I.|last1=Hwang|first2=A.|last2=Stuhlmuller|first3=N.D.|last3=Goodman
|title=Inducing probabilistic programs by Bayesian program merging|arxiv=1110.5667
|year=2011}}
</ref>
One powerful paradigm that has recently been used for the representation of hypotheses in inductive programming (generally in the form of [[generative model]]s) is [[probabilistic programming language|probabilistic programming]] (and related paradigms, such as stochastic logic programs and Bayesian logic programming).<ref>
{{cite journal
|first1=S.|last1=Muggleton
|title=Learning stochastic logic programs
|journal=Electron. Trans. Artif. Intell.
|volume=4(B)
|pages=141–153
|year=2000}}
</ref><ref>
{{cite book
|first1=L.|last1=De Raedt|first2=K.|last2=Kersting
|title=Probabilistic inductive logic programming
|publisher=Springer
|year=2008}}
</ref><ref>
{{cite arXiv
|first1=I.|last1=Hwang|first2=A.|last2=Stuhlmuller|first3=N.D.|last3=Goodman
|title=Inducing probabilistic programs by Bayesian program merging|arxiv=1110.5667
|year=2011}}
</ref><ref>
{{cite journal
|first1=A.|last1=Stuhlmuller|first2=N.D.|last2=Goodman
|title=Reasoning about reasoning by nested conditioning: Modeling theory of mind with probabilistic programs
|publisher=Elsevier
|journal=Cognitive Systems Research
|year=2012}}
</ref>
Since then, these and many other areas have proven to be successful application niches for inductive programming, such as [[End-user development|end-user programming]],<ref>
{{cite book
|first1=H.|last1=Lieberman|first2=F.|last2=Paternò|first3=V.|last3=Wulf
|title=End user development
|publisher=Springer
|year=2006}}
</ref> the related areas of [[programming by example]]<ref>
{{cite book
|first1=H.|last1=Lieberman
|title=Your wish is my command: Programming by example
|publisher=Morgan Kaufmann
|year=2001}}
</ref> and [[programming by demonstration]],<ref>
{{cite book
|first1=A.|last1=Cypher|first2=D.C.|last2=Halbert
|title=Watch what I do: programming by demonstration
|publisher=MIT Press
|year=1993}}
</ref> and [[intelligent tutoring system]]s.
Other areas where inductive inference has recently been applied include [[knowledge acquisition]],<ref>
{{cite journal
|first1=U.|last1=Schmid|first2=M.|last2=Hofmann|first3=E.|last3=Kitzelmann
|title=Analytical inductive programming as a cognitive rule acquisition devise
|journal=Proceedings of the Second Conference on Artificial General Intelligence
|pages=162–167
|year=2009}}
</ref> [[artificial general intelligence]],<ref>
{{cite journal
|first1=N.|last1=Crossley|first2=E.|last2=Kitzelmann|first3=M.|last3=Hofmann|first4=U.|last4=Schmid
|title=Combining analytical and evolutionary inductive programming
|journal=Proceedings of the Second Conference on Artificial General Intelligence
|pages=19–24
|year=2009}}
</ref> [[reinforcement learning]] and theory evaluation,<ref>
{{cite journal
|first1=J.|last1=Hernandez-Orallo
|title=Constructive reinforcement learning
|journal=International Journal of Intelligent Systems
|volume=15 | issue = 3|pages=241–264
|year=2000
|doi=10.1002/(sici)1098-111x(200003)15:3<241::aid-int6>3.0.co;2-z}}
</ref><ref>
{{cite journal
|first1=C.|last1=Kemp|first2=N.|last2=Goodman|first3=J.B.|last3=Tenenbaum
|title=Learning and using relational theories
|journal=Advances in Neural Information Processing Systems
|pages=753–760
|year=2007}}
</ref> and [[cognitive science]] in general.<ref>
{{cite journal
|first1=U.|last1=Schmid|first2=E.|last2=Kitzelmann
|title=Inductive rule learning on the knowledge level
|journal=Cognitive Systems Research
|volume=12 | issue = 3
|pages=237–248
|year=2011|doi=10.1016/j.cogsys.2010.12.002}}
</ref><ref>
{{cite journal
|first1=A.|last1=Stuhlmuller|first2=N.D.|last2=Goodman
|title=Reasoning about reasoning by nested conditioning: Modeling theory of mind with probabilistic programs
|publisher=Elsevier
|journal=Cognitive Systems Research
|year=2012}}
</ref> There may also be prospective applications in intelligent agents, games, robotics, personalisation, ambient intelligence and human interfaces.