Symbolic artificial intelligence
Advances were also made in the theory of machine learning. [[Tom M. Mitchell|Tom Mitchell]] introduced [[version space learning]], which describes learning as a search through a space of hypotheses bounded by an upper, more general, boundary and a lower, more specific, boundary that together encompass all hypotheses consistent with the examples seen so far.<ref>{{harvc|in1=Michalski|in2=Carbonell|in3=Mitchell|year=1983 |c=Chapter 6: Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics |first1=Tom M. |last1=Mitchell |first2=Paul E. |last2=Utgoff |first3=Ranan |last3=Banerji}}</ref> More formally, [[Leslie Valiant|Valiant]] introduced [[Probably approximately correct learning|Probably Approximately Correct learning]] (PAC learning), a framework for the mathematical analysis of machine learning.<ref>{{Cite journal| doi = 10.1145/1968.1972| issn = 0001-0782| volume = 27| issue = 11| pages = 1134–1142| last = Valiant| first = L. G.| title = A theory of the learnable| journal = Communications of the ACM| date = 1984-11-05| s2cid = 12837541| doi-access = free}}</ref>
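
The approach can be conveyed with a minimal sketch (an informal illustration with made-up attributes and examples, not Mitchell's original formulation): positive examples generalize the specific boundary S and negative examples specialize the general boundary G, while the pruning steps of the full candidate-elimination algorithm are omitted for brevity.

<syntaxhighlight lang="python">
# Illustrative sketch of version space learning over conjunctive
# attribute-value hypotheses: '?' matches any value, '0' matches nothing.
# (The pruning steps of the full candidate-elimination algorithm are omitted.)

def matches(h, x):
    """True if hypothesis h covers example x."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def generalize(h, x):
    """Minimally generalize h so that it covers the positive example x."""
    return tuple(xv if hv == '0' else (hv if hv == xv else '?')
                 for hv, xv in zip(h, x))

def specialize(h, x, values):
    """All minimal specializations of h that exclude the negative example x."""
    return [h[:i] + (v,) + h[i + 1:]
            for i, hv in enumerate(h) if hv == '?'
            for v in values[i] if v != x[i]]

def candidate_elimination(examples, values):
    S = [tuple('0' for _ in values)]   # most specific boundary
    G = [tuple('?' for _ in values)]   # most general boundary
    for x, positive in examples:
        if positive:
            G = [g for g in G if matches(g, x)]
            S = [s if matches(s, x) else generalize(s, x) for s in S]
        else:
            S = [s for s in S if not matches(s, x)]
            G = [g2 for g in G
                 for g2 in ([g] if not matches(g, x) else specialize(g, x, values))]
    return S, G

# Toy domain with two attributes: sky and temperature.
values = [('sunny', 'rainy'), ('warm', 'cold')]
examples = [(('sunny', 'warm'), True), (('rainy', 'cold'), False)]
print(candidate_elimination(examples, values))
# S = [('sunny', 'warm')]; G = [('sunny', '?'), ('?', 'warm')]
</syntaxhighlight>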
 
Symbolic machine learning encompassed more than learning by example. For example, [[John Robert Anderson (psychologist)|John Anderson]] provided a [[cognitive model]] of human learning in which skill practice results in the compilation of rules from a declarative format to a procedural format with his [[ACT-R]] [[cognitive architecture]]. For instance, a student might learn to apply "Supplementary angles are two angles whose measures sum to 180 degrees" as several different procedural rules, such as one stating that if X and Y are supplementary and X is known, then Y will be 180 − X. He called his approach "knowledge compilation". ACT-R has been used successfully to model aspects of human cognition, such as learning and retention. It is also used in [[intelligent tutoring systems]], called [[cognitive tutors]], to teach geometry, computer programming, and algebra to school children.<ref name="pump">{{Cite journal| volume = 8| pages = 30–43| last1 = Koedinger| first1 = K. R.| last2 = Anderson| first2 = J. R.| last3 = Hadley| first3 = W. H.| last4 = Mark| first4 = M. A.| last5 = others| title = Intelligent tutoring goes to school in the big city| journal = International Journal of Artificial Intelligence in Education| accessdate = 2012-08-18| date = 1997| url = http://telearn.archives-ouvertes.fr/hal-00197383/}}</ref>
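
The declarative-to-procedural distinction can be illustrated with a toy sketch (purely illustrative, not ACT-R's production system): the single declarative constraint below is "compiled" into directional rules, each handling one direction of use.

<syntaxhighlight lang="python">
# Toy illustration (not ACT-R itself) of "knowledge compilation":
# one declarative fact about supplementary angles is compiled into
# two directional procedural rules that fire on whichever angle is known.

# Declarative form: a single constraint relating the two angles.
def supplementary(x, y):
    return x + y == 180

# Procedural (compiled) form: if X and Y are supplementary and X is known,
# conclude Y = 180 - X, and symmetrically when Y is known.
def rule_find_y(known_x):
    return 180 - known_x

def rule_find_x(known_y):
    return 180 - known_y

print(supplementary(110, 70))   # True  (checking the declarative constraint)
print(rule_find_y(110))         # 70    (applying a compiled procedural rule)
</syntaxhighlight>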
 
[[Inductive logic programming]] was another approach to learning that allowed logic programs to be synthesized from input–output examples. For example, [[Ehud Shapiro]]'s MIS (Model Inference System) could synthesize [[Prolog]] programs from examples.<ref>{{Cite conference| conference = IJCAI| volume = 2| pages = 1064| last = Shapiro| first = Ehud Y| title = The Model Inference System| book-title = Proceedings of the 7th international joint conference on Artificial intelligence| date = 1981}}</ref> [[John R. Koza]] applied [[genetic algorithms]] to [[program synthesis]] to create [[genetic programming]], which he used to synthesize LISP programs. Finally, [[Zohar Manna]] and [[Richard Waldinger]] provided a more general approach to program synthesis that synthesizes a [[functional programming|functional program]] in the course of proving its specification to be correct.<ref>{{Cite journal| doi = 10.1145/357084.357090| volume = 2| pages = 90–121| last1 = Manna| first1 = Zohar| last2 = Waldinger| first2 = Richard| title = A Deductive Approach to Program Synthesis| journal = ACM Trans. Program. Lang. Syst.| date = 1980-01-01| issue = 1| s2cid = 14770735}}</ref>
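
The flavor of synthesis from input–output examples can be conveyed by a deliberately small sketch: an enumerative search over compositions of made-up primitives, far simpler than MIS's clause refinement, genetic programming, or Manna and Waldinger's deductive method, but driven by the same kind of example-based specification.

<syntaxhighlight lang="python">
# Toy sketch of induction from input-output examples (illustrative only;
# real systems search much richer spaces such as Horn clauses or Lisp
# expression trees rather than this fixed set of primitives).

import itertools

# Candidate program space: compositions of a few primitive functions.
PRIMITIVES = {
    'double':    lambda n: 2 * n,
    'increment': lambda n: n + 1,
    'square':    lambda n: n * n,
}

def synthesize(examples, max_depth=3):
    """Return the first pipeline of primitives consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            def program(n, names=names):
                for name in names:
                    n = PRIMITIVES[name](n)
                return n
            if all(program(i) == o for i, o in examples):
                return names
    return None

# Input-output examples of the target function f(n) = (n + 1)^2.
print(synthesize([(1, 4), (2, 9), (3, 16)]))   # ('increment', 'square')
</syntaxhighlight>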