Symbolic artificial intelligence: Difference between revisions

Line 6:
Symbolic AI was the dominant [[paradigm]] of AI research from the mid-1950s until the mid-1990s.{{sfn|Kolata|1982}} Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with [[artificial general intelligence]] and considered this the ultimate goal of their field.{{Citation needed|date=March 2024}} An early boom, with early successes such as the [[Logic Theorist]] and [[Arthur Samuel (computer scientist)|Samuel]]'s [[Arthur Samuel (computer scientist)|checkers-playing program]], led to unrealistic expectations and promises and was followed by the first [[AI winter|AI Winter]] as funding dried up.{{sfn|Kautz|2022|pp=107-109}}{{sfn|Russell|Norvig|2021|p=19}} A second boom (1969–1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.{{sfn|Russell|Norvig|2021|pp=22-23}}{{sfn|Kautz|2022|pp=109-110}} That boom, along with some early successes such as [[XCON]] at [[Digital Equipment Corporation|DEC]], was again followed by disappointment.{{sfn|Kautz|2022|pp=109-110}} Problems arose with knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-___domain problems.
A second AI Winter (1988–2011) followed.{{sfn|Kautz|2022|p=110}} Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition.{{sfn|Kautz|2022|pp=110-111}} Uncertainty was addressed with formal methods such as [[hidden Markov model]]s, [[Bayesian reasoning]], and [[statistical relational learning]].{{sfn|Russell|Norvig|2021|p=25}}{{sfn|Kautz|2022|p=111}} Symbolic machine learning addressed the knowledge acquisition problem with contributions including [[Version space learning|version space learning]], [[Leslie Valiant|Valiant]]'s [[Probably approximately correct learning|PAC learning]], [[Ross Quinlan|Quinlan]]'s [[ID3 algorithm|ID3]] [[decision-tree]] learning, [[Case-based reasoning|case-based learning]], and [[inductive logic programming]] to learn relations.{{sfn|Kautz|2020|pp=110-111}}
 
[[Artificial neural network|Neural networks]], a subsymbolic approach, had been pursued from early days and reemerged strongly in 2012. Early examples are [[Frank Rosenblatt|Rosenblatt]]'s [[perceptron]] learning work, the [[backpropagation]] work of Rumelhart, Hinton and Williams,<ref>{{cite journal| doi = 10.1038/323533a0| issn = 1476-4687| volume = 323| issue = 6088| pages = 533–536| last1 = Rumelhart| first1 = David E.| last2 = Hinton| first2 = Geoffrey E.| last3 = Williams| first3 = Ronald J.| title = Learning representations by back-propagating errors| journal = Nature| date = 1986 | bibcode = 1986Natur.323..533R| s2cid = 205001834}}</ref> and work in [[convolutional neural network]]s by LeCun et al. in 1989.<ref>{{Cite journal| volume = 1| issue = 4| pages = 541–551| last1 = LeCun| first1 = Y.| last2 = Boser| first2 = B.| last3 = Denker| first3 = J.| last4 = Henderson| first4 = D.| last5 = Howard| first5 = R.| last6 = Hubbard| first6 = W.| last7 = Jackel| first7 = L.| title = Backpropagation Applied to Handwritten Zip Code Recognition| journal = Neural Computation| date = 1989| doi = 10.1162/neco.1989.1.4.541| s2cid = 41312633}}</ref> However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of [[GPUs]] to enormously increase the power of neural networks."{{sfn|Marcus|Davis|2019}} Over the next several years, [[deep learning]] had spectacular success in handling vision, [[speech recognition]], speech synthesis, image generation, and machine translation.
However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness have become more apparent in deep learning approaches, an increasing number of AI researchers have called for [[Neuro-symbolic AI|combining]] the best of both the symbolic and neural network approaches<ref name="Rossi">
{{cite web |last1=Rossi |first1=Francesca |title=Thinking Fast and Slow in AI |url=https://aaai-2022.virtualchair.net/plenary_13.html |publisher=AAAI |access-date=5 July 2022}}</ref><ref name="Selman">
{{cite web |last1=Selman |first1=Bart |title=AAAI Presidential Address: The State of AI |url=https://aaai-2022.virtualchair.net/plenary_2.html |publisher=AAAI |access-date=5 July 2022}}</ref> and to address areas that both approaches have difficulty with, such as [[Commonsense reasoning|common-sense reasoning]].{{sfn|Marcus|Davis|2019}}
 
== History ==
Line 42:
[[Stuart J. Russell|Stuart Russell]] and [[Peter Norvig]] wrote "Aeronautical engineering texts do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool even other pigeons.'"{{sfn|Russell|Norvig|2021|p=2}}}}
His laboratory at [[Stanford University|Stanford]] ([[Stanford Artificial Intelligence Laboratory|SAIL]]) focused on using formal [[logic]] to solve a wide variety of problems, including [[knowledge representation]], planning and [[machine learning|learning]].{{sfn|McCorduck|2004|pp=251–259}}
Logic was also the focus of the work at the [[University of Edinburgh]] and elsewhere in Europe which led to the development of the programming language [[Prolog]] and the science of [[logic programming]].{{sfn|Crevier|1993|pp=193–196}}{{sfn|Howe|1994}}
 
===== Modeling implicit common-sense knowledge with frames and scripts: the "scruffies" =====
Line 85:
* [[Internist-I|INTERNIST]] and [[CADUCEUS (expert system)|CADUCEUS]], which tackled internal medicine diagnosis. INTERNIST attempted to capture the expertise of the chairman of internal medicine at the [[University of Pittsburgh School of Medicine]], while CADUCEUS could eventually diagnose up to 1000 different diseases.
* GUIDON, which showed how a knowledge base built for expert problem solving could be repurposed for teaching.{{sfn|Clancey|1987}}
* [[XCON]], to configure [[VAX|VAX computers]], a then laborious process that could take up to 90 days. XCON reduced the time to about 90 minutes.{{sfn|Kautz|2022|p=110}}
 
[[DENDRAL]] is considered the first expert system that relied on knowledge-intensive problem solving. It is described below by [[Ed Feigenbaum]], from a ''[[Communications of the ACM]]'' interview, [https://cacm.acm.org/magazines/2010/6/92472-an-interview-with-ed-feigenbaum/fulltext An Interview with Ed Feigenbaum]: