{{Artificial intelligence|Approaches}}
[[File:Artificial-Intelligence.jpg|thumb|right|alt=An artistic representation of AI where a cross section of a human head and brain in profile is mixed with a circuit like background and overlay|An artistic representation of AI]]
In [[artificial intelligence]], '''symbolic artificial intelligence''' is the term for the collection of all methods in artificial intelligence research that are based on high-level [[physical symbol systems hypothesis|symbolic]] (human-readable) representations of problems, [[Formal logic|logic]] and [[search algorithm|search]].<ref>{{Cite journal|last1=Garnelo|first1=Marta|last2=Shanahan|first2=Murray|date=2019-10-01|title=Reconciling deep learning with symbolic artificial intelligence: representing objects and relations}}</ref>
Symbolic AI was the dominant [[paradigm]] of AI research from the mid-1950s until the mid-1990s.{{sfn|Kolata|1982}} Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with [[artificial general intelligence]] and considered this the ultimate goal of their field.{{sfn|Russell|Norvig|2021|p=24}} An early boom, with successes such as the [[Logic Theorist]] and [[Arthur Samuel (computer scientist)|Samuel's Checkers Playing Program]], led to unrealistic expectations and promises and was followed by the First [[AI winter|AI Winter]] as funding dried up.{{sfn|Kautz|2022|pp=107-109}}{{sfn|Russell|Norvig|2021|p=19}} A second boom (1969–1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.{{sfn|Russell|Norvig|2021|pp=22-23}}{{sfn|Kautz|2022|pp=109-110}} That boom, and some early successes such as [[XCON]] at [[Digital Equipment Corporation|DEC]], was again followed by disappointment.{{sfn|Kautz|2022|pp=109-110}} Difficulties arose with knowledge acquisition, with maintaining large knowledge bases, and with brittleness in handling out-of-___domain problems.
A second AI Winter (1988–2011) followed.{{sfn|Kautz|2022|p=110}} Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition.{{sfn|Kautz|2022|pp=110-111}} Uncertainty was addressed with formal methods such as [[hidden Markov model]]s, [[Bayesian reasoning]], and [[statistical relational learning]].{{sfn|Russell|Norvig|2021|p=25}}{{sfn|Kautz|2022|p=111}} Symbolic machine learning addressed the knowledge acquisition problem with contributions including [[Version space learning|Version Space]], [[Leslie Valiant|Valiant]]'s [[Probably approximately correct learning|PAC learning]], [[Ross Quinlan|Quinlan]]'s [[ID3 algorithm|ID3]] [[decision-tree]] learning, [[Case-based reasoning|case-based learning]], and [[inductive logic programming]] to learn relations.{{sfn|Kautz|2020|pp=110-111}}
In contrast to the knowledge-intensive approach of Meta-DENDRAL, [[Ross Quinlan]] invented a ___domain-independent approach to statistical classification, [[decision tree learning]], starting with [[ID3 algorithm|ID3]]<ref>{{harvc|in1=Michalski|in2=Carbonell|in3=Mitchell|year=1983|c=Chapter 15: Learning Efficient Classification Procedures and their Application to Chess End Games |first=J. Ross |last=Quinlan}}</ref> and later extending its capabilities to [[C4.5]].<ref>{{Cite book| edition = 1st | publisher = Morgan Kaufmann| isbn = 978-1-55860-238-0| last = Quinlan| first = J. Ross| title = C4.5: Programs for Machine Learning| ___location = San Mateo, Calif| date = 1992-10-15}}</ref> The decision trees created are [[glass box]] classifiers, with human-interpretable classification rules.
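The core of ID3 can be stated briefly: at each node, split the training examples on the attribute yielding the greatest information gain (reduction in entropy), and recurse until a node is pure. The following is a minimal sketch of that idea; the data, attribute names, and tie-breaking are illustrative and not taken from Quinlan's papers:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, labels, attribute):
    """Entropy reduction from splitting the examples on one attribute."""
    base = entropy(labels)
    total = len(examples)
    remainder = 0.0
    for value in {ex[attribute] for ex in examples}:
        subset = [lab for ex, lab in zip(examples, labels) if ex[attribute] == value]
        remainder += len(subset) / total * entropy(subset)
    return base - remainder

def id3(examples, labels, attributes):
    """Return a decision tree as nested dicts; leaves are class labels."""
    if len(set(labels)) == 1:
        return labels[0]                              # pure node: stop
    if not attributes:
        return Counter(labels).most_common(1)[0][0]   # majority vote
    best = max(attributes, key=lambda a: information_gain(examples, labels, a))
    tree = {best: {}}
    for value in {ex[best] for ex in examples}:
        idx = [i for i, ex in enumerate(examples) if ex[best] == value]
        tree[best][value] = id3([examples[i] for i in idx],
                                [labels[i] for i in idx],
                                [a for a in attributes if a != best])
    return tree

# Toy weather data (illustrative): should we play outside?
examples = [
    {"outlook": "sunny", "windy": "false"},
    {"outlook": "sunny", "windy": "true"},
    {"outlook": "rain",  "windy": "false"},
    {"outlook": "rain",  "windy": "true"},
    {"outlook": "sunny", "windy": "true"},
]
labels = ["yes", "yes", "yes", "no", "yes"]
tree = id3(examples, labels, ["outlook", "windy"])
```

The nested-dict tree is exactly the "glass box" property described above: each path from root to leaf reads off as a human-interpretable classification rule, e.g. "if outlook is rain and windy is true then no".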
Advances were made in understanding machine learning theory, too. [[Tom M. Mitchell|Tom Mitchell]] introduced [[version space learning]] which describes learning as search through a space of hypotheses, with upper, more general, and lower, more specific, boundaries encompassing all viable hypotheses consistent with the examples seen so far.<ref>{{harvc|in1=Michalski|in2=Carbonell|in3=Mitchell|year=1983 |c=Chapter 6: Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics |first1=Tom M. |last1=Mitchell |first2=Paul E. |last2=Utgoff |first3=Ranan |last3=Banerji}}</ref> More formally, [[Leslie Valiant|Valiant]] introduced [[Probably approximately correct learning|Probably Approximately Correct Learning]] (PAC Learning), a framework for the mathematical analysis of machine learning.<ref>{{Cite journal| doi = 10.1145/1968.1972| issn = 0001-0782| volume = 27| issue = 11| pages = 1134–1142| last = Valiant| first = L. G.| title = A theory of the learnable| journal = Communications of the ACM}}</ref>
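The two-boundary search that version space learning describes can be sketched as a simplified candidate-elimination procedure over conjunctive hypotheses, where each attribute slot is either a required value or the wildcard "?". The attribute values below are illustrative, not drawn from Mitchell's chapter; positive examples minimally generalize the specific boundary S, and negative examples minimally specialize the general boundary G:

```python
def matches(hypothesis, example):
    """A conjunctive hypothesis matches if every slot is '?' or equal."""
    return all(h == "?" or h == x for h, x in zip(hypothesis, example))

def candidate_elimination(examples):
    """examples: list of (attribute-tuple, is_positive).
    Returns the (S, G) boundaries of the version space."""
    n = len(examples[0][0])
    S = None                      # most specific boundary, not yet initialized
    G = [("?",) * n]              # most general boundary: matches everything
    for x, positive in examples:
        if positive:
            # Minimally generalize S to cover the positive example.
            S = list(x) if S is None else [s if s == xi else "?"
                                           for s, xi in zip(S, x)]
            # Drop general hypotheses that fail to cover it.
            G = [g for g in G if matches(g, x)]
        else:
            # Minimally specialize each g in G to exclude the negative
            # example, staying consistent with S.
            new_G = []
            for g in G:
                if not matches(g, x):
                    new_G.append(g)
                    continue
                for i in range(n):
                    if g[i] == "?" and S is not None and S[i] != "?" and S[i] != x[i]:
                        spec = list(g)
                        spec[i] = S[i]
                        new_G.append(tuple(spec))
            G = new_G
    return tuple(S), G

# Illustrative run over (sky, temperature) attributes:
S, G = candidate_elimination([
    (("sunny", "warm"), True),
    (("rainy", "cold"), False),
    (("sunny", "hot"),  True),
])
```

Here S and G converge on the single hypothesis ("sunny", "?"): everything between the boundaries remains a viable hypothesis consistent with the examples seen so far, which is precisely the version space.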
Symbolic machine learning encompassed more than learning by example. [[John Robert Anderson (psychologist)|John Anderson]], for instance, provided a [[cognitive model]] of human learning in which skill practice results in the compilation of rules from a declarative format to a procedural format in his [[ACT-R]] [[cognitive architecture]]. For example, a student might learn to apply "Supplementary angles are two angles whose measures sum to 180 degrees" as several different procedural rules; one rule might say that if X and Y are supplementary and you know X, then Y will be 180 - X. He called his approach "knowledge compilation". [[ACT-R]] has been used successfully to model aspects of human cognition, such as learning and retention. ACT-R is also used in [[intelligent tutoring systems]], called [[cognitive tutors]], to successfully teach geometry, computer programming, and algebra to school children.<ref name="pump">{{Cite journal| volume = 8| pages = 30–43| last1 = Koedinger| first1 = K. R.| last2 = Anderson| first2 = J. R.| last3 = Hadley| first3 = W. H.| last4 = Mark| first4 = M. A.| last5 = others| title = Intelligent tutoring goes to school in the big city| journal = International Journal of Artificial Intelligence in Education (IJAIED)| access-date = 2012-08-18| date = 1997| url = http://telearn.archives-ouvertes.fr/hal-00197383/}}</ref>
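The declarative-to-procedural shift in the supplementary-angles example can be illustrated with a toy production rule; the fact representation and function names here are a sketch, not ACT-R's actual syntax:

```python
# Declarative memory (illustrative): facts stored as (relation, ...) tuples.
# The declarative form of the knowledge is the stored statement that
# supplementary angles sum to 180 degrees.
facts = {
    ("supplementary", "X", "Y"),
    ("measure", "X", 60),
}

def fire_supplementary_rule(facts):
    """Compiled procedural rule (one direction of use of the fact):
    IF supplementary(A, B) and measure(A, m) THEN conclude measure(B, 180 - m)."""
    derived = set()
    for f in facts:
        if f[0] == "supplementary":
            _, a, b = f
            for g in facts:
                if g[0] == "measure" and g[1] == a:
                    derived.add(("measure", b, 180 - g[2]))
    return facts | derived

result = fire_supplementary_rule(facts)
```

Knowledge compilation turns the single declarative statement into directly executable condition-action rules like this one, one per direction of use; a second compiled rule would fire when Y, rather than X, is the known angle.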
* {{Cite journal |doi=10.1145/360018.360022 |last1=Newell |first1=Allen |last2=Simon |first2=H. A. |year=1976 |title=Computer Science as Empirical Inquiry: Symbols and Search |volume=19 |issue=3 |pages=113–126 |journal=Communications of the ACM |author-link=Allen Newell |author-link2=Herbert A. Simon |doi-access=free}}
* {{cite book|last=Nilsson|first=Nils|author-link=Nils Nilsson (researcher)|year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann|isbn=978-1-55860-467-4 |url=https://archive.org/details/artificialintell0000nils|url-access=registration|access-date=18 November 2019|url-status=live|archive-date=26 July 2020|archive-url=https://web.archive.org/web/20200726131654/https://archive.org/details/artificialintell0000nils}}
* {{Citation |last=Olazaran |first=Mikel |title=Advances in Computers Volume 37 |chapter=A Sociological History of the Neural Network Controversy |date=1993-01-01 |chapter-url=https://www.sciencedirect.com/science/article/pii/S0065245808604088}}
* {{cite book |last=Pearl |first=J. |year=1988 |title=Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference |___location=San Mateo, California |publisher=Morgan Kaufmann |isbn=978-1-55860-479-7 |oclc=249625842}}
* {{Cite book|first1=Stuart J.|last1=Russell|author1-link=Stuart J. Russell|first2=Peter|last2=Norvig |author2-link=Peter Norvig|title=[[Artificial Intelligence: A Modern Approach]]|year=2021|edition=4th |isbn=978-0-13-461099-3 |lccn=20190474|publisher=Pearson|___location=Hoboken}}