{{Short description|none}}
{{See also|History of artificial intelligence|Progress in artificial intelligence}}
{{Use dmy dates|date=August 2019}}
[[File:Ai training compute doubling v2.png|thumb|The training computation of notable AI systems through time]]
This is a timeline of [[artificial intelligence]], sometimes alternatively called [[synthetic intelligence]].
==Antiquity, Classical and Medieval eras==
{| class="wikitable"
|-
! Date
! Development
|-
! rowspan=2 | Antiquity
|Greek myths of [[Hephaestus]] and [[Pygmalion (mythology)|Pygmalion]] incorporated the idea of intelligent [[automata]] (such as [[Talos]]) and artificial beings (such as [[Galatea (mythological statue)|Galatea]] and [[Pandora]]).{{sfn|McCorduck|2004|pp=4–5}}
|-
| [[Cult image|Sacred mechanical statues]] built in [[ancient Egypt|Egypt]] and [[ancient Greece|Greece]] were believed to be capable of wisdom and emotion. [[Hermes Trismegistus]] would write "they have ''sensus'' and ''spiritus'' ... by discovering the true nature of the gods, man has been able to reproduce it."{{sfn|McCorduck|2004|pp=4–5}}
|-
! 10th century BC
| Yan Shi presented [[King Mu of Zhou]] with mechanical men which were capable of moving their bodies independently.{{sfn|Needham|1986|p=53}}
|-
! 384 BC–322 BC
| [[Aristotle]] described the [[syllogism]], a method of formal, mechanical thought in the ''[[Organon]]''.<ref>{{Cite book|title=The Organon|publisher = Random House with Oxford University Press |year=1941 |editor=Richard McKeon |editor-link = Richard McKeon}}</ref><ref>{{Cite journal|title=Aristotle Writing Science: An Application of His Theory|journal = Journal of Technical Writing and Communication|volume = 46|pages = 83–104|last=Giles|first=Timothy|doi=10.1177/0047281615600633|year = 2016|s2cid = 170906960}}</ref>{{sfn|Russell|Norvig|2021|p=6}} Aristotle also described [[means–ends analysis]] (an algorithm for [[automated planning and scheduling|planning]]) in ''[[Nicomachean Ethics]]'', the same algorithm used by [[Allen Newell|Newell]] and [[Herbert A. Simon|Simon]]'s [[General Problem Solver]] (1959).{{sfn|Russell|Norvig|2021|p=7}}
|-
!3rd century BC
|[[Ctesibius]] invents a mechanical water clock with an alarm. This was the first example of a feedback mechanism.{{citation needed|date=October 2022}}
|-
! 1st century
| [[Hero of Alexandria]] created mechanical men and other [[automaton]]s.<ref>{{Harvnb|McCorduck|2004|p=6}}</ref> He produced what may have been "the world's first practical programmable machine":{{sfn|Schmidhuber|2022}} an automatic theatre.
|-
! 260
| [[Porphyry (philosopher)|Porphyry]] wrote ''Isagogê'' which categorized knowledge and logic, including a drawing of what would later be called a "[[semantic net]]".{{sfn|Russell|Norvig|2021|p=341}}
|-
! ~800
| [[Jabir ibn Hayyan]] developed the [[Alchemy and chemistry in Islam|Arabic alchemical]] theory of ''[[Takwin]]'', the artificial creation of life in the laboratory, up to and including [[human]] life.<ref>{{Citation |author=O'Connor, Kathleen Malone |title=The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam |publisher=[[University of Pennsylvania]] |year=1994 |pages=1–435 |url=http://repository.upenn.edu/dissertations/AAI9503804 |access-date=10 January 2007 |postscript=. |archive-date=5 December 2019 |archive-url=https://web.archive.org/web/20191205222650/https://repository.upenn.edu/dissertations/AAI9503804/ |url-status=live }}</ref>
|-
! rowspan = 2 | 9th century
| The [[Banū Mūsā brothers]] created a [[Computer program|programmable]] music automaton described in their ''Book of Ingenious Devices:'' a steam-driven flute controlled by a program represented by pins on a revolving cylinder.<ref>{{cite book |last1=|editor1-last=Hill |editor1-first=Donald R. |editor1-link = Donald Hill |title=The Book of Ingenious Devices |date=1979 |publisher=D. Reidel |___location=Dortrecht, Netherlands; Boston; London |isbn=978-90277-0-833-5 |url=https://books.google.com/books?id=HSL2CAAAQBAJ |ref=none |chapter=|orig-year=9th century}}</ref> This was "perhaps the first machine with a [[stored program]]".{{sfn|Schmidhuber|2022}}
|-
| [[al-Khwarizmi]] wrote textbooks with precise step-by-step methods for arithmetic and algebra, used in the Islamic world, India and Europe until the 16th century. The word "[[algorithm]]" is derived from his name.{{sfn|Russell|Norvig|2021|p=9}}
|-
! 1206
| [[Ismail al-Jazari]] created a [[Computer program|programmable]] orchestra of mechanical human beings.<ref>[http://www.shef.ac.uk/marcoms/eview/articles58/robot.html A Thirteenth Century Programmable Robot] {{webarchive |url=https://web.archive.org/web/20071219095203/http://www.shef.ac.uk/marcoms/eview/articles58/robot.html |date=19 December 2007 }}</ref>
|-
! 1275
| [[Ramon Llull]], Mallorcan [[theology|theologian]], invented the ''[[Ars Magna (Ramon Llull)|Ars Magna]]'', a tool for combining concepts mechanically, based on an [[Islamic astrology|Arabic astrological]] tool, the [[Zairja]]. Llull described his machines as mechanical entities that could combine basic truths and facts to produce advanced knowledge. The method would be developed further by [[Gottfried Wilhelm Leibniz]] in the 17th century.<ref>{{Harvnb|McCorduck|2004|pp=10–12, 37}}; {{Harvnb|Russell|Norvig|2021|p=6}}</ref>
|-
! ~1500
| [[Paracelsus]] claimed to have created an artificial man out of magnetism, sperm and alchemy.{{sfn|McCorduck|2004|pp=13–14}}
|-
! ~1580
| Rabbi [[Judah Loew ben Bezalel]] of [[Prague]] is said to have invented the [[Golem]], a clay man brought to life.<ref>{{Harvnb|McCorduck|2004|pp=14–15}}, {{Harvnb|Buchanan|2005|p=50}}</ref>
|}
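Aristotle's syllogism (the 384 BC–322 BC entry above) matters to AI precisely because it is a mechanical rule: from "all A are B" and "all B are C" one may conclude "all A are C", with no understanding of the terms required. A minimal sketch in Python, using an illustrative tuple encoding of statements (not any historical notation):

```python
# Mechanical application of the syllogism "Barbara":
# all A are B, all B are C  =>  all A are C.
def barbara(premises):
    """Close a set of ("all", X, Y) statements under transitivity."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for (_, a, b) in list(facts):
            for (_, c, d) in list(facts):
                if b == c and ("all", a, d) not in facts:
                    facts.add(("all", a, d))
                    changed = True
    return facts

premises = {("all", "men", "mortals"), ("all", "Greeks", "men")}
derived = barbara(premises)
print(("all", "Greeks", "mortals") in derived)  # True
```

Running the closure on the classic premises derives ("all", "Greeks", "mortals"), the kind of formal, content-free inference later automated by programs such as the Logic Theorist.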
== 1600–1900 ==
{| class="wikitable"
|-
! Date
! Development
|-
! 1620
| [[Francis Bacon]] developed an empirical theory of knowledge and introduced inductive logic in his work ''[[Novum Organum]]'', a play on [[Aristotle]]'s title ''[[Organon]]''.<ref>{{Cite book|title=The New Organon: Novem Organum Scientiarum |year=1620 |author=Sir [[Francis Bacon]]}}</ref><ref>{{Cite book|title=Francis Bacon: The New Organon (Cambridge Texts in the History of Philosophy) |year=2000 |author=Sir [[Francis Bacon]] |publisher=Cambridge University Press}}</ref>{{sfn|Russell|Norvig|2021|p=6}}
|-
! 1623
| [[Wilhelm Schickard]] drew a calculating clock in a letter to [[Kepler]]. This was the first of five unsuccessful attempts at designing a ''direct entry'' calculating clock in the 17th century (including the designs of [[Tito Livio Burattini|Tito Burattini]], [[Samuel Morland]] and [[René Grillet de Roven|René Grillet]]).{{efn|Please see [[Mechanical calculator#Other calculating machines]]}}
|-
! 1642
| [[Blaise Pascal]] invented a [[mechanical calculator]],{{efn|Please see: [[Pascal's calculator#Competing designs]]}} the first [[Digital data|digital]] [[Pascal's calculator|calculating machine]].<ref>{{Harvnb|Russell|Norvig|2021|p=6}}; {{Harvnb|McCorduck|2004|p=26}}</ref>
|-
! 1647
| [[René Descartes]] proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").<ref>{{Harvnb|Russell|Norvig|2021|p=6}}; {{Harvnb|McCorduck|2004|pp=36–40}}</ref>
|-
! 1651
| [[Thomas Hobbes]] published ''[[Leviathan (Hobbes book)|Leviathan]]'', presenting a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".<ref>{{Harvnb|Russell|Norvig|2021|p=6}}</ref>{{sfn|McCorduck|2004|p=42}}
|-
! 1654
| [[Blaise Pascal]] described how to find [[expected value]]s in probability; in 1662 [[Antoine Arnauld]] published a formula to find the maximum [[expected value]]; and in 1663 [[Gerolamo Cardano]]'s solution to the same problems was published, 116 years after it was written. The theory of probability was further developed by [[Jacob Bernoulli]] and [[Pierre-Simon Laplace]] in the 18th century.{{sfn|Russell|Norvig|2021|p=8}} Probability theory would become central to AI and machine learning from the 1990s onward.
|-
! 1672
| [[Gottfried Wilhelm Leibniz]] improved the earlier machines, making the [[Stepped Reckoner]] to do [[multiplication]] and [[Division (mathematics)|division]].{{sfn|McCorduck|2004|pp=41–42}}
|-
! 1676
| [[Gottfried Wilhelm Leibniz|Leibniz]] derived the [[chain rule]].<ref name="leibniz1676">{{Cite book|last=Leibniz|first=Gottfried Wilhelm Freiherr von|url=https://books.google.com/books?id=bOIGAAAAYAAJ&q=leibniz+altered+manuscripts&pg=PA90|title=The Early Mathematical Manuscripts of Leibniz: Translated from the Latin Texts Published by Carl Immanuel Gerhardt with Critical and Historical Notes (Leibniz published the chain rule in a 1676 memoir)|date=1920|publisher=Open court publishing Company|isbn=9780598818461 |language=en}}</ref> The rule is used by AI to train neural networks, for example the [[backpropagation]] algorithm uses the chain rule.{{sfn|Schmidhuber|2022}}
|-
! 1679
| [[Gottfried Wilhelm Leibniz|Leibniz]] developed a universal calculus of reasoning ([[alphabet of human thought]]) by which arguments could be decided mechanically. It assigned a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.<ref>{{Harvnb|Russell|Norvig|2021|p=6}}; {{Harvnb|McCorduck|2004|pp=41–42}}</ref>
|-
! 1726
| [[Jonathan Swift]] published ''[[Gulliver's Travels]]'', which includes this description of [[the Engine]], a machine on the island of [[Gulliver's Travels#Part III: A Voyage to Laputa, Balnibarbi, Luggnagg, Glubbdubdrib and Japan|Laputa]]: "a Project for improving speculative Knowledge by practical and mechanical Operations" by using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."<ref>Quoted in {{Harvnb|McCorduck|2004|p=317}}</ref> The machine is a parody of ''[[Ars Magna (Ramon Llull)|Ars Magna]]'', one of the inspirations of [[Gottfried Wilhelm Leibniz]]' mechanism.
|-
! 1738
| [[Daniel Bernoulli]] introduces the concept of "[[utility (economics)|utility]]", a generalization of probability, the basis of [[economics]] and [[decision theory]], and the mathematical foundation for the way AI represents the "goals" of [[intelligent agent]]s.{{sfn|Russell|Norvig|2021|p=10}}
|-
! 1739
| [[David Hume]] described [[Inductive reasoning|induction]], the logical method of learning generalities from examples.{{sfn|Russell|Norvig|2021|p=6}}
|-
! 1750
| [[Julien Offray de La Mettrie]] published ''[[L'Homme Machine]]'', which argued that human thought is strictly mechanical.{{sfn|McCorduck|2004|pp=43}}
|-
!1763
|[[Thomas Bayes]]'s work ''[[An Essay Towards Solving a Problem in the Doctrine of Chances]]'', published two years after his death, laid the foundations of [[Bayes' theorem]], which is used in modern AI in [[Bayesian networks]].{{sfn|Russell|Norvig|2021|p=8}}
|-
! 1769
| [[Wolfgang von Kempelen]] built and toured with his [[chess]]-playing [[automaton]], [[Mechanical Turk|The Turk]], which Kempelen claimed could defeat human players.{{sfn|McCorduck|2004|p=17}} The Turk was later shown to be a [[hoax]], involving a human chess player.
|-
! 1795–1805
| The simplest kind of [[artificial neural network]] is the linear network. It has been known for over two centuries as the [[method of least squares]] or [[linear regression]]. It was used as a means of finding a good rough linear fit to a set of points by [[Adrien-Marie Legendre]] (1805)<ref>{{Cite book |last=Adrien-Marie Legendre |url=http://archive.org/details/bub_gb_FRcOAAAAQAAJ |title=Nouvelles méthodes pour la détermination des orbites des comètes |date=1805 |publisher=F. Didot |others=Ghent University |language=French}}</ref> and [[Carl Friedrich Gauss]] (1795)<ref name="gauss1795">{{cite journal |first=Stephen M. |last=Stigler |year=1981 |title=Gauss and the Invention of Least Squares |journal=Ann. Stat. |volume=9 |issue=3 |pages=465–474 |doi=10.1214/aos/1176345451 |doi-access=free }}</ref> for the prediction of planetary movement.{{sfn|Schmidhuber|2022}}<ref>
{{cite book |last = Stigler
|first = Stephen M.
|author-link = Stephen Stigler
|year = 1986
|title = The History of Statistics: The Measurement of Uncertainty before 1900
|___location = Cambridge
|publisher = Harvard
|isbn = 0-674-40340-1
|url-access = registration
|url = https://archive.org/details/historyofstatist00stig
}}</ref>
|-
!1800
|[[Joseph Marie Jacquard]] created a [[Computer program|programmable]] loom, based on earlier inventions by [[Basile Bouchon]] (1725), Jean-Baptiste Falcon (1728) and [[Jacques Vaucanson]] (1740).<ref>{{Harvtxt|Russell|Norvig|2021|p=15}}; [[#RAZY|Razy, C.]] (1913), p.120.</ref> Replaceable [[punched cards]] controlled sequences of operations in the process of manufacturing [[textile]]s. This may have been the first industrial software for [[commercial enterprise]]s.{{sfn|Schmidhuber|2022}}
|-
! 1818
| [[Mary Shelley]] published the story of ''[[Frankenstein|Frankenstein; or the Modern Prometheus]]'', a fictional consideration of the ethics of creating [[sentience|sentient]] beings.{{sfn|McCorduck|2004|pp=19–25}}
|-
! 1822–1859
| [[Charles Babbage]] & [[Ada Lovelace]] worked on [[Difference engine|programmable mechanical calculating machines]].<ref>{{Harvnb|Russell|Norvig|2021|p=15}}; {{Harvnb|McCorduck|2004|pp=26–34}}</ref>
|-
! 1837
|The mathematician [[Bernard Bolzano]] made the first modern attempt to formalize [[semantics]].<ref>{{Cite journal|last=Cambier|first=Hubert|date=June 2016|title=The Evolutionary Meaning of World 3|journal=Philosophy of the Social Sciences|volume=46|issue=3|pages=242–264|doi=10.1177/0048393116641609|s2cid=148093595|issn=0048-3931}}</ref>
|-
! 1854
| [[George Boole]] set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the [[Symbolic language (mathematics)|symbolic language]] of a calculus", inventing [[Boolean algebra (logic)|Boolean algebra]].<ref>{{Harvnb|Russell|Norvig|2021|p=8}}; {{Harvnb|McCorduck|2004|pp=48–51}}</ref>
|-
! 1863
| [[Samuel Butler (novelist)|Samuel Butler]] suggested that [[Charles Darwin|Darwinian]] [[evolution]] also applies to machines, and speculated that they will one day become conscious and eventually supplant humanity.<ref>[[Project Gutenberg]] eBook [https://www.gutenberg.org/ebooks/1906 Erewhon by Samuel Butler] {{Webarchive|url=https://web.archive.org/web/20210430114133/https://www.gutenberg.org/ebooks/1906 |date=30 April 2021 }}</ref>
|}
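The 1795–1805 entry identifies linear regression with the simplest linear network. For points (x, y), the least-squares line y ≈ ax + b has the closed-form solution a = cov(x, y)/var(x), b = ȳ − ax̄. A minimal sketch in Python (the sample points are illustrative):

```python
def least_squares(xs, ys):
    """Ordinary least squares fit y ≈ a*x + b via the closed-form normal equations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
a, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(round(a, 6), round(b, 6))  # 2.0 1.0
```

This is the same computation Legendre and Gauss performed by hand for orbit prediction; only the application to learned "weights" is modern.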
== 1901–present ==
[[File:AI-History-Timeline-300dpi.jpg|thumb|AI history timeline image covering the most important events from 1900 to 2025]]
===1901–1950===
{{More citations needed section|date=February 2018}}
{| class="wikitable"
|-
!Date
! Development
|-
! 1910–1913
| [[Bertrand Russell]] and [[Alfred North Whitehead]] published ''[[Principia Mathematica]],'' which showed that all of elementary mathematics could be reduced to mechanical reasoning in [[formal logic]].{{sfn|Linsky|Irvine|2022|p=2}}
|-
! 1912–1914
| [[Leonardo Torres Quevedo]] built an automaton for chess endgames, [[El Ajedrecista]]. He was called "the 20th century's first AI pioneer".{{sfn|Schmidhuber|2022}} In his ''Essays on Automatics'' (1914), Torres published speculation about thinking and automata and introduced the idea of [[floating-point arithmetic]].<ref>{{Harvnb|McCorduck|2004|pp=59–60}}</ref><ref name=RANDELL>{{cite web|url=http://www.cs.ncl.ac.uk/publications/articles/papers/398.pdf |title=From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush |last1=Randell |first1=Brian |author-link1=Brian Randell |access-date=9 September 2013 |url-status=dead |archive-url=https://web.archive.org/web/20130921055055/http://www.cs.ncl.ac.uk/publications/articles/papers/398.pdf |archive-date=21 September 2013 }}</ref>
|-
! 1923
| [[Karel Čapek]]'s play ''[[R.U.R.]] (Rossum's Universal Robots)'' opened in London. This is the first use of the word "[[robot]]" in English.<ref>{{Harvnb|McCorduck|2004|p=25}}</ref>
|-
! 1920–1925
| [[Wilhelm Lenz]] and [[Ernst Ising]] created and analyzed the [[Ising model]] (1925)<ref name="brush67">{{cite journal |doi=10.1103/RevModPhys.39.883|title=History of the Lenz-Ising Model|year=1967|last1=Brush|first1=Stephen G.|journal=Reviews of Modern Physics|volume=39|issue=4|pages=883–893|bibcode=1967RvMP...39..883B}}</ref> which can be viewed as the first artificial [[recurrent neural network]] (RNN) consisting of neuron-like threshold elements.{{sfn|Schmidhuber|2022}} In 1972, [[Shun'ichi Amari]] made this architecture adaptive.<ref name="Amari1972">{{cite journal |last1=Amari |first1=Shun-Ichi |title=Learning patterns and pattern sequences by self-organizing nets of threshold elements|journal= IEEE Transactions |date=1972 |volume=C |issue=21 |pages=1197–1206 }}</ref>{{sfn|Schmidhuber|2022}}
|-
! 1920s and 1930s
| [[Ludwig Wittgenstein]]'s ''[[Tractatus Logico-Philosophicus]]'' (1921) inspired [[Rudolf Carnap]] and the [[Logical positivism|logical positivists]] of the [[Vienna Circle]] to use formal logic as the foundation of philosophy. However, Wittgenstein's later work in the 1940s argued that context-free symbolic logic is incoherent without human interpretation.
|-
! 1931
| [[Kurt Gödel]] encoded mathematical statements and proofs as integers, and showed that there are true theorems that are unprovable by any consistent theorem-proving machine. Thus "he identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI,"{{sfn|Schmidhuber|2022}} laying foundations of [[theoretical computer science]] and AI theory.
|-
!1935
|[[Alonzo Church]] extended Gödel's proof and showed that the [[decision problem]] of computer science does not have a general solution.<ref>{{cite journal |first=A. |last=Church |author-link=Alonzo Church |title=An unsolvable problem of elementary number theory (first presented on 19 April 1935 to the American Mathematical Society) |journal=American Journal of Mathematics |volume=58 |number=2 |year=1936 |pages=345–363 |doi=10.2307/2371045 |jstor=2371045 }}</ref> He developed the [[lambda calculus]], which would eventually be fundamental to the theory of computer languages.
|-
! 1936
| [[Konrad Zuse]] filed his patent application for a program-controlled computer.<ref>K. Zuse (1936). Verfahren zur selbsttätigen Durchführung von Rechnungen mit Hilfe von Rechenmaschinen. Patent application Z 23 139 / GMD Nr. 005/021, 1936.</ref>
|-
!1937
|[[Alan Turing]] published "[[Turing's proof|On Computable Numbers]]",<ref>{{cite journal |last1=Turing |first1=Alan Mathison |date=November 12, 1936 |title=On computable numbers, with an application to the Entscheidungsproblem |url=https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf |journal=Proceedings of the London Mathematical Society |volume=58 |pages=230–265 |doi=10.1112/plms/s2-42.1.230 |s2cid=73712}}</ref> which laid the foundations of the modern [[theory of computation]] by introducing the [[Turing machine]], a physical interpretation of "computability". He used it to confirm Gödel by proving that the [[halting problem]] is [[Undecidable problem|undecidable]].
|-
! 1940
| [[Edward Condon]] displayed [[Nimatron]], a digital machine that played [[Nim]] perfectly.
|-
! 1941
| [[Konrad Zuse]] built the first working program-controlled general-purpose computer.<ref>{{Harvnb|McCorduck|2004|pp=61–62}} and see also [https://web.archive.org/web/20100418164050/http://www.epemag.com/zuse The Life and Work of Konrad Zuse]</ref>
|-
! rowspan=2 | 1943
| [[Warren Sturgis McCulloch]] and [[Walter Pitts]] published "A Logical Calculus of the Ideas Immanent in Nervous Activity", the first mathematical description of an [[artificial neural network]].<ref>{{Harvtxt|McCorduck|2004|pp=55–56}}; {{Harvtxt|Russell|Norvig|2021|p=17}}</ref>
|-
| [[Arturo Rosenblueth]], [[Norbert Wiener]] and Julian Bigelow coined the term "[[cybernetics]]". Wiener's popular book by that name was published in 1948.
|-
! rowspan=2 | 1945
| [[Game theory]], which would prove invaluable in the progress of AI, was introduced with the 1944 book ''[[Theory of Games and Economic Behavior]]'' by [[mathematician]] [[John von Neumann]] and [[economist]] [[Oskar Morgenstern]].
|-
| [[Vannevar Bush]] published "[[As We May Think]]" ([[The Atlantic Monthly]], July 1945), a prescient vision of the future in which computers assist humans in many activities.
|-
! rowspan=2 |1948
| [[Alan Turing]] produced the report "Intelligent Machinery", regarded as the first manifesto of artificial intelligence. It introduced many concepts, including the logic-based approach to problem solving and the idea that intellectual activity consists mainly of various kinds of search, and it discussed machine learning in a way that anticipated the [[Connectionism|connectionist]] approach to AI.<ref name="turing">{{Cite book |last= Copeland|first= J (Ed.)|title= The Essential Turing: the ideas that gave birth to the computer age|publisher= Oxford: Clarendon Press|year=2004|isbn=0-19-825079-7}}</ref>
|-
| [[John von Neumann]] (quoted by [[Edwin Thompson Jaynes]]) in response to a comment at a lecture that it was impossible for a machine (at least ones created by humans) to think: "You insist that there is something a machine cannot do. If you will tell me ''precisely'' what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the [[Church–Turing thesis]] which states that any effective procedure can be simulated by a (generalized) computer.
|-
! 1949
| [[Donald O. Hebb]] develops [[Hebbian theory]], a possible algorithm for learning in [[neural networks]].{{sfn|Russell|Norvig|2021|p=17}}
|}
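The 1943 McCulloch–Pitts model in the table above describes neurons as threshold units: a unit fires when its weighted input sum reaches a threshold, and such units suffice to compute Boolean logic. A minimal sketch in Python (weights and thresholds chosen for illustration):

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch–Pitts unit: fire (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With fixed weights and thresholds, single units realize logic gates:
AND = lambda x, y: mp_neuron([x, y], [1, 1], 2)
OR  = lambda x, y: mp_neuron([x, y], [1, 1], 1)
NOT = lambda x:    mp_neuron([x],    [-1],   0)

print([AND(1, 1), OR(0, 1), NOT(1)])  # [1, 1, 0]
```

Networks of such gates can realize any Boolean function, which is the sense in which McCulloch and Pitts connected nervous activity to logical calculus; learning the weights came later, with Hebb (1949) and successors.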
===1950s===
{| class="wikitable"
|-
!Date
! Development
|-
! 1950
| [[Alan Turing]] published "[[Computing Machinery and Intelligence]]", which proposed the [[Turing test]] as a measure of machine intelligence and answered all of the most common objections to the proposal that machines can think.<ref>{{Harvtxt|Crevier|1993|pp=22–25}}; {{Harvtxt|Russell|Norvig|2021|pp=18–19}}</ref>
|-
! 1951
| The first working AI programs were written in 1951 to run on the [[Ferranti Mark 1]] machine of the [[University of Manchester]]: A checkers-playing program written by [[Christopher Strachey]] and a chess-playing program written by [[Dietrich Prinz]].{{sfn|Russell|Norvig|2021|p=17}}
|-
! 1952–1962
| [[Arthur Samuel (computer scientist)|Arthur Samuel]] ([[IBM]]) wrote the first game-playing program, for checkers ([[draughts]]), to achieve sufficient skill to challenge a respectable amateur.<ref>{{Harvtxt|Samuel|1959}}; {{Harvtxt|Russell|Norvig|2021|p=17}}
</ref> His first checkers-playing program was written in 1952, and in 1955 he created a version that [[machine learning|learned]] to play.{{sfn|Russell|Norvig|2021|p=19}}<ref>
Schaeffer, Jonathan. ''One Jump Ahead: Challenging Human Supremacy in Checkers'', 1997, 2009, Springer, {{ISBN|978-0-387-76575-4}}. Chapter 6.
</ref>
|-
! rowspan=2 | 1956
| The [[Dartmouth College]] [[Dartmouth workshop|summer AI conference]] is organized by [[John McCarthy (computer scientist)|John McCarthy]], [[Marvin Minsky]], [[Nathaniel Rochester (computer scientist)|Nathan Rochester]] of [[International Business Machines|IBM]] and [[Claude Shannon]]. McCarthy coins the term ''artificial intelligence'' for the conference.<ref>
{{Harvtxt|Russell|Norvig|2021|p=18}}</ref><ref>
{{cite news|last1=Novet|first1=Jordan|title=Everyone keeps talking about A.I.—here's what it really is and why it's so hot now|url=https://www.cnbc.com/2017/06/17/what-is-artificial-intelligence.html|access-date=16 February 2018|work=CNBC|date=17 June 2017|archive-date=16 February 2018|archive-url=https://web.archive.org/web/20180216204448/https://www.cnbc.com/2017/06/17/what-is-artificial-intelligence.html|url-status=live}}</ref>
|-
| The first demonstration of the [[Logic Theorist]] (LT) written by [[Allen Newell]], [[Cliff Shaw]] and [[Herbert A. Simon]] ([[Carnegie Institute of Technology]], now [[Carnegie Mellon University]] or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim. This program has been described as the first deliberately engineered to perform automated reasoning, and would eventually prove 38 of the first 52 theorems in [[Bertrand Russell|Russell]] and [[Alfred North Whitehead|Whitehead]]'s ''[[Principia Mathematica]]'', and find new and more elegant proofs for some.<ref>{{Harvnb|McCorduck|2004|pp=123–125}}, {{Harvnb|Crevier|1993|pp=44–46}} and {{Harvnb|Russell|Norvig|2021|p=18}}</ref> Simon said that they had "solved the venerable [[mind–body problem]], explaining how a system composed of matter can have the properties of mind".<ref>Quoted in {{Harvnb|Crevier|1993|p=46}} and {{Harvnb|Russell|Norvig|2021|p=18}}</ref>
|-
! rowspan=3 | 1958
| John McCarthy ([[Massachusetts Institute of Technology]] or MIT) invented the [[Lisp (programming language)|Lisp programming language]].{{sfn|Russell|Norvig|2021|p=19}}
|-
| [[Herbert Gelernter]] and [[Nathaniel Rochester (computer scientist)|Nathan Rochester]] (IBM) described a [[Proof assistant|theorem prover]] in [[geometry]].{{sfn|Russell|Norvig|2021|p=19}} It exploited a semantic model of the ___domain in the form of diagrams of "typical" cases.{{citation needed|date=August 2023}}
|-
| [[Teddington Conference]] on the Mechanization of Thought Processes was held in the UK and among the papers presented were John McCarthy's "Programs with Common Sense" (which proposed the [[Advice taker]] application as a primary research goal),{{sfn|Russell|Norvig|2021|p=19}} [[Oliver Selfridge]]'s "Pandemonium", and [[Marvin Minsky]]'s "Some Methods of [[Heuristic (computer science)|Heuristic]] Programming and Artificial Intelligence".
|-
! rowspan=2 | 1959
| The [[General Problem Solver]] (GPS) was created by Newell, Shaw and Simon while at CMU.{{sfn|Russell|Norvig|2021|p=19}}
|-
| [[John McCarthy (computer scientist)|John McCarthy]] and [[Marvin Minsky]] founded the [[MIT Computer Science and Artificial Intelligence Laboratory|MIT AI Lab]].{{sfn|Russell|Norvig|2021|p=19}}
|-
! Late 1950s, early 1960s
| [[Margaret Masterman]] and colleagues at [[University of Cambridge]] design [[lexical semantics|semantic net]]s for [[machine translation]].{{citation needed|reason=notability, undue weight|date=August 2023}}<!-- especially as to notability. Semantic nets were proposed earlier and developed later. Is this undue weight?-->
|}
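The game-playing programs of this decade, from Strachey's and Samuel's checkers players to Prinz's chess work, rested on searching ahead in the game tree and scoring positions with an evaluation function, the scheme formalized as [[minimax]]. A minimal sketch in Python over an explicit, illustrative game tree (leaf numbers stand in for evaluation scores; this is not any historical program's code):

```python
def minimax(node, maximizing):
    """Classic minimax: leaves carry evaluation scores; internal nodes take the
    maximum on the searching player's turn and the minimum on the opponent's."""
    if isinstance(node, (int, float)):  # leaf: heuristic evaluation of the position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two-ply lookahead: we pick a move, then the opponent replies adversarially.
tree = [[3, 12], [2, 9], [14, 1]]
print(minimax(tree, True))  # 3: the best outcome the first player can guarantee
```

Samuel's contribution on top of this search was to adjust the evaluation function automatically from play, an early form of machine learning.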
===1960s===
{| class="wikitable"
|-
!Date
! Development
|-
! 1960s
| [[Ray Solomonoff]] lays the foundations of a [[mathematical theory]] of AI, introducing universal [[Bayesian method]]s for inductive inference and prediction.
|-
! 1960
| [[J. C. R. Licklider]] published "[[Man-Computer Symbiosis]]".
|-
! 1961
| James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic [[integral|integration]] program, SAINT, which solved [[calculus]] problems at the college freshman level.
|-
! rowspan=2 | 1963
| Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same [[analogy]] problems as are given on [[Intelligence quotient|IQ]] tests.
|-
| [[Edward Feigenbaum]] and [[Julian Feldman]] published ''Computers and Thought'', the first collection of articles about artificial intelligence.<ref>{{cite book |editor1-last=Feigenbaum |editor1-first=Edward |editor2-last=Feldman |editor2-first=Julian |title=Computers and thought : a collection of articles |date=1963 |publisher=McGraw-Hill |___location=New York |edition=1 |oclc=593742426 }}</ref><ref>{{cite web |title=This week in The History of AI at AIWS.net – Edward Feigenbaum and Julian Feldman published "Computers and Thought" |url=https://aiws.net/the-history-of-ai/this-week-in-the-history-of-ai-at-aiws-net-edward-feigenbaum-and-julian-feldman-published-computers-and-thought-2/ |website=AIWS.net |access-date=5 May 2022 |archive-date=24 April 2022 |archive-url=https://web.archive.org/web/20220424120050/https://aiws.net/the-history-of-ai/this-week-in-the-history-of-ai-at-aiws-net-edward-feigenbaum-and-julian-feldman-published-computers-and-thought-2/ |url-status=live }}</ref><ref>{{cite web |title=Feigenbaum & Feldman Issue "Computers and Thought," the First Anthology on Artificial Intelligence |url=https://www.historyofinformation.com/detail.php?entryid=4329 |website=History of Information |access-date=5 May 2022 |archive-date=5 May 2022 |archive-url=https://web.archive.org/web/20220505145849/https://www.historyofinformation.com/detail.php?entryid=4329 |url-status=live }}</ref><ref>{{cite book |last1=Feigenbaum |first1=Edward A. |last2=Feldman |first2=Julian |title=Computers and Thought |url=https://dl.acm.org/doi/book/10.5555/601134 |via=[[Association for Computing Machinery]] Digital Library |publisher=McGraw-Hill, Inc. |access-date=5 May 2022 |date=1963 |isbn=9780070203709 |archive-date=5 May 2022 |archive-url=https://web.archive.org/web/20220505145849/https://dl.acm.org/doi/book/10.5555/601134 |url-status=live }}</ref>
|-
! 1964
| Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, [[Project MAC]]) showed that computers can understand natural language well enough to solve [[algebra]] [[word problem (mathematics education)|word problems]] correctly.
|-
! rowspan=5 | 1965
| [[Alexey Ivakhnenko]] and Valentin Lapa developed the first [[deep learning]] algorithm for [[multilayer perceptron]]s in the [[Soviet Union]].<ref name="ivak1965">{{cite book|url={{google books |plainurl=y |id=FhwVNQAACAAJ}}|title=Cybernetic Predicting Devices|last=Ivakhnenko|first=A. G.|publisher=CCM Information Corporation|year=1973}}</ref><ref name="ivak1967">{{cite book|url={{google books |plainurl=y |id=rGFgAAAAMAAJ}}|title=Cybernetics and forecasting techniques|last2=Grigorʹevich Lapa|first2=Valentin|publisher=American Elsevier Pub. Co.|year=1967|first1=A. G.|last1=Ivakhnenko}}</ref>{{sfn|Schmidhuber|2022}}
|-
| [[Lotfi A. Zadeh]] at U.C. Berkeley publishes his first paper introducing [[fuzzy logic]], "Fuzzy Sets" (Information and Control 8: 338–353).
|-
| J. Alan Robinson invented a mechanical [[mathematical proof|proof]] procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.
|-
| [[Joseph Weizenbaum]] (MIT) built [[ELIZA]], an [[interactive program]] that carries on a dialogue in [[English language|English]] on any topic. It was a popular toy at AI centers on the [[ARPANET]] when a version that "simulated" the dialogue of a [[psychotherapy|psychotherapist]] was programmed.
|-
| [[Edward Feigenbaum]] initiated [[Dendral]], a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first [[expert system]].
|-
! rowspan=4 | 1966
| Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated [[Semantic network|semantic nets]].
|-
| Machine Intelligence<ref>{{cite web |url=http://www.cs.york.ac.uk/mlg/MI/mi.html |title=The Machine Intelligence series |website=www.cs.york.ac.uk |url-status=dead |archive-url=https://web.archive.org/web/19991105213013/http://www.cs.york.ac.uk/mlg/MI/mi.html |archive-date=1999-11-05}}</ref> workshop at Edinburgh – the first of an influential annual series organized by [[Donald Michie]] and others.
|-
| The [[ALPAC]] report, a negative assessment of machine translation, killed much work in [[natural language processing]] (NLP) for many years.
|-
| The [[Dendral]] program (Edward Feigenbaum, [[Joshua Lederberg]], Bruce Buchanan, Georgia Sutherland at [[Stanford University]]) was demonstrated interpreting mass spectra of organic chemical compounds. It was the first successful knowledge-based program for scientific reasoning.
|-
! 1967
| [[Shun'ichi Amari]] was the first to use [[stochastic gradient descent]] for [[deep learning]] in [[multilayer perceptron]]s.<ref name="Amari1967">{{cite journal |last1=Amari |first1=Shun'ichi |author-link=Shun'ichi Amari|title=A theory of adaptive pattern classifier|journal= IEEE Transactions |date=1967 |volume=EC |issue=16 |pages=279–307}}</ref> In computer experiments conducted by his student Saito, a five-layer MLP with two modifiable layers learned useful [[Knowledge representation|internal representations]] to classify non-linearly separable pattern classes.{{sfn|Schmidhuber|2022}}
|-
! rowspan=3 | 1968
| [[Joel Moses]] (PhD work at MIT) demonstrated the power of [[symbolic mathematics|symbolic reasoning]] for integration problems in the [[Macsyma]] program. First successful knowledge-based program in [[mathematics]].
|-
| [[Richard Greenblatt (programmer)|Richard Greenblatt]] at MIT built a knowledge-based [[computer chess|chess-playing program]], [[Mac Hack]], that was good enough to achieve a class-C rating in tournament play.
|-
| Wallace and Boulton's program, Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering) uses the Bayesian [[minimum message length]] criterion, a mathematical realisation of [[Occam's razor]].
|-
! rowspan=6 | 1969
| [[SRI International|Stanford Research Institute]] (SRI): [[Shakey the robot]] demonstrated the combination of [[animal locomotion|locomotion]], [[perception]] and [[problem solving]].
|-
| [[Roger Schank]] (Stanford) defined the [[concept]]ual dependency model for [[natural language understanding]]. Later developed (in PhD dissertations at [[Yale University]]) for use in story understanding by [[Robert Wilensky]] and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
|-
| Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program and the basis of many PhD dissertations since, such as those of Bran Boguraev and David Carter at Cambridge.
|-
| First International Joint Conference on Artificial Intelligence ([[IJCAI]]) held at Stanford.
|-
| Marvin Minsky and [[Seymour Papert]] publish ''[[Perceptrons (book)|Perceptron]]s'', demonstrating previously unrecognized limits of two-layer feed-forward perceptrons. This book is considered by some to mark the beginning of the [[AI winter]] of the 1970s, a failure of confidence and funding for AI. However, by the time the book came out, methods for training [[multilayer perceptrons]] by [[deep learning]] were already known ([[Alexey Ivakhnenko]] and Valentin Lapa, 1965; [[Shun'ichi Amari]], 1967).{{sfn|Schmidhuber|2022}} Significant progress in the field continued (see below).
|-
| McCarthy and Hayes started the discussion about the [[frame problem]] with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
|}
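Zadeh's 1965 "Fuzzy Sets" paper, noted in the table above, replaced the yes/no membership of classical sets with a degree of membership in [0, 1], combined by max (union), min (intersection) and complement. A minimal sketch of that idea; the triangular membership functions and the set names ''warm'' and ''hot'' are illustrative assumptions, not from Zadeh's paper:

```python
# Sketch of Zadeh-style fuzzy sets: membership is a degree in [0, 1],
# and his original operators are union = max, intersection = min.

def triangular(a, b, c):
    """Return a triangular membership function that peaks at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

warm = triangular(10.0, 20.0, 30.0)  # hypothetical fuzzy set "warm" (deg. C)
hot = triangular(25.0, 35.0, 45.0)   # hypothetical fuzzy set "hot"

def fuzzy_union(mu1, mu2):
    return lambda x: max(mu1(x), mu2(x))

def fuzzy_intersection(mu1, mu2):
    return lambda x: min(mu1(x), mu2(x))

print(warm(20.0))                    # → 1.0 (fully "warm" at the peak)
print(fuzzy_union(warm, hot)(27.0))  # → 0.3 (max of warm=0.3, hot=0.2)
```

At 27 degrees the point is partly "warm" (0.3) and partly "hot" (0.2), so its membership in "warm or hot" is the larger of the two, with no sharp boundary between the sets.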
===1970s===
{| class="wikitable"
|-
!Date
! Development
|-
! Early 1970s
| Jane Robinson and Don Walker established an influential [[Natural Language Processing]] group at SRI.<ref>{{Cite journal |url=https://direct.mit.edu/coli/article/41/4/723/1509/Jane-J-Robinson |access-date=2024-01-23 |journal=Computational Linguistics|doi=10.1162/COLI_a_00235 |title=Jane J. Robinson |date=2015 |last1=Grosz |first1=Barbara J. |last2=Hajicova |first2=Eva |last3=Joshi |first3=Aravind |volume=41 |issue=4 |pages=723–726 |doi-access=free }}</ref>
|-
! rowspan=4 | 1970
| [[Seppo Linnainmaa]] publishes the reverse mode of [[automatic differentiation]]. This method later became known as [[backpropagation]] and is heavily used to train [[artificial neural networks]].<ref>Linnainmaa, Seppo (1970). ''Algoritmin kumulatiivinen pyöristysvirhe yksittäisten pyöristysvirheiden Taylor-kehitelmänä'' [''The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors''] (PDF) (Thesis) (in Finnish). pp. 6–7.</ref>
|-
| Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for [[computer assisted instruction]] based on semantic nets as the representation of knowledge.
|-
| Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding.
|-
| [[Patrick Winston]]'s PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
|-
! rowspan=2 | 1971
| [[Terry Winograd]]'s PhD thesis ([[MIT]]) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, [[SHRDLU]], with a robot arm that carried out instructions typed in English.
|-
| Work on the Boyer-Moore theorem prover started in [[Edinburgh]].<ref>{{cite web|url=http://www.cs.utexas.edu/users/moore/best-ideas/nqthm/|title=The Boyer-Moore Theorem Prover|access-date=15 March 2015|archive-date=23 September 2015|archive-url=https://web.archive.org/web/20150923223027/http://www.cs.utexas.edu/users/moore/best-ideas/nqthm/|url-status=live}}</ref>
|-
! rowspan=2 | 1972
| [[Prolog]] programming language developed by [[Alain Colmerauer]].
|-
| Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
|-
! rowspan=2 | 1973
| The Assembly Robotics Group at [[University of Edinburgh]] builds Freddy Robot, capable of using [[visual perception]] to locate and assemble models. (See [[Freddy II|Edinburgh ''Freddy'' Assembly Robot]]: a versatile computer-controlled assembly system.)
|-
| The [[Lighthill report]] gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.
|-
! 1974
| [[Edward H. Shortliffe|Ted Shortliffe]]'s PhD dissertation on the [[MYCIN]] program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of [[expert system]] development, especially commercial systems.
|-
! rowspan=4 | 1975
| Earl Sacerdoti developed techniques of [[partial-order planning]] in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
|-
| [[Austin Tate]] developed the Nonlin hierarchical planning system able to search a space of [[partial plan]]s characterised as alternative approaches to the underlying goal structure of the plan.
|-
| Marvin Minsky published his widely read and influential article on [[Frame (artificial intelligence)|Frames]] as a representation of knowledge, in which many ideas about [[Schema (psychology)|schemas]] and [[semantic link]]s are brought together.
|-
| The Meta-Dendral learning program produced new results in [[chemistry]] (some rules of [[mass spectrometry]]), the first scientific discoveries by a computer to be published in a refereed journal.
|-
! rowspan=2 | Mid-1970s
| [[Barbara Grosz]] (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, [[Bonnie Webber]] and [[Candace Sidner]] developed the notion of "centering", used in establishing focus of [[discourse]] and anaphoric references in [[Natural language processing]].
|-
| [[David Marr (psychologist)|David Marr]] and [[MIT]] colleagues describe the "primal sketch" and its role in [[visual perception]].
|-
! rowspan=3 | 1976
| [[Douglas Lenat]]'s [[Automated Mathematician|AM program]] (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures).
|-
| Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
|-
|Stevo Bozinovski and Ante Fulgosi introduced the [[transfer learning]] method in artificial intelligence, based on the psychology of learning.<ref>Stevo Bozinovski and Ante Fulgosi (1976). "The influence of pattern similarity and transfer learning upon training of a base perceptron" (original in Croatian) Proceedings of Symposium Informatica 3-121-5, Bled.</ref><ref>Stevo Bozinovski (2020) "Reminder of the first paper on transfer learning in neural networks, 1976". Informatica 44: 291–302.</ref>
|-
! rowspan=3 | 1978
| [[Tom M. Mitchell|Tom Mitchell]], at Stanford, invented the concept of [[Version space]]s for describing the [[Candidate solution|search space]] of a concept formation program.
|-
| [[Herbert A. Simon]] wins the [[Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel|Nobel Prize in Economics]] for his theory of [[bounded rationality]], whose notion of "[[satisficing]]" became one of the cornerstones of AI.
|-
| The MOLGEN program, written at [[Stanford University|Stanford]] by Mark Stefik and Peter Friedland, demonstrated that an [[object-oriented programming]] representation of knowledge can be used to plan gene-[[cloning]] experiments.
|-
! rowspan=6 | 1979
| Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of [[MYCIN]]'s representation of knowledge and style of reasoning in his [[EMYCIN]] program, the model for many commercial expert system "shells".
|-
| Jack Myers and Harry Pople at [[University of Pittsburgh]] developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' [[clinic]]al knowledge.
|-
| [[Cordell Green]], David Barstow, [[Elaine Kant]] and others at Stanford demonstrated the CHI system for [[automatic programming]].
|-
| The Stanford Cart, built by [[Hans Moravec]], becomes the first computer-controlled, [[autonomous robot|autonomous vehicle]] when it successfully traverses a chair-filled room and circumnavigates the [[Stanford AI Lab]].
|-
| BKG, a backgammon program written by [[Hans Berliner]] at [[Carnegie Mellon University|CMU]], defeats the reigning world champion (in part via luck).
|-
| Drew McDermott and Jon Doyle at [[MIT]], and John McCarthy at Stanford begin publishing work on [[non-monotonic logic]]s and formal aspects of truth maintenance.
|-
! Late 1970s
| Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.
|}
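Linnainmaa's 1970 reverse mode of automatic differentiation, listed in the table above, propagates derivatives backward through a recorded computation, which is the rule later called backpropagation. A toy scalar sketch under my own naming (the `Var` class and its methods are illustrative, not Linnainmaa's formulation):

```python
# Minimal reverse-mode automatic differentiation for scalar expressions.
# Each Var remembers its parents and the local partial derivative toward them;
# backward() propagates the output's derivative back to every input.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent Var, local partial derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), then push the chain rule upstream.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # → 5.0 3.0
```

Note that `x` is reached along two paths (through `x*y` and directly), and the two contributions are summed, which is exactly the accumulation step that makes reverse mode efficient for training neural networks.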
===1980s===
{| class="wikitable"
|-
!Date
! Development
|-
! 1980s
| [[Lisp machine]]s developed and marketed. First [[expert system]] shells and commercial applications.
|-
! 1980
| First National Conference of the [[American Association for Artificial Intelligence]] (AAAI) held at Stanford.
|-
! rowspan=2 |1981
| [[Danny Hillis]] designs the [[Connection Machine]], which utilizes [[parallel computing]] to bring new power to AI, and to computation in general. (He later founded [[Thinking Machines Corporation]].)
|-
|Stevo Bozinovski and Charles Anderson carry out first concurrent programming (task parallelism) in neural network research. A program, "CAA Controller" written and executed by Bozinovski interacts with the program "Inverted Pendulum Dynamics" written and executed by Anderson, using VAX/VMS mailboxes as a way of inter-program communication. The CAA controller learns to balance the simulated inverted pendulum.<ref>Bozinovski, Stevo (1981) "Inverted pendulum control program" ANW Memo, Adaptive Networks Group, Computer and Information Science Department, University of Massachusetts at Amherst, December 10, 1981</ref><ref>Bozinovski, Stevo and Anderson, Charles (1983) "Associative memory as controller of an unstable system: Simulation of a learning control" Proc. IEEE Mediterranean Electrotechnical Conference, C5.11., Athens, Greece"</ref><ref>Bozinovski, Stevo (1995) "Adaptive parallel distributed processing: Neural and genetic agents: Neuro-genetic agents and a structural theory of self-reinforcement learning systems" CMPSCI Technical Report 95-107, Computer Science Department, University of Massachusetts at Amherst</ref>
|-
! 1982
| The [[Fifth Generation Computer Systems project]] (FGCS), an initiative by Japan's [[Ministry of International Trade and Industry]], begins, with the goal of creating a "fifth generation computer" (see [[history of computing hardware]]) that would perform much of its computation using massive parallelism.
|-
! rowspan=2 | 1983
| John Laird and Paul Rosenbloom, working with [[Allen Newell]], complete CMU dissertations on [[Soar (cognitive architecture)|Soar]] (program).
|-
| [[James F. Allen (computer scientist)|James F. Allen]] invents the Interval Calculus, the first widely used formalization of temporal events.
|-
! Mid-1980s
| Neural networks become widely used with the [[backpropagation]] [[algorithm]], also known as the reverse mode of [[automatic differentiation]], published by [[Seppo Linnainmaa]] in 1970 and applied to neural networks by [[Paul Werbos]].
|-
! 1985
| The autonomous drawing program, [[AARON]], created by [[Harold Cohen (artist)|Harold Cohen]], is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
|-
! rowspan=2 | 1986
| The team of [[Ernst Dickmanns]] at [[Bundeswehr University of Munich]] builds the first robot cars, driving up to 55 mph on empty streets.
|-
| [[Barbara Grosz]] and [[Candace Sidner]] create the first computational model of [[discourse]], establishing the field of research.<ref>{{cite journal|last1=Grosz|first1=Barbara|last2=Sidner|first2=Candace L.|author2-link=Candace Sidner|title=Attention, Intentions, and the Structure of Discourse|journal=Computational Linguistics|date=1986|volume=12|issue=3|pages=175–204|url=https://dash.harvard.edu/handle/1/2579648|access-date=5 May 2017|archive-date=10 September 2017|archive-url=https://web.archive.org/web/20170910220353/https://dash.harvard.edu/handle/1/2579648|url-status=live}}</ref>
|-
! rowspan=3 | 1987
| Marvin Minsky published ''[[The Society of Mind]]'', a theoretical description of the mind as a collection of cooperating [[Intelligent agent|agents]]. He had been lecturing on the idea for years before the book came out (cf. Doyle 1983).<ref name="Henderson2007">{{cite book |author=Harry Henderson |title=Artificial Intelligence: Mirrors for the Mind |year=2007 |publisher=Infobase Publishing |___location=NY |isbn=978-1-60413-059-1 |chapter=Chronology |chapter-url=https://books.google.com/books?id=vKmIiICDIwgC&pg=PA165 |access-date=11 April 2015 |archive-date=15 March 2023 |archive-url=https://web.archive.org/web/20230315174512/https://books.google.com/books?id=vKmIiICDIwgC&pg=PA165 |url-status=live }}</ref>
|-
| Around the same time, [[Rodney Brooks]] introduced the [[subsumption architecture]] and [[behavior-based robotics]] as a more minimalist modular model of natural intelligence; [[Nouvelle AI]].
|-
| Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc. Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies and co-authored by Alistair Davidson and Mary Chung, founders of the firm with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.<ref>{{cite journal|url=http://www.emeraldinsight.com/journals.htm?articleid=1696959&show=pdf|title=EmeraldInsight|journal=Planning Review |date=June 1989 |volume=17 |issue=6 |pages=22–27 |doi=10.1108/eb054275 |access-date=15 March 2015|archive-date=2 February 2014|archive-url=https://web.archive.org/web/20140202112925/http://www.emeraldinsight.com/journals.htm?articleid=1696959&show=pdf|url-status=live |last1=Cook |first1=Donald A. |last2=Sterling |first2=John W. }}</ref>
|-
! rowspan="2" | 1989
| The development of [[metal–oxide–semiconductor]] (MOS) [[Very-large-scale integration]] (VLSI), in the form of [[CMOS|complementary MOS]] (CMOS) technology, enabled the development of practical [[Neural network (machine learning)|artificial neural network]] (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book ''Analog VLSI Implementation of Neural Systems'' by Carver A. Mead and Mohammed Ismail.<ref name="Mead">{{cite book|url=http://fennetic.net/irc/Christopher%20R.%20Carroll%20Carver%20Mead%20Mohammed%20Ismail%20Analog%20VLSI%20Implementation%20of%20Neural%20Systems.pdf|title=Analog VLSI Implementation of Neural Systems|date=8 May 1989|publisher=[[Kluwer Academic Publishers]]|isbn=978-1-4613-1639-8|last1=Mead|first1=Carver A.|last2=Ismail|first2=Mohammed|series=The Kluwer International Series in Engineering and Computer Science|volume=80|___location=Norwell, MA|doi=10.1007/978-1-4613-1639-8|access-date=24 January 2020|archive-date=6 November 2019|archive-url=https://web.archive.org/web/20191106154442/http://fennetic.net/irc/Christopher%20R.%20Carroll%20Carver%20Mead%20Mohammed%20Ismail%20Analog%20VLSI%20Implementation%20of%20Neural%20Systems.pdf|url-status=live}}</ref>
|-
| Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network), which was used in the [[Navlab]] program.
|}
===1990s===
{| class="wikitable"
|-
!Date
! Development
|-
! 1990s
| Major advances in all areas of AI, with significant demonstrations in machine learning, [[computer assisted instruction|intelligent tutoring]], case-based reasoning, multi-agent planning, [[scheduling (computing)|scheduling]], uncertain reasoning, [[data mining]], natural language understanding and translation, vision, [[virtual reality]], games, and other topics.
|-
! Early 1990s
| [[TD-Gammon]], a [[backgammon]] program written by Gerry Tesauro, demonstrates that [[reinforcement learning]] is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.
|-
! 1991
| [[Dynamic Analysis and Replanning Tool|DART]] scheduling application deployed in the first [[Gulf War]] repaid [[DARPA|DARPA's]] 30 years of investment in AI research.<ref>[https://doi.ieeecomputersociety.org/10.1109/MIS.2002.1005635 DART: Revolutionizing Logistics Planning]</ref>
|-
! 1992
| [[Carol Stoker]] and NASA Ames robotics team explore marine life in Antarctica with an undersea robot [[Telepresence]] [[Remotely Operated Vehicle|ROV]] operated from the ice near McMurdo Bay, Antarctica and remotely via satellite link from Moffett Field, California.<ref>{{Cite journal |url=http://adsabs.harvard.edu/abs/1995SPIE.2352..288S |title=From Antarctica to space: use of telepresence and virtual reality in control of a remote underwater vehicle |bibcode=1995SPIE.2352..288S |access-date=17 July 2019 |archive-date=17 July 2019 |archive-url=https://web.archive.org/web/20190717054452/http://adsabs.harvard.edu/abs/1995SPIE.2352..288S |url-status=live |last1=Stoker |first1=Carol R. |editor-first1=William J. |editor-first2=Wendell H. |editor-last1=Wolfe |editor-last2=Chun |journal=Mobile Robots IX |year=1995 |volume=2352 |page=288 |doi=10.1117/12.198976 |s2cid=128633069 }}</ref>
|-
! rowspan=3 | 1993
| [[Ian Horswill]] extended [[behavior-based robotics]] by creating Polly, the first robot to navigate using [[Computer vision|vision]] and operate at animal-like speeds (1 meter/second).
|-
| [[Rodney Brooks]], [[Lynn Andrea Stein]] and [[Cynthia Breazeal]] started the widely publicized [[MIT Cog project]] with numerous collaborators, in an attempt to build a [[humanoid robot]] child in just five years.
|-
| ISX corporation wins "DARPA contractor of the year"<ref>{{cite web|url=http://www.isx.com/projects/drpi.php|archive-url=https://web.archive.org/web/20060905171618/http://www.isx.com/projects/drpi.php|title=ISX Corporation|archive-date=5 September 2006|access-date=15 March 2015}}</ref> for the [[Dynamic Analysis and Replanning Tool]] (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.<ref>{{cite web|url=http://www.aaai.org/AITopics/html/military.html|title=DART overview|access-date=24 July 2007|archive-date=30 November 2006|archive-url=https://web.archive.org/web/20061130140821/http://www.aaai.org/AITopics/html/military.html|url-status=live}}</ref>
|-
! rowspan=4 | 1994
| [[Lotfi A. Zadeh]] at U.C. Berkeley creates "[[soft computing]]"<ref>Zadeh, Lotfi A., "Fuzzy Logic, Neural Networks, and Soft Computing," Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77-84.</ref> and builds a world network of research fusing neural science and [[neural net]] systems, [[fuzzy set]] theory and [[fuzzy systems]], evolutionary algorithms, [[genetic programming]], and [[chaos theory]] and chaotic systems.
|-
| With passengers on board, the twin robot cars [[VaMP]] and VITA-2 of [[Ernst Dickmanns]] and [[Daimler-Benz]] drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
|-
| [[English draughts]] ([[Checkers (game)|checkers]]) world champion [[Marion Tinsley|Tinsley]] resigned a match against computer program [[Chinook (draughts player)|Chinook]]. Chinook defeated 2nd highest rated player, [[Don Lafferty|Lafferty]]. Chinook won the USA National Tournament by the widest margin ever.
|-
| [[Cindy Mason]] at [[NASA]] organizes the First [[AAAI]] Workshop on AI and the Environment.<ref>{{Cite web|url=http://www.aiandenvironment.org/aaai-first-ai-env-workshop.html|title=AAAI-first-ai-env-workshop.HTML|access-date=28 July 2019|archive-date=28 July 2019|archive-url=https://web.archive.org/web/20190728221404/http://www.aiandenvironment.org/aaai-first-ai-env-workshop.html|url-status=live}}</ref>
|-
! rowspan=3 | 1995
| [[Cindy Mason]] at [[NASA]] organizes the First International [[IJCAI]] Workshop on AI and the Environment.<ref>{{Cite web|url=http://www.aiandenvironment.org/ijcai-first-ai-env-workshop.html|title=Ijcai-first-ai-env-workshop|access-date=28 July 2019|archive-date=28 July 2019|archive-url=https://web.archive.org/web/20190728221441/http://www.aiandenvironment.org/ijcai-first-ai-env-workshop.html|url-status=live}}</ref>
|-
| "No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for {{convert|2797|mi|km}} of the {{convert|2849|mi|km}}. Throttle and brakes were controlled by a human driver.<ref>{{cite web |url=https://www.cs.cmu.edu/afs/cs/user/tjochem/www/nhaa/nhaa_home_page.html |title=No Hands Across America Home Page |first1=Todd M. |last1=Jochem |first2=Dean A. |last2=Pomerleau |access-date=2015-10-20 |archive-date=27 September 2019 |archive-url=https://web.archive.org/web/20190927114407/http://www.cs.cmu.edu/afs/cs/user/tjochem/www/nhaa/nhaa_home_page.html |url-status=live }}</ref><ref>{{cite news |url=http://www.roboticstrends.com/article/back_to_the_future_autonomous_driving_in_1995 |title=Back to the Future: Autonomous Driving in 1995 |first=Todd |last=Jochem |work=Robotic Trends |access-date=2015-10-20 |archive-date=29 December 2017 |archive-url=https://web.archive.org/web/20171229081126/http://www.roboticstrends.com/article/back_to_the_future_autonomous_driving_in_1995 |url-status=live }}</ref>
|-
| One of [[Ernst Dickmanns]]' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from [[Munich]] to [[Copenhagen]] and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (a safety driver took over only in a few critical situations). Active vision was used to deal with rapidly changing street scenes.
|-
!1996
|[[Steve Grand (roboticist)|Steve Grand]], roboticist and computer scientist, develops and releases [[Creatures (video game series)|''Creatures'']], a popular simulation of artificial life-forms with simulated biochemistry, neurology with learning algorithms and inheritable digital DNA.
|-
! rowspan=4 | 1997
| The [[IBM Deep Blue|Deep Blue]] chess machine ([[IBM]]) defeats the (then) world [[chess]] champion, [[Garry Kasparov]].
|-
| First official [[RoboCup]] football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators.
|-
|Computer [[Reversi|Othello]] program [[Logistello]] defeated the world champion Takeshi Murakami with a score of 6–0.
|-
|[[Long short-term memory]] (LSTM) was published in ''Neural Computation'' by [[Sepp Hochreiter]] and [[Juergen Schmidhuber]].<ref name="lstm1997">{{Cite journal|last1=Hochreiter|first1=Sepp|author-link1=Sepp Hochreiter|author-link2=Jürgen Schmidhuber|last2=Schmidhuber|first2=Jürgen|s2cid=1915014|date=1 November 1997|title=Long Short-Term Memory|journal=Neural Computation|volume=9|issue=8|pages=1735–1780|doi=10.1162/neco.1997.9.8.1735|issn=0899-7667|pmid=9377276}}</ref>
|-
! rowspan=4 | 1998
| [[Tiger Electronics]]' [[Furby]] is released, and becomes the first successful attempt at producing a type of AI to reach a [[domestic robot|domestic environment]].
|-
| [[Tim Berners-Lee]] published his [[Semantic Web|Semantic Web Road map]] paper.<ref>{{cite web |url=http://www.w3.org/DesignIssues/Semantic.html |title=Semantic Web roadmap |publisher=W3.org |access-date=24 November 2008 |archive-date=6 December 2003 |archive-url=https://web.archive.org/web/20031206090813/http://www.w3.org/DesignIssues/Semantic.html |url-status=live }}</ref>
|-
| [[Ulises Cortés]] and [[Miquel Sànchez-Marrè]] organize the first Environment and AI Workshop in Europe, at [[European Conference on Artificial Intelligence|ECAI]]: "Binding Environmental Sciences and Artificial Intelligence".<ref>{{Cite journal|url = https://www.academia.edu/17759949|title = Binding Environmental Sciences and Artificial Intelligence|last1 = Mason|first1 = Cindy|last2 = Sànchez-Marrè|first2 = Miquel|journal = Environmental Modelling & Software|year = 1999|volume = 14|issue = 5|pages = 335–337|access-date = 27 October 2021|archive-date = 15 March 2023|archive-url = https://web.archive.org/web/20230315174527/https://www.academia.edu/17759949|url-status = live}}</ref><ref>{{Cite web|url=http://www.cs.upc.edu/~webia/besai/besai.html|title=BESAI - Homepage|access-date=12 August 2019|archive-date=4 July 2019|archive-url=https://web.archive.org/web/20190704080020/http://www.cs.upc.edu/~webia/besai/besai.html|url-status=live}}</ref>
|-
| [[Leslie P. Kaelbling]], [[Michael L. Littman]], and Anthony Cassandra introduce [[Partially observable Markov decision process|POMDP]]s and a scalable method for solving them to the AI community, jumpstarting widespread use in robotics and [[automated planning and scheduling]]<ref>{{cite journal|last1=Kaelbling|first1=Leslie Pack|last2=Littman|first2=Michael L|last3=Cassandra|first3=Anthony R.|title=Planning and acting in partially observable stochastic domains|journal=Artificial Intelligence|date=1998|volume=101|issue=1–2|pages=99–134|url=http://people.csail.mit.edu/lpk/papers/aij98-pomdp.pdf|access-date=5 May 2017|doi=10.1016/s0004-3702(98)00023-x|doi-access=free|archive-date=17 May 2017|archive-url=https://web.archive.org/web/20170517053037/http://people.csail.mit.edu/lpk/papers/aij98-pomdp.pdf|url-status=live}}</ref>
|-
! 1999
| [[Sony]] introduces the [[AIBO]], an improved domestic robot similar to the Furby; it becomes one of the first artificially intelligent "pets" that is also [[autonomous robot|autonomous]].
|-
! rowspan=3 | Late 1990s
| [[Web crawler]]s and other AI-based [[information extraction]] programs become essential in widespread use of the [[World Wide Web]].
|-
| Demonstration of an Intelligent room and Emotional Agents at [[Massachusetts Institute of Technology|MIT's]] AI Lab.
|-
| Initiation of work on the [[Project Oxygen|Oxygen architecture]], which connects mobile and stationary computers in an adaptive [[computer network|network]].
|}
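The 1997 LSTM entry above can be sketched as a single recurrent step: gates decide what the cell state keeps, absorbs and exposes, which is what lets error signals survive long time lags. A minimal illustration, assuming random placeholder weights and a per-gate scalar parameterization of my own choosing (the forget gate shown was a later addition by Gers et al., 1999, not part of the original 1997 cell):

```python
# One step of an LSTM-style cell with scalar state, for illustration only.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    # Each gate mixes the input x and the previous hidden state h_prev
    # through its own weights [w_x, w_h, bias].
    pre = {g: W[g][0] * x + W[g][1] * h_prev + W[g][2] for g in W}
    i = sigmoid(pre["input"])    # input gate: how much new content to absorb
    f = sigmoid(pre["forget"])   # forget gate: how much old state to keep
    o = sigmoid(pre["output"])   # output gate: how much state to expose
    g = math.tanh(pre["cell"])   # candidate cell update
    c = f * c_prev + i * g       # cell state: the "constant error carousel"
    h = o * math.tanh(c)         # hidden state read out from the cell
    return h, c

random.seed(0)
W = {g: [random.uniform(-1, 1) for _ in range(3)]
     for g in ("input", "forget", "output", "cell")}

h, c = 0.0, 0.0
for x in (1.0, 0.5, -0.5):       # a short input sequence
    h, c = lstm_step(x, h, c, W)
print(h, c)
```

Because the cell state `c` is updated additively rather than repeatedly squashed, gradients flowing through it do not vanish the way they do in a plain recurrent network, which is the property behind LSTM's later contest wins noted in the 2009 entry below.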
==21st century==
===2000s===
{{More citations needed section|date=March 2007}}
{| class="wikitable"
|-
!Date
! Development
|-
! 2000
| Interactive robopets ("[[smart toy]]s") become commercially available, realizing the vision of the 18th-century novelty toy makers.
|-
! 2002
| [[iRobot]]'s [[Roomba]] autonomously vacuums the floor while navigating and avoiding obstacles.
|-
! rowspan=3 | 2004
| OWL [[Web Ontology Language]] becomes a [[World Wide Web Consortium|W3C]] Recommendation (10 February 2004).
|-
| [[Defense Advanced Research Projects Agency|DARPA]] introduces the [[DARPA Grand Challenge]] requiring competitors to produce autonomous vehicles for prize money.
|-
| [[NASA]]'s robotic exploration rovers [[Spirit rover|Spirit]] and [[Opportunity rover|Opportunity]] autonomously navigate the surface of [[Mars]].
|-
! rowspan=3 | 2005
| [[Honda]]'s [[ASIMO]] robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering [[tray]]s to customers in restaurant settings.
|-
| [[Recommendation technology]] based on tracking web activity or media usage brings AI to marketing. See [[TiVo|TiVo Suggestions]].
|-
| [[Blue Brain]] is born, a project to simulate the brain at molecular detail.<ref>{{cite web|url=http://bluebrain.epfl.ch/|title=Bluebrain – EPFL|website=bluebrain.epfl.ch|access-date=2 January 2009|archive-date=19 March 2019|archive-url=https://web.archive.org/web/20190319213929/https://bluebrain.epfl.ch/|url-status=live}}</ref>
|-
! 2006
| The Dartmouth Artificial Intelligence Conference: The Next 50 Years ([[AI@50]]) is held (14–16 July 2006).
|-
! rowspan=3 | 2007
| [[Phil. Trans. R. Soc. B|Philosophical Transactions of the Royal Society, B – Biology]], one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled ''Models of Natural [[action selection|Action Selection]]''<ref>{{cite web |url=http://www.pubs.royalsoc.ac.uk/index.cfm?page=1318 |title=Modelling natural action selection |publisher=Pubs.royalsoc.ac.uk |access-date=24 November 2008 |archive-date=30 September 2007 |archive-url=https://web.archive.org/web/20070930183534/http://www.pubs.royalsoc.ac.uk/index.cfm?page=1318 |url-status=live }}</ref>
|-
| [[Checkers]] is [[solved game|solved]] by a team of researchers at the [[University of Alberta]].
|-
| [[DARPA]] launches the [[DARPA Grand Challenge#2007 Urban Challenge|Urban Challenge]] for [[autonomous cars]] to obey traffic rules and operate in an urban environment.
|-
! 2008
|Cynthia Mason at Stanford presents her idea of artificial compassionate intelligence in her paper "Giving Robots Compassion".<ref>{{Cite web|url=https://www.researchgate.net/publication/260230014|title=Giving Robots Compassion, C. Mason, Conference on Science and Compassion, Poster Session, Telluride, Colorado, 2012.|website=ResearchGate|language=en|access-date=2019-07-17}}</ref>
|-
! rowspan=2 | 2009
| An [[LSTM]] trained by [[connectionist temporal classification]]<ref name="graves2006">{{Cite journal|last1=Graves|first1=Alex|author1-link=Alex Graves (computer scientist) | last2=Fernández|first2=Santiago|last3=Gomez|first3=Faustino|last4=Schmidhuber|first4=Juergen|author4-link=Juergen Schmidhuber| date=2006|title=Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks|journal=Proceedings of the International Conference on Machine Learning, ICML 2006|pages=369–376|citeseerx=10.1.1.75.6306}}</ref> was the first [[recurrent neural network]] to win [[pattern recognition]] contests, winning three competitions in connected [[handwriting recognition]].<ref>Graves, Alex; and Schmidhuber, Jürgen; ''Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks'', in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), ''Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC'', Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552</ref>{{sfn|Schmidhuber|2022}}
|-
| [[Google]] builds an autonomous car.<ref>{{cite web|last=Fisher|first=Adam|title=Inside Google's Quest To Popularize Self-Driving Cars|url=http://www.popsci.com/cars/article/2013-09/google-self-driving-car|work=Popular Science|date=18 September 2013 |access-date=10 October 2013|archive-date=22 September 2013|archive-url=https://web.archive.org/web/20130922192206/http://www.popsci.com/cars/article/2013-09/google-self-driving-car|url-status=live}}</ref>
|}
===2010s===
{| class="wikitable"
|-
!Date
! Development
|-
! 2010
| [[Microsoft]] launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the [http://research.microsoft.com/en-us/groups/vision/default.aspx Computer Vision group] at [[Microsoft Research#Laboratories|Microsoft Research]], Cambridge.<ref>{{cite web|url=http://research.microsoft.com/en-us/people/jamiesho/|title=Jamie Shotton at Microsoft Research|website=Microsoft Research|access-date=3 February 2016|archive-date=3 February 2016|archive-url=https://web.archive.org/web/20160203165349/http://research.microsoft.com/en-us/people/jamiesho/|url-status=live}}</ref><ref>{{cite web|url=http://research.microsoft.com/en-us/projects/vrkinect/|title=Human Pose Estimation for Kinect – Microsoft Research|access-date=3 February 2016|archive-date=3 February 2016|archive-url=https://web.archive.org/web/20160203224732/http://research.microsoft.com/en-us/projects/vrkinect/|url-status=live}}</ref>
|-
! rowspan=2 | 2011
| [[Mary Lou Maher]] and [[Doug Fisher (academic)|Doug Fisher]] organize the First [[AAAI]] Workshop on AI and Sustainability.<ref>{{Cite web|url=http://dts-web1.it.vanderbilt.edu/~fisherdh//AI-Design-Sustainability.html|title=AAAI Spring Symposium - AI and Design for Sustainability|access-date=29 July 2019|archive-date=29 July 2019|archive-url=https://web.archive.org/web/20190729063022/http://dts-web1.it.vanderbilt.edu/~fisherdh//AI-Design-Sustainability.html|url-status=live}}</ref>
|-
| [[IBM]]'s [[IBM Watson|Watson]] computer defeated [[television]] [[game show]] ''[[Jeopardy!]]'' champions [[Brad Rutter|Rutter]] and [[Ken Jennings|Jennings]].
|-
! rowspan=1 | 2011–2014
| [[Apple Inc.|Apple]]'s [[Siri]] (2011), [[Google]]'s [[Google Now]] (2012) and [[Microsoft]]'s [[Cortana (virtual assistant)|Cortana]] (2014) are [[smartphone]] [[application software|apps]] that use [[natural language]] to answer questions, make recommendations and perform actions.
|-
! rowspan=1 | 2012
| [[AlexNet]], a [[deep learning]] model developed by [[Alex Krizhevsky]], wins the [[ImageNet Large Scale Visual Recognition Challenge]] with half as many errors as the second-place winner.<ref>{{Harvtxt|Christian|2020|p=24}}; {{Harvtxt|Russell|Norvig|2021|p=26}}</ref> This is a turning point in the history of AI; over the next few years dozens of other approaches to image recognition were abandoned in favor of [[deep learning]].{{sfnp|Wong|2023}} Krizhevsky is among the first to use GPU chips to train a deep learning network.{{sfn|Christian|2020|p=25}}
|-
! rowspan=2 | 2013
| [[Robot]] HRP-2, built by SCHAFT Inc. of [[Japan]], a subsidiary of [[Google]], defeats 15 teams to win [[DARPA]]'s [[DARPA Robotics Challenge#Trials|Robotics Challenge Trials]]. HRP-2 scored 27 out of 32 points across eight disaster-response tasks: driving a vehicle, walking over debris, climbing a ladder, removing debris, walking through doors, cutting through a wall, closing valves and connecting a hose.<ref>{{cite web|title=DARPA Robotics Challenge Trials|url=http://www.theroboticschallenge.org/|publisher=US Defense Advanced Research Projects Agency|access-date=25 December 2013|url-status=dead|archive-url=https://web.archive.org/web/20150611162358/http://theroboticschallenge.org/|archive-date=11 June 2015}}</ref>
|-
| [[Never-Ending Language Learning|NEIL]], the Never Ending Image Learner, is released at [[Carnegie Mellon University]] to constantly compare and analyze relationships between different images.<ref>{{cite web|url=http://www.cmu.edu/news/stories/archives/2013/november/nov20_webcommonsense.html|title=Carnegie Mellon Computer Searches Web 24/7 To Analyze Images and Teach Itself Common Sense|access-date=15 June 2015|archive-date=3 July 2015|archive-url=https://web.archive.org/web/20150703070024/http://www.cmu.edu/news/stories/archives/2013/november/nov20_webcommonsense.html|url-status=live}}</ref>
|-
! rowspan=4 | 2015
|Two techniques were developed concurrently to train very deep networks: the [[highway network]],<ref name="highway20152">{{cite arXiv |eprint=1505.00387 |class=cs.LG |first1=Rupesh Kumar |last1=Srivastava |first2=Klaus |last2=Greff |title=Highway Networks |date=2 May 2015 |last3=Schmidhuber |first3=Jürgen}}</ref> and the [[residual neural network]] (ResNet).<ref name="resnet20152">{{Cite book |last1=He |first1=Kaiming |last2=Zhang |first2=Xiangyu |last3=Ren |first3=Shaoqing |last4=Sun |first4=Jian |title=2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |chapter=Deep Residual Learning for Image Recognition |date=2016 |publisher=IEEE |pages=770–778 |arxiv=1512.03385 |doi=10.1109/CVPR.2016.90 |isbn=978-1-4673-8851-1 }}</ref> These made it possible to train networks with more than 1,000 layers.
|-
|In January 2015, [[Stephen Hawking]], [[Elon Musk]], and dozens of artificial intelligence experts signed an [[Open letter on artificial intelligence (2015)|open letter on artificial intelligence]] calling for research on the societal impacts of AI.<ref name=telegraph>{{cite news |last1=Sparkes |first1=Matthew |date=13 January 2015 |title=Top scientists call for caution over artificial intelligence|url=https://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html|access-date=24 April 2015 |work=[[The Daily Telegraph|The Telegraph (UK)]]}}</ref><ref>{{cite web |title=Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter |url=https://futureoflife.org/open-letter/ai-open-letter/ |publisher=Future of Life Institute |access-date=14 September 2023 }}</ref>
|-
|In July 2015, an open letter to ban development and use of autonomous weapons was signed by [[Stephen Hawking|Hawking]], [[Elon Musk|Musk]], [[Steve Wozniak|Wozniak]] and 3,000 researchers in AI and robotics.<ref>{{cite news|last1=Tegmark|first1=Max|title=Open Letter on Autonomous Weapons|url=http://futureoflife.org/open-letter-autonomous-weapons/|newspaper=Future of Life Institute|access-date=25 April 2016|archive-date=28 April 2016|archive-url=https://web.archive.org/web/20160428082500/http://futureoflife.org/open-letter-autonomous-weapons/|url-status=live}}</ref>
|-
| [[Google]] [[Google DeepMind|DeepMind]]'s [[AlphaGo]] (version: Fan)<ref name=":0">{{cite journal|first1= David|last1= Silver|author-link1= David Silver (programmer)|first2= Julian|last2= Schrittwieser|first3= Karen|last3= Simonyan|first4= Ioannis|last4= Antonoglou|first5= Aja|last5= Huang|author-link5= Aja Huang|first6= Arthur|last6= Guez|first7= Thomas|last7= Hubert|first8= Lucas|last8= Baker|first9= Matthew|last9= Lai|first10= Adrian|last10= Bolton|first11= Yutian|last11= Chen|author-link11= Chen Yutian|first12= Timothy|last12= Lillicrap|first13= Hui|last13= Fan|author-link13= Fan Hui|first14= Laurent|last14= Sifre|first15= George van den|last15= Driessche|first16= Thore|last16= Graepel|first17= Demis|last17= Hassabis|author-link17= Demis Hassabis|title= Mastering the game of Go without human knowledge|journal= [[Nature (journal)|Nature]]|issn= 0028-0836|pages= 354–359|volume= 550|issue= 7676|doi= 10.1038/nature24270|date= 19 October 2017|pmid= 29052630|bibcode= 2017Natur.550..354S|s2cid= 205261034|url= https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf|access-date= 27 September 2020|archive-date= 24 November 2020|archive-url= https://web.archive.org/web/20201124151015/https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf|url-status= live}}{{closed access}}</ref> defeated three-time European Go champion 2 dan professional [[Fan Hui]] by 5 games to 0.<ref>{{cite web|last1=Hassabis|first1=Demis|title=AlphaGo: using machine learning to master the ancient game of Go|url=https://googleblog.blogspot.co.il/2016/01/alphago-machine-learning-game-go.html|website=Google Blog|date=27 January 2016|access-date=25 April 2016|archive-date=7 May 2016|archive-url=https://web.archive.org/web/20160507135749/https://googleblog.blogspot.co.il/2016/01/alphago-machine-learning-game-go.html|url-status=live}}</ref>
|-
! 2016
| [[Google]] [[Google DeepMind|DeepMind]]'s [[AlphaGo]] (version: Lee)<ref name=":0" /> defeated [[Lee Sedol]] 4–1. Lee Sedol is a 9 dan professional Korean [[Go (game)|Go]] champion who won 27 major tournaments from 2002 to 2016.<ref>{{cite web|last1=Ormerod|first1=David|title=AlphaGo defeats Lee Sedol 4–1 in Google DeepMind Challenge Match|url=https://gogameguru.com/alphago-defeats-lee-sedol-4-1/|website=Go Game Guru|access-date=25 April 2016|archive-date=17 March 2016|archive-url=https://web.archive.org/web/20160317095008/https://gogameguru.com/alphago-defeats-lee-sedol-4-1/|url-status=usurped}}</ref>
|-
! rowspan=8 | 2017
| The [[Asilomar Conference on Beneficial AI]] was held to discuss [[AI ethics]] and how to bring about [[beneficial AI]] while avoiding the [[existential risk from artificial general intelligence]].
|-
|Deepstack<ref>{{Cite journal|last1=Moravčík|first1=Matej|last2=Schmid|first2=Martin|last3=Burch|first3=Neil|last4=Lisý|first4=Viliam|last5=Morrill|first5=Dustin|last6=Bard|first6=Nolan|last7=Davis|first7=Trevor|last8=Waugh|first8=Kevin|last9=Johanson|first9=Michael|last10=Bowling|first10=Michael|date=2017-05-05|title=DeepStack: Expert-level artificial intelligence in heads-up no-limit poker|journal=Science|language=en|volume=356|issue=6337|pages=508–513|doi=10.1126/science.aam6960|issn=0036-8075|pmid=28254783|arxiv=1701.01724|bibcode=2017Sci...356..508M|s2cid=1586260}}</ref> is the first published algorithm to beat human players in imperfect-information games, as shown with statistical significance on heads-up no-limit [[poker]]. Soon after, the poker AI [[Libratus]], developed by a different research group, individually defeated each of its four human opponents—among the best players in the world—at an exceptionally high aggregate win rate, over a statistically significant sample.<ref>{{Cite news|url=https://www.pokerlistings.com/libratus-poker-ai-smokes-humans-for-1-76m-is-this-the-end-42839|title=Libratus Poker AI Beats Humans for $1.76m; Is End Near?|date=30 January 2017|newspaper=PokerListings|access-date=2018-03-16|archive-date=17 March 2018|archive-url=https://web.archive.org/web/20180317035800/https://www.pokerlistings.com/libratus-poker-ai-smokes-humans-for-1-76m-is-this-the-end-42839|url-status=live}}</ref> In contrast to chess and Go, poker is an [[imperfect information]] game.<ref name="the Guardian">{{cite news|last1=Solon|first1=Olivia|title=Oh the humanity! Poker computer trounces humans in big step for AI|url=https://www.theguardian.com/technology/2017/jan/30/libratus-poker-artificial-intelligence-professional-human-players-competition|access-date=19 March 2018|work=The Guardian|date=30 January 2017|archive-date=8 April 2018|archive-url=https://web.archive.org/web/20180408143136/https://www.theguardian.com/technology/2017/jan/30/libratus-poker-artificial-intelligence-professional-human-players-competition|url-status=live}}</ref>
|-
| In May 2017, [[Google]] [[Google DeepMind|DeepMind]]'s [[Master (software)|AlphaGo (version: Master)]] beat [[Ke Jie]], who had held the world No. 1 ranking continuously for two years,<ref>{{Cite web|title=柯洁迎19岁生日 雄踞人类世界排名第一已两年|url=http://sports.sina.com.cn/go/2016-08-02/doc-ifxunyya3020238.shtml|language=zh|date=May 2017|access-date=4 September 2021|archive-date=11 August 2017|archive-url=https://web.archive.org/web/20170811222849/http://sports.sina.com.cn/go/2016-08-02/doc-ifxunyya3020238.shtml|url-status=live}}</ref><ref>{{Cite web|url=http://www.goratings.org/|title=World's Go Player Ratings|date=24 May 2017|access-date=4 September 2021|archive-date=1 April 2017|archive-url=https://web.archive.org/web/20170401123616/https://www.goratings.org/|url-status=live}}</ref> winning each game in a [[AlphaGo versus Ke Jie|three-game match]] during the [[Future of Go Summit]].<ref name="wuzhensecond">{{cite magazine|url=https://www.wired.com/2017/05/googles-alphago-continues-dominance-second-win-china/|title=Google's AlphaGo Continues Dominance With Second Win in China|magazine=Wired|date=2017-05-25|access-date=4 September 2021|archive-date=27 May 2017|archive-url=https://web.archive.org/web/20170527103927/https://www.wired.com/2017/05/googles-alphago-continues-dominance-second-win-china/|url-status=live}}</ref><ref>{{cite magazine|url=https://www.wired.com/2017/05/win-china-alphagos-designers-explore-new-ai/|title=After Win in China, AlphaGo's Designers Explore New AI|magazine=Wired|date=2017-05-27|access-date=4 September 2021|archive-date=2 June 2017|archive-url=https://web.archive.org/web/20170602234726/https://www.wired.com/2017/05/win-china-alphagos-designers-explore-new-ai/|url-status=live}}</ref>
|-
|A [[propositional logic]] [[boolean satisfiability problem]] (SAT) solver proves a long-standing mathematical conjecture on [[Pythagorean triples]] over the set of integers. The initial proof, 200 TB in size, was checked by two independent certified automatic proof checkers.<ref>{{cite web|url=https://cacm.acm.org/magazines/2017/8/219606-the-science-of-brute-force/fulltext|title=The Science of Brute Force|date=August 2017|work=ACM Communications|access-date=5 October 2018|archive-date=29 August 2017|archive-url=https://web.archive.org/web/20170829161450/https://cacm.acm.org/magazines/2017/8/219606-the-science-of-brute-force/fulltext|url-status=live}}</ref>
|-
|An [[OpenAI]] [[video game bot|bot]] using machine learning played at [[The International 2017]] ''[[Dota 2]]'' tournament in August 2017. It won during a [[1v1]] demonstration game against professional ''Dota 2'' player [[Dendi (Dota player)|Dendi]].<ref>{{cite news|url=https://blog.openai.com/dota-2/|title=Dota 2|newspaper=Openai Blog |date=11 August 2017|access-date=7 November 2017|archive-date=11 August 2017|archive-url=https://web.archive.org/web/20170811235617/https://blog.openai.com/dota-2/|url-status=live}}</ref>
|-
|[[Google Lens]], an image analysis and comparison tool released in October 2017, associates millions of landscapes, artworks, products and species with their text descriptions.
|-
|Google DeepMind revealed that AlphaGo Zero—an improved version of AlphaGo—displayed significant performance gains while using far fewer [[tensor processing unit]]s than AlphaGo Lee (it used the same number of TPUs as AlphaGo Master).<ref name=":0" /> Unlike previous versions, which learned the game by observing millions of human moves, AlphaGo Zero learned by playing only against itself. The system then defeated AlphaGo Lee 100 games to zero, and defeated AlphaGo Master 89 to 11.<ref name=":0" /> Although such self-play learning is a step forward, much has yet to be learned about general intelligence.<ref>{{cite web|title=AI versus AI: Self-Taught AlphaGo Zero Vanquishes Its Predecessor|url=https://www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/|first=Larry|last=Greenemeier|date=18 October 2017|publisher=Scientific American|access-date=18 October 2017|archive-date=19 October 2017|archive-url=https://web.archive.org/web/20171019230611/https://www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/|url-status=live}}</ref> AlphaZero masters chess in four hours, defeating the best chess engine, [[Stockfish (chess)|Stockfish]] 8. AlphaZero won 28 out of 100 games, and the remaining 72 games ended in a draw.
|-
|The [[Transformer (machine learning model)|transformer]] architecture was introduced, leading to new kinds of [[large language model]]s such as [[BERT (language model)|BERT]] by Google, followed by the [[generative pre-trained transformer]] type of model introduced by OpenAI.
|-
! rowspan=3 | 2018
|Alibaba's language-processing AI outscores top humans on a Stanford University reading-comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.<ref>[https://www.bloomberg.com/news/articles/2018-01-15/alibaba-s-ai-outgunned-humans-in-key-stanford-reading-test Alibaba's AI Outguns Humans in Reading Test] {{Webarchive|url=https://web.archive.org/web/20180117011914/https://www.bloomberg.com/news/articles/2018-01-15/alibaba-s-ai-outgunned-humans-in-key-stanford-reading-test |date=17 January 2018 }}. 15 January 2018</ref>
|-
|The European Lab for Learning and Intelligent Systems (ELLIS) is proposed as a pan-European competitor to American AI efforts, with the aim of staving off a [[Brain-drain|brain drain]] of talent, along the lines of [[CERN]] after World War II.<ref>{{Cite news |url=https://www.theguardian.com/science/2018/apr/23/scientists-plan-huge-european-ai-hub-to-compete-with-us |title=Scientists plan huge European AI hub to compete with US |last=Sample |first=Ian |date=23 April 2018 |work=The Guardian |access-date=2018-04-23 |edition=US |archive-date=24 April 2018 |archive-url=https://web.archive.org/web/20180424024625/https://www.theguardian.com/science/2018/apr/23/scientists-plan-huge-european-ai-hub-to-compete-with-us |url-status=live }}</ref>
|-
|Announcement of [[Google Duplex]], a service to allow an AI assistant to book appointments over the phone. The ''[[Los Angeles Times]]'' judges the AI's voice to be a "nearly flawless" imitation of human-sounding speech.<ref>{{cite news|last1=Pierson|first1=David|title=Should people know they're talking to an algorithm? After a controversial debut, Google now says yes|url=https://www.latimes.com/business/technology/la-fi-tn-virtual-assistants-20180509-story.html|access-date=17 May 2018|work=Los Angeles Times|date=2018|archive-date=17 May 2018|archive-url=https://web.archive.org/web/20180517065036/http://www.latimes.com/business/technology/la-fi-tn-virtual-assistants-20180509-story.html|url-status=live}}</ref>
|-
! 2019
|DeepMind's AlphaStar reaches Grandmaster level at ''StarCraft II'', outperforming 99.8 percent of human players.<ref>{{cite news|last1=Sample|first1=Ian|title=AI becomes grandmaster in 'fiendishly complex' StarCraft II|url=https://www.theguardian.com/technology/2019/oct/30/ai-becomes-grandmaster-in-fiendishly-complex-starcraft-ii|access-date=30 July 2021|work=The Guardian|date=2019|archive-date=29 December 2020|archive-url=https://web.archive.org/web/20201229185547/https://www.theguardian.com/technology/2019/oct/30/ai-becomes-grandmaster-in-fiendishly-complex-starcraft-ii|url-status=live}}</ref>
|}
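The 2015 entry above on highway networks and ResNets turns on a single idea: each layer adds a learned transformation to an identity "skip" path, so the output is x + f(x) rather than f(x) alone. A minimal one-dimensional sketch of that idea (not taken from either paper; the toy `layer` function and the uniform weights are assumptions chosen only to illustrate why the identity path keeps signals from vanishing in deep stacks):

```python
# Toy illustration of residual (skip) connections vs. a plain deep stack.

def layer(x, w):
    # A toy nonlinear transformation f(x): ReLU(w * x).
    return max(0.0, w * x)

def plain_net(x, weights):
    # Plain stacking y = f(x): with sub-unit weights the signal shrinks
    # multiplicatively with depth.
    for w in weights:
        x = layer(x, w)
    return x

def residual_net(x, weights):
    # Residual stacking y = x + f(x): the identity path preserves the
    # input, so the signal survives arbitrary depth.
    for w in weights:
        x = x + layer(x, w)
    return x

if __name__ == "__main__":
    ws = [0.5] * 30  # a 30-layer stack with weight 0.5 at every layer
    print(plain_net(1.0, ws))     # collapses toward zero with depth
    print(residual_net(1.0, ws))  # the input signal is preserved
```

The same contrast, applied to gradients instead of activations, is the usual informal explanation of why both papers could train networks with more than 1,000 layers.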
===2020s===
{{See also|2020s in computing}}
{{update|part=section|date=September 2023}}
[[File:20250202 "AI" (search term) on Google Trends.svg|thumb|The number of the public's Google searches for the term "AI" began to accelerate in 2022.]]
{| class="wikitable"
|-
!Date
! Development
|-
! rowspan=3 | 2020
|In February 2020, Microsoft introduces its Turing Natural Language Generation (T-NLG), which is the "largest language model ever published at 17 billion parameters".<ref name="Wired_Sterling_20200213">{{Cite magazine| issn = 1059-1028| last = Sterling| first = Bruce| title = Web Semantics: Microsoft Project Turing introduces Turing Natural Language Generation (T-NLG)| magazine = Wired| access-date = July 31, 2020| date = February 13, 2020| url = https://www.wired.com/beyond-the-beyond/2020/02/web-semantics-microsoft-project-turing-introduces-turing-natural-language-generation-t-nlg/| archive-date = 4 November 2020| archive-url = https://web.archive.org/web/20201104163637/https://www.wired.com/beyond-the-beyond/2020/02/web-semantics-microsoft-project-turing-introduces-turing-natural-language-generation-t-nlg/| url-status = live}}</ref>
|-
|In November 2020, [[AlphaFold]] 2 by DeepMind, a model that performs [[Protein structure prediction|predictions of protein structure]], wins the [[CASP]] competition.<ref>{{cite news |last1=Sample |first1=Ian |title=Google's DeepMind predicts 3D shapes of proteins |url=https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins |access-date=19 July 2019 |work=The Guardian |date=2 December 2018}}</ref>
|-
|[[OpenAI]] introduces [[GPT-3]], a state-of-the-art autoregressive language model that uses [[deep learning]] to produce computer code, poetry and other text exceptionally similar to, and almost indistinguishable from, that written by humans. Its capacity was ten times greater than that of the T-NLG. It was introduced in May 2020,<ref name="arXiv_Brown_20200722">{{cite arXiv| last1 = Brown| first1 = Tom B.| last2 = Mann| first2 = Benjamin| last3 = Ryder| first3 = Nick| last4 = Subbiah| first4 = Melanie| last5 = Kaplan| first5 = Jared| last6 = Dhariwal| first6 = Prafulla| title = Language Models are Few-Shot Learners|date = July 22, 2020| class = cs.CL| eprint = 2005.14165}}</ref> and was in beta testing in June 2020.
|-
! rowspan=2 | 2022
| [[ChatGPT]], an AI [[chatbot]] developed by [[OpenAI]], debuts in November 2022. It is initially built on top of the {{nowrap|[[GPT-3.5]]}} [[large language model]]. While it gains considerable praise for the breadth of its knowledge base, deductive abilities, and the human-like fluidity of its natural language responses,<ref>{{Cite web |last=Thompson |first=Derek |date=December 8, 2022 |title=Breakthroughs of the Year |url=https://www.theatlantic.com/newsletters/archive/2022/12/technology-medicine-law-ai-10-breakthroughs-2022/672390/ |access-date=December 18, 2022 |website=[[The Atlantic]] |archive-date=January 15, 2023 |archive-url=https://web.archive.org/web/20230115142130/https://www.theatlantic.com/newsletters/archive/2022/12/technology-medicine-law-ai-10-breakthroughs-2022/672390/ |url-status=live}}</ref><ref>{{cite news |last1=Scharth |first1=Marcel |title=The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive |url=https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908 |date=December 5, 2022 |access-date=December 30, 2022 |work=The Conversation |language=en |archive-date=January 19, 2023 |archive-url=https://web.archive.org/web/20230119175104/https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908 |url-status=live}}</ref> it also garners criticism for, among other things, its tendency to "[[Hallucination (artificial intelligence)|hallucinate]]",<ref>{{cite news |title=ChatGPT a 'landmark event' for AI, but what does it mean for the future of human labor and disinformation? 
|url=https://www.cbc.ca/radio/thecurrent/chatgpt-human-labour-and-fake-news-1.6686210 |access-date=December 18, 2022 |work=CBC |date=December 15, 2022 |first=Mouhamad |last=Rachini |archive-date=January 19, 2023 |archive-url=https://web.archive.org/web/20230119175104/https://www.cbc.ca/radio/thecurrent/chatgpt-human-labour-and-fake-news-1.6686210 |url-status=live}}</ref><ref name="TheVergeStackOverflow">{{Cite web |last=Vincent |first=James |date=December 5, 2022 |title=AI-generated answers temporarily banned on coding Q&A site Stack Overflow |url=https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers |access-date=December 5, 2022 |work=[[The Verge]] |language=en-US |archive-date=January 17, 2023 |archive-url=https://web.archive.org/web/20230117153621/https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers |url-status=live}}</ref> a phenomenon in which an AI responds with factually incorrect answers with high confidence. 
The release triggers widespread public discussion on artificial intelligence and its potential impact on society.<ref name="BloombergCowen">{{Cite web |url=https://www.bloomberg.com/opinion/articles/2022-12-06/chatgpt-ai-could-make-democracy-even-more-messy |title=ChatGPT Could Make Democracy Even More Messy |date= December 6, 2022 |last=Cowen |first=Tyler |author-link=Tyler Cowen |work=[[Bloomberg News]] |access-date=December 6, 2022 |archive-date=December 7, 2022 |archive-url=https://web.archive.org/web/20221207105203/https://www.bloomberg.com/opinion/articles/2022-12-06/chatgpt-ai-could-make-democracy-even-more-messy |url-status=live}}</ref><ref>{{cite news |title=The Guardian view on ChatGPT: an eerily good human impersonator |url=https://www.theguardian.com/commentisfree/2022/dec/08/the-guardian-view-on-chatgpt-an-eerily-good-human-impersonator |access-date=December 18, 2022 |work=The Guardian |date= December 8, 2022 |language=en |archive-date=January 16, 2023 |archive-url=https://web.archive.org/web/20230116161202/https://www.theguardian.com/commentisfree/2022/dec/08/the-guardian-view-on-chatgpt-an-eerily-good-human-impersonator |url-status=live}}</ref>
|-
|A November 2022 class action lawsuit against [[Microsoft]], [[GitHub]] and [[OpenAI]] alleges that [[GitHub Copilot]], an AI-powered code editing tool trained on public GitHub repositories, violates the copyrights of the repositories' authors, noting that the tool is able to generate source code which matches its training data verbatim, without providing attribution.<ref name="Verge copilot">{{Cite web |last=Vincent |first=James |date=2022-11-08 |title=The lawsuit that could rewrite the rules of AI copyright |url=https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data |access-date=2022-12-07 |website=The Verge |language=en-US}}</ref>
|-
! rowspan="17" | 2023
|By January 2023, [[ChatGPT]] has more than 100 million users, making it the fastest-growing consumer application to date.<ref>{{Cite news |last=Milmo |first=Dan |date=December 2, 2023 |title=ChatGPT reaches 100 million users two months after launch |language=en-GB |work=The Guardian |url=https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app |access-date=February 3, 2023 |issn=0261-3077 |archive-date=February 3, 2023 |archive-url=https://web.archive.org/web/20230203051356/https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app |url-status=live}}</ref>
|-
|On January 16, 2023, three artists, [[Sarah Andersen]], Kelly McKernan, and Karla Ortiz, file a class-action [[copyright infringement]] lawsuit against [[Stability AI]], [[Midjourney]], and [[DeviantArt]], claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists.<ref>{{Cite web|url=https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart|title=AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit|first=James|last=Vincent|date=January 16, 2023|website=The Verge}}</ref>
|-
|On January 17, 2023, Stability AI is sued in London by [[Getty Images]] for using its images in their training data without purchasing a license.<ref name="cnn-getty">{{Cite web |last=Korn |first=Jennifer |date=2023-01-17 |title=Getty Images suing the makers of popular AI art tool for allegedly stealing photos |url=https://www.cnn.com/2023/01/17/tech/getty-images-stability-ai-lawsuit/index.html |access-date=2023-01-22 |website=CNN |language=en}}</ref><ref name="GettyPress23">{{cite web |url=https://newsroom.gettyimages.com/en/getty-images/getty-images-statement |title= Getty Images Statement|author=<!--Not stated--> |date=17 January 2023 |website=newsroom.gettyimages.com/ |publisher=CNN |access-date=24 January 2023 |quote=}}</ref>
|-
| Getty files another suit against Stability AI in a US district court in Delaware on February 6, 2023. In the suit, Getty again alleges copyright infringement for the use of its images in the training of [[Stable Diffusion]], and further argues that the model infringes Getty's [[trademark]] by generating images with Getty's [[watermark]].<ref name=ars-getty>{{cite web
|work=Ars Technica
|title=Getty sues Stability AI for copying 12M photos and imitating famous watermark
|last=Belanger|first=Ashley
|date=6 February 2023
|url=https://arstechnica.com/tech-policy/2023/02/getty-sues-stability-ai-for-copying-12m-photos-and-imitating-famous-watermark/
}}</ref>
|-
| [[OpenAI]]'s {{nowrap|[[GPT-4]]}} model is released in March 2023 and is regarded as an impressive improvement over {{nowrap|[[GPT-3.5]]}}, with the caveat that GPT-4 retains many of the same problems of the earlier iteration.<ref>{{cite news |last1=Belfield |first1=Haydn |title=If your AI model is going to sell, it has to be safe |url=https://www.vox.com/future-perfect/2023/3/25/23655082/ai-openai-gpt-4-safety-microsoft-facebook-meta |access-date=30 March 2023 |work=Vox |date=25 March 2023 |language=en |archive-date=March 28, 2023 |archive-url=https://web.archive.org/web/20230328192017/https://www.vox.com/future-perfect/2023/3/25/23655082/ai-openai-gpt-4-safety-microsoft-facebook-meta |url-status=live }}</ref> Unlike previous iterations, GPT-4 is multimodal, allowing image input as well as text. GPT-4 is integrated into ChatGPT as a subscriber service. OpenAI claims that in their own testing the model received a score of 1410 on the [[SAT]] (94th percentile),<ref name=":1">{{Cite web |date=2022 |title=SAT: Understanding Scores |url=https://satsuite.collegeboard.org/media/pdf/understanding-sat-scores.pdf |access-date=21 March 2023 |website=[[College Board]] |archive-date=March 16, 2023 |archive-url=https://web.archive.org/web/20230316022540/https://satsuite.collegeboard.org/media/pdf/understanding-sat-scores.pdf |url-status=live }}</ref> 163 on the [[LSAT]] (88th percentile), and 298 on the [[Uniform Bar Exam]] (90th percentile).<ref name="gpt4_tech_report">{{Cite arXiv |last=OpenAI |year=2023 |title=GPT-4 Technical Report |class=cs.CL |eprint=2303.08774}}</ref>
|-
|On March 7, 2023, ''[[Nature Biomedical Engineering]]'' writes that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."<ref name="ZDTUM">{{cite journal |date=7 March 2023 |title=Prepare for truly useful large language models |journal=Nature Biomedical Engineering |language=en |volume=7 |issue=2 |pages=85–86 |doi=10.1038/s41551-023-01012-6 |pmid=36882584 |s2cid=257403466|doi-access=free }}</ref>
|-
|In response to ChatGPT, [[Google]] releases in a limited capacity its chatbot [[Google Bard]], based on the [[LaMDA]] and [[PaLM]] large language models, in March 2023.<ref>{{cite news |last=Elias |first=Jennifer |date=January 31, 2023 |title=Google is asking employees to test potential ChatGPT competitors, including a chatbot called 'Apprentice Bard' |language=en |work=CNBC |url=https://www.cnbc.com/2023/01/31/google-testing-chatgpt-like-chatbot-apprentice-bard-with-employees.html |url-status=live |access-date=February 2, 2023 |archive-url=https://web.archive.org/web/20230202151722/https://www.cnbc.com/2023/01/31/google-testing-chatgpt-like-chatbot-apprentice-bard-with-employees.html |archive-date=February 2, 2023}}</ref><ref>{{cite news |last1=Elias |first1=Jennifer |title=Google asks employees to rewrite Bard's bad responses, says the A.I. 'learns best by example' |url=https://www.cnbc.com/2023/02/15/google-asks-employees-to-rewrite-bards-incorrect-responses-to-queries.html |access-date=16 February 2023 |work=CNBC |date=February 2023 |language=en |archive-date=February 16, 2023 |archive-url=https://web.archive.org/web/20230216072950/https://www.cnbc.com/2023/02/15/google-asks-employees-to-rewrite-bards-incorrect-responses-to-queries.html |url-status=live }}</ref>
|-
| On March 29, 2023, an open letter signed by Elon Musk, Steve Wozniak and more than 1,000 other tech leaders and researchers calls for a six-month pause on what it describes as "an out-of-control race" producing AI systems that their creators cannot "understand, predict, or reliably control".<ref>{{cite news |last1=Ortiz |first1=Sabrina |date=March 29, 2023 |title=Musk, Wozniak, and other tech leaders sign petition to halt further AI developments |language=en |work=ZD Net |url=https://www.zdnet.com/article/musk-wozniak-and-other-tech-leaders-sign-petition-to-halt-ai-developments/ |access-date=September 13, 2023 }}</ref><ref>{{cite web |title=Pause Giant AI Experiments: An Open Letter |publisher=Future of Life Institute |url=https://futureoflife.org/open-letter/pause-giant-ai-experiments/ |access-date=September 13, 2023 }}</ref>
|-
|In May 2023, Google announces that Bard will transition from LaMDA to PaLM 2, a significantly more advanced language model.<ref>{{Cite journal |last1=Lappalainen |first1=Yrjo |last2=Narayanan |first2=Nikesh |date=2023-06-14 |title=Aisha: A Custom AI Library Chatbot Using the ChatGPT API |url=https://www.tandfonline.com/doi/full/10.1080/19322909.2023.2221477 |journal=Journal of Web Librarianship |volume=17 |issue=3 |language=en |pages=37–58 |doi=10.1080/19322909.2023.2221477 |s2cid=259470901 |issn=1932-2909|url-access=subscription }}</ref>
|-
| In the last week of May 2023, a [[Statement on AI risk of extinction|Statement on AI Risk]] is signed by [[Geoffrey Hinton]], [[Sam Altman]], [[Bill Gates]], and many other prominent AI researchers and tech leaders with the following succinct message: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."<ref>{{cite web |title=Statement on AI Risk AI experts and public figures express their concern about AI risk. |publisher=Center for AI Safety |url=https://www.safe.ai/statement-on-ai-risk#open-letter |access-date=September 14, 2023 }}</ref><ref>{{cite news |last1=Edwards |first1=Benj |date=May 30, 2023 |title=OpenAI execs warn of "risk of extinction" from artificial intelligence in new open letter |language=en |work=Ars Technica |url=https://arstechnica.com/information-technology/2023/05/openai-execs-warn-of-risk-of-extinction-from-artificial-intelligence-in-new-open-letter/ |access-date=September 14, 2023 }}</ref>
|-
|On July 9, 2023, [[Sarah Silverman]] files a class action lawsuit against Meta and OpenAI, alleging copyright infringement in the training of their large language models on millions of authors' copyrighted works without permission.<ref>{{cite news |last1=Queen |first1=Jack |date=July 10, 2023 |title=Sarah Silverman sues Meta, OpenAI for copyright infringement |language=en |publisher=Reuters |url=https://www.reuters.com/legal/sarah-silverman-sues-meta-openai-copyright-infringement-2023-07-09/ |access-date=September 14, 2023 }}</ref>
|-
|In August 2023, the New York Times, CNN, Reuters, the Chicago Tribune, the Australian Broadcasting Corporation (ABC) and other news organizations block OpenAI's GPTBot web crawler from accessing their content, and the New York Times also updates its terms of service to disallow the use of its content in large language models.<ref>{{cite web |last1=Bogle |first1=Ariel |date=August 24, 2023 |title=New York Times, CNN and Australia's ABC block OpenAI's GPTBot web crawler from accessing content |language=en |work=The Guardian |url=https://www.theguardian.com/technology/2023/aug/25/new-york-times-cnn-and-abc-block-openais-gptbot-web-crawler-from-scraping-content |access-date=September 14, 2023 }}</ref>
|-
|On September 13, 2023, in response to growing concern about the risks of AI, the US Senate holds the inaugural bipartisan "[[A.I. Insight forums|AI Insight Forum]]", bringing together senators, CEOs, civil rights leaders and other industry representatives to familiarize senators with the nature of AI and its risks, and to discuss possible safeguards and legislation.<ref name="AI_Legislation_Coming">{{cite news |last1=Johnson |first1=Ted |date=September 13, 2023 |title=Elon Musk Says "Something Good Will Come Of This" After Senate's AI Forum, Chuck Schumer Signals AI Legislation Coming "In The General Category Of Months" — Update |language=en |work=Deadline |url=https://deadline.com/2023/09/senate-ai-insight-forum-wga-elon-musk-mark-zuckerberg-1235545470/ |access-date=September 13, 2023}}</ref> The event is organized by Senate Majority Leader [[Chuck Schumer]] (D-NY)<ref name="NYTimes_Titans">{{cite news |last1=Kang |first1=Cecelia |date=September 13, 2023 |title="In Show of Force, Silicon Valley Titans Pledge 'Getting This Right' With A.I." |language=en |work=The New York Times |url=https://www.nytimes.com/2023/09/13/technology/silicon-valley-ai-washington-schumer.html |access-date=September 13, 2023}}</ref> and chaired by US Senator [[Martin Heinrich]] (D-NM), founder and co-chair of the Senate AI Caucus.<ref>{{citation |title=Read Out: Heinrich Convenes First Bipartisan Senate AI Insight Forum |date=13 September 2023 |url=https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum |access-date=13 September 2023}}</ref> The forum is attended by more than 60 senators,<ref name="CNBC_Senate_AI_Forum">{{cite news |last1=Feiner |first1=Lauren |date=September 13, 2023 |title=Elon Musk, Mark Zuckerberg, Bill Gates and other tech leaders in closed Senate session about AI |language=en |work=CNBC |url=https://www.cnbc.com/2023/09/13/musk-zuckerberg-among-tech-leaders-visiting-senate-to-speak-about-ai-.html |access-date=September 13, 2023}}</ref> as well as [[Elon Musk]] (Tesla CEO), [[Mark Zuckerberg]] (Meta CEO), [[Sam Altman]] (OpenAI CEO), [[Sundar Pichai]] (Alphabet CEO), [[Bill Gates]] (Microsoft co-founder), [[Satya Nadella]] (Microsoft CEO), [[Jensen Huang]] (Nvidia CEO), [[Arvind Krishna]] (IBM CEO), [[Alex Karp]] (Palantir CEO), [[Charles Rivkin]] (chairman and CEO of the MPA), [[Meredith Stiehm]] (president of the Writers Guild of America West), [[Liz Shuler]] ([[AFL-CIO]] President), and [[Maya Wiley]] (CEO of the [[Leadership Conference on Civil and Human Rights]]), among others.<ref name="AI_Legislation_Coming" /><ref name="NYTimes_Titans" /><ref name="CNBC_Senate_AI_Forum" />
|-
|In October 2023, AlpineGate AI Technologies Inc. CEO John Godel announces the launch of the company's AI suite, AGImageAI, along with its proprietary GPT model, AlbertAGPT.<ref>{{Cite web | title=AI Design for AlpineGate | url=https://alpinegateai.com/ | archive-url=https://web.archive.org/web/20230724011611/https://alpinegateai.com/ | archive-date=2023-07-24}}</ref>
|-
|On October 30, 2023, US President Biden signs the ''[[Executive Order 14110|Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence]].''<ref>{{cite news |last1=Morrison |first1=Sara |date=October 31, 2023 |title=President Biden's new plan to regulate AI. Now comes the hard part: Congress. |language=en |work=Vox News |url=https://www.vox.com/technology/2023/10/31/23939157/biden-ai-executive-order |access-date=November 3, 2023}}</ref><ref>{{citation |title=Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence |date=30 October 2023 |url=https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ |access-date=3 November 2023}}</ref>
|-
|In November 2023, the first global [[2023 AI Safety Summit|AI Safety Summit]] is held at [[Bletchley Park]] in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.<ref>{{Cite news |last=Milmo |first=Dan |date=3 November 2023 |title=Hope or Horror? The great AI debate dividing its pioneers |pages=10–12 |work=[[The Guardian Weekly]]}}</ref> 28 countries, including the United States, China, and the European Union, issue a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.<ref name="2023-11-01-bletchley-declaration-full">{{cite web |title=The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 |url=https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 |website=GOV.UK |access-date=2 November 2023 |archive-url=https://web.archive.org/web/20231101123904/https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 |archive-date=1 November 2023 |date=1 November 2023}}</ref><ref>{{Cite press release |title=Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration |url=https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchley-declaration |access-date=1 November 2023 |website=GOV.UK |archive-date=1 November 2023 |archive-url=https://web.archive.org/web/20231101115016/https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchley-declaration |url-status=live }}</ref>
|-
|In December, [[Google]] announces [[Gemini (language model)|Gemini]] 1.0; its most capable version, Gemini Ultra, becomes generally available in February 2024.
|-
! rowspan="7" |2024
|On February 15, 2024, [[Google]] releases [[Gemini 1.5]] in limited beta, with a context window of up to 1 million tokens.
|-
|Also, on February 15, 2024, [[OpenAI]] publicly announces [[Sora (text-to-video model)|Sora]], a text-to-video model for generating videos up to a minute long.
|-
|[[Google DeepMind]] unveils [[AlphaFold|AlphaFold 3]], which predicts the structure of proteins, DNA, RNA and their interactions, with applications in drug discovery and disease research.
|-
|On February 22, [[Stability AI]] announces [[Stable Diffusion]] 3, using an architecture similar to that of [[Sora (text-to-video model)|Sora]].
|-
|On May 14, [[Google]] begins adding "AI Overviews" to Google Search results.
|-
|On June 10, [[Apple Inc.|Apple]] announces [[Apple Intelligence]], a suite of AI features for new [[iPhone]]s that includes optional [[ChatGPT]] integration in [[Siri]].
|-
|On October 9, [[Demis Hassabis]], co-founder and CEO of [[Google DeepMind]] and [[Isomorphic Labs]], and Google DeepMind director John Jumper are jointly awarded one half of the 2024 [[Nobel Prize in Chemistry]] for developing [[AlphaFold]], an AI system that predicts the 3D structure of proteins from their amino acid sequences.
|-
! rowspan="6" |2025
|On February 6, [[Mistral AI]] releases Le Chat, an AI assistant able to respond at up to 1,000 words per second.<ref>{{Cite news|last=Dillet|first=Romain|date=6 February 2025|title=Mistral releases its AI assistant on iOS and Android|url=https://techcrunch.com/2025/02/06/mistral-releases-its-ai-assistant-on-ios-and-android/|access-date=February 11, 2025|work=[[TechCrunch]]|language=en-us}}</ref>
|-
|On February 10 and 11, [[France]] hosts the [[Artificial Intelligence Action Summit]].<ref>{{Cite news |url=https://www.rfi.fr/en/science-and-technology/20250206-weds-pm-paris-ai-summit-what-french-developers-expect-etcy |title=Paris hosts AI summit, with spotlight on innovation, regulation, creativity |date=6 February 2025 |access-date=11 February 2025 |work=[[Radio France Internationale]]}}</ref> 61 countries, including [[China]], [[India]], [[Japan]], France and [[Canada]], sign a declaration on "inclusive and sustainable" AI,<ref>{{Cite news|last=Dillet|first=Romain|date=11 February 2025|title=As US and UK refuse to sign the Paris AI Action Summit statement, other countries commit to developing 'open, inclusive, ethical' AI.|url=https://techcrunch.com/2025/02/11/as-us-and-uk-refuse-to-sign-ai-action-summit-statement-countries-fail-to-agree-on-the-basics/|access-date=11 February 2025|work=[[TechCrunch]]}}</ref> which the United States and the United Kingdom decline to sign.<ref>{{Cite news |last=Milmo |first=Dan |date=11 February 2025 |title=US and UK refuse to sign Paris summit declaration on 'inclusive' AI |url=https://www.theguardian.com/technology/2025/feb/11/us-uk-paris-ai-summit-artificial-intelligence-declaration |access-date=2025-02-11 |work=[[The Guardian]]}}</ref>
|-
|In May, [[Microsoft]] announces [[NLWeb]], and [[Google]] launches [[Project Mariner]] and [[Google Flow]].
|-
|In May, the United States and the United Arab Emirates announce [[Stargate UAE]], a plan to build the largest AI campus outside the United States, in Abu Dhabi.<ref>{{Cite news |last1=Maccioni |first1=Federico |last2=Saini |first2=Manya |last3=Saba |first3=Yousef |date=2025-05-15 |title=UAE to build biggest AI campus outside US in Trump deal, bypassing past China worries |url=https://www.reuters.com/world/china/uae-set-deepen-ai-links-with-united-states-after-past-curbs-over-china-2025-05-15/ |access-date=2025-06-05 |work=Reuters |language=en}}</ref>
|-
|In June, Reuters reports that Amazon is preparing to test humanoid robots for package deliveries.<ref>{{Cite news |date=2025-06-05 |title=Amazon prepares to test humanoid robots for deliveries, The Information reports |url=https://www.reuters.com/business/retail-consumer/amazon-prepares-test-humanoid-robots-package-deliveries-information-reports-2025-06-05/ |access-date=2025-06-05 |work=Reuters |language=en}}</ref>
|-
|In July, US President Trump signs the [[One Big Beautiful Bill Act]]; a proposed ten-year moratorium on state-level AI regulation is removed from the bill before passage.
|}
==See also==
* [[Timeline of machine translation]]
* [[Timeline of machine learning]]
==Notes==
{{notelist}}
==References==
{{Reflist}}
==Sources==
* {{Citation | first = Bruce G. | last = Buchanan | year = 2005 | title = A (Very) Brief History of Artificial Intelligence | magazine = AI Magazine<!-- WINTER --> | pages = 53–60 | url = http://www.aaai.org/AITopics/assets/PDF/AIMag26-04-016.pdf | access-date = 30 August 2007 | url-status = dead | archive-url = https://web.archive.org/web/20070926023314/http://www.aaai.org/AITopics/assets/PDF/AIMag26-04-016.pdf | archive-date = 26 September 2007 | df = dmy-all }}
* {{Cite book
| last = Christian | first = Brian | author-link = Brian Christian
| title = [[The Alignment Problem]]: Machine learning and human values
| publisher = W. W. Norton & Company
| year = 2020
| isbn = 978-0-393-86833-3 |oclc=1233266753
}}
* {{Crevier 1993}}
* {{Cite web
| last1 = Linsky | first1 = Bernard
| last2 = Irvine | first2 = Andrew David | author2-link = Andrew David Irvine
| date = Spring 2022
| title = Principia Mathematica
| website = The Stanford Encyclopedia of Philosophy
| editor = Edward N. Zalta
| url = https://plato.stanford.edu/archives/spr2022/entries/principia-mathematica
}}
* {{Citation
| last = McCorduck | first = Pamela | author-link = Pamela McCorduck
| year = 2004
| title = Machines Who Think
| publisher=A. K. Peters, Ltd. | ___location=Natick, MA
| edition=2nd
| isbn=978-1-56881-205-2
}}
* {{cite book |last=Needham |first=Joseph |author-link = Joseph Needham |date=1986 |title=Science and Civilization in China: Volume 2 |___location=Taipei |publisher=Caves Books Ltd}}
* {{Cite book
| first1 = Stuart J. | last1 = Russell | author1-link = Stuart J. Russell
| first2 = Peter | last2 = Norvig | author2-link = Peter Norvig
| title=[[Artificial Intelligence: A Modern Approach]]
| year = 2021
| edition = 4th
| isbn = 978-0134610993
| lccn = 20190474
| publisher = Pearson | ___location = Hoboken
}}
* {{Citation | doi = 10.1147/rd.33.0210 | last = Samuel | first = Arthur L. | date = July 1959 | title = Some studies in machine learning using the game of checkers | journal = IBM Journal of Research and Development | volume = 3 | issue = 3 | pages = 210–219 | author-link = Arthur Samuel (computer scientist) | url = http://domino.research.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74?OpenDocument | access-date = 20 August 2007 | citeseerx = 10.1.1.368.2254 | s2cid = 2126705 | archive-date = 3 March 2016 | archive-url = https://web.archive.org/web/20160303191010/http://domino.research.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74?OpenDocument | url-status = dead }}
* {{cite web
| title = Annotated History of Modern AI and Deep Learning
| last = Schmidhuber | first = Jürgen | author-link = Jürgen Schmidhuber
| year = 2022
| url = https://people.idsia.ch/~juergen/
}}
* {{citation
| first = Matteo | last = Wong
| title = ChatGPT Is Already Obsolete
| date = 19 May 2023
| magazine = The Atlantic
| url = https://www.theatlantic.com/technology/archive/2023/05/ai-advancements-multimodal-models/674113/
}}
==Further reading==
<!-- These references are almost entirely unused. I am still researching what happened or why they are here -->
* {{Citation | first = David | last = Berlinski | year = 2000 | title =The Advent of the Algorithm| publisher = Harcourt Books |author-link=David Berlinski }}
* {{Citation | first = Rodney | last = Brooks | title = Elephants Don't Play Chess | journal = Robotics and Autonomous Systems | volume=6 | issue = 1–2 | year =1990 | pages = 3–15 | author-link=Rodney Brooks | url=http://people.csail.mit.edu/brooks/papers/elephants.pdf | access-date=30 August 2007 | doi = 10.1016/S0921-8890(05)80025-9| citeseerx = 10.1.1.588.7539 }}
* {{Citation | first = Brad | last = Darrach | title=Meet Shakey, the First Electronic Person | magazine=Life Magazine| date=20 November 1970 | pages = 58–68 }}
* {{Citation | first = J. | last = Doyle | year = 1983 | title = What is rational psychology? Toward a modern mental philosophy | magazine = AI Magazine | volume= 4 | issue =3 |pages = 50–53 }}
* {{Citation |editor1-last=Feigenbaum |editor1-first=Edward |editor2-last=Feldman |editor2-first=Julian |editor1-link=Edward Feigenbaum |title=Computers and thought |date=1963 |publisher=McGraw-Hill |___location=New York |edition=1 |oclc=593742426}}
* {{Citation | first1 = Andreas | last1 = Kaplan | first2 = Michael | last2 = Haenlein | title = Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence | date = 2018 | doi=10.1016/j.bushor.2018.08.004 | volume=62 | journal=Business Horizons | pages=15–25| s2cid = 158433736 }}
* {{Citation | last1=Lenat | first1=Douglas | last2=Guha | first2=R. V.| year = 1989 | title = Building Large Knowledge-Based Systems | publisher = Addison-Wesley| author-link=Douglas Lenat }}
* {{Citation | first = Gerald M. | last = Levitt | title = The Turk, Chess Automaton| publisher = McFarland|year = 2000| isbn = 978-0-7864-0778-1|___location = Jefferson, N.C. }}
* {{Citation | last = Lighthill | first = Professor Sir James | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council|author-link=James Lighthill }}
* {{Citation | last = Lucas | first = John | year = 1961 | url = http://users.ox.ac.uk/~jrlucas/Godel/mmg.html | title = Minds, Machines and Gödel | author-link = John Lucas (philosopher) | access-date = 24 July 2007 | archive-date = 19 August 2007 | archive-url = https://web.archive.org/web/20070819165214/http://users.ox.ac.uk/~jrlucas/Godel/mmg.html | url-status = dead }}
* {{Citation | last1 = McCullough | first1 = W. S. | last2 = Pitts | first2 = W. | year = 1943 | title = A logical calculus of the ideas immanent in nervous activity | journal= Bulletin of Mathematical Biophysics | volume= 5 | issue = 4 | pages = 115–127 | author-link = Warren McCullough | doi = 10.1007/BF02478259 | author-link2 = Walter Pitts}}
* {{Citation | last = Minsky | first = Marvin | year = 1974 | title = A Framework for Representing Knowledge | url = http://web.media.mit.edu/~minsky/papers/Frames/frames.html | author-link = Marvin Minsky | access-date = 27 December 2007 | archive-date = 7 January 2021 | archive-url = https://web.archive.org/web/20210107162402/http://web.media.mit.edu/~minsky/papers/Frames/frames.html | url-status = dead }}
* {{Citation | first = Hans | last = Moravec | year = 1976 | url= http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | author-link=Hans Moravec }}
* {{Citation | first = Hans | last = Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press | author-link =Hans Moravec }}
* {{Citation | last = United States National Research Council |chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999| author-link=United States National Research Council }}
* {{Citation | last = Pearl | first = J. | year = 1988 | title = Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference | publisher=Morgan Kaufmann | author-link=Judea Pearl | ___location=San Mateo, California}}
* {{Russell Norvig 2003}}
* {{Citation | last1 =Simon | year = 1958 | first1 = H. A. | last2= Newell | first2 = Allen | title = Heuristic Problem Solving: The Next Advance in Operations Research | journal =Operations Research| volume=6 | author-link=Herbert A. Simon | issue =1 |author2-link=Allen Newell | doi =10.1287/opre.6.1.1 | page =1 }}
* {{Citation | first = H. A. | last= Simon| year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row }}
* {{Citation | last = Turing | first = Alan | author-link = Alan Turing | title=On Computable Numbers, with an Application to the Entscheidungsproblem | journal=Proceedings of the London Mathematical Society | series=2 | issue = 42 | date=1936–1937 | pages = 230–265 }}
* {{Citation | last = Turing | first = Alan | author-link = Alan Turing | title = Computing machinery and intelligence | journal = Mind | volume = LIX | issue = 236 | date = October 1950 | pages = 433–60 | url = http://loebner.net/Prizef/TuringArticle.html | doi = 10.1093/mind/LIX.236.433 | url-status = dead | archive-url = https://web.archive.org/web/20080702224846/http://loebner.net/Prizef/TuringArticle.html | archive-date = 2 July 2008 | df = dmy-all }}
* {{Citation | first = Joseph | last = Weizenbaum | title = Computer Power and Human Reason | publisher = W.H. Freeman & Company | year = 1976 |author-link=Joseph Weizenbaum }}
==External links==
* {{citation |url=https://www.techtarget.com/searchenterpriseai/tip/The-history-of-artificial-intelligence-Complete-AI-timeline |title=The history of artificial intelligence: Complete AI timeline |work= Enterprise AI |date=16 Aug 2023 |publisher=TechTarget }}
* {{citation |url=http://aitopics.org/misc/brief-history |title=Brief History (timeline) |work= AI Topics |publisher=Association for the Advancement of Artificial Intelligence }}
{{Timelines of computing}}
{{DEFAULTSORT:Timeline Of Artificial Intelligence}}
[[Category:Computing timelines|Artificial intelligence]]
[[Category:History of artificial intelligence| ]]
[[Category:Contemporary history]]