"'''Computing Machinery and Intelligence'''" is a seminal paper written by [[Alan Turing]] on the topic of [[artificial intelligence]]. The paper, published in 1950 in ''[[Mind (journal)|Mind]]'', was the first to introduce his concept of what is now known as the [[Turing test]] to the general public.
 
Turing's paper considers the question "Can machines think?" Turing says that since the words "think" and "machine" cannot be clearly defined, we should "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."<ref>{{Harvnb|Turing|1950|p=433}}</ref> To do this, he must first find a simple and unambiguous idea to replace the word "think"; second, he must explain exactly which "machines" he is considering; and finally, armed with these tools, he formulates a new question, related to the first, that he believes he can answer in the affirmative.
 
==Turing's test==
{{Main|Turing test}}
[[File:Turing Test version 3.png|thumb|The "standard interpretation" of the Turing Test, in which the interrogator is tasked with trying to determine which player is a computer and which is a human]]
 
Rather than trying to determine if a machine is thinking, Turing suggests we should ask if the machine can win a game, called the "[[Turing test|Imitation Game]]". The original Imitation Game, as Turing described it, is a simple party game involving three players. Player A is a man, player B is a woman, and player C (who plays the role of the interrogator) can be of either sex. In the Imitation Game, player C is unable to see either player A or player B (and knows them only as X and Y), and can communicate with them only through written notes or any other form that does not give away any details about their gender. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one.<ref>{{Citation |last1=Oppy |first1=Graham |title=The Turing Test |date=2021 |url=https://plato.stanford.edu/archives/win2021/entries/turing-test/ |encyclopedia=The Stanford Encyclopedia of Philosophy |editor-last=Zalta |editor-first=Edward N. |access-date=2023-08-06 |edition=Winter 2021 |publisher=Metaphysics Research Lab, Stanford University |last2=Dowe |first2=David }}</ref>
Turing proposes a variation of this game that involves the computer: {{' "}}What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?{{" '}}.<ref>{{Harvnb|Turing|1950|p=434}}</ref>
So the modified game becomes one that involves three participants in isolated rooms: a computer (which is being tested), a human, and a (human) judge. The human judge can converse with both the human and the computer by typing into a terminal. Both the computer and the human try to convince the judge that they are the human. If the judge cannot consistently tell which is which, then the computer wins the game.<ref>This describes the simplest version of the test. For a more detailed discussion, see [[Turing test#Versions|Versions of the Turing test]].</ref>
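The modified game described above can be sketched as a short simulation. This is an illustrative sketch only, not anything from Turing's paper: the human, machine, and judge callables are hypothetical stand-ins for the three participants.

```python
import random

def imitation_game(human, machine, judge, questions, rng=random):
    """One round of the simplified machine-vs-human imitation game.

    `human` and `machine` are callables mapping a question to a typed
    answer. They are hidden behind the labels "X" and "Y", assigned at
    random; `judge` sees only the labelled transcript and returns the
    label it believes belongs to the human. True means the judge was
    right (i.e. the machine failed to deceive this time).
    """
    labels = ["X", "Y"]
    rng.shuffle(labels)
    players = dict(zip(labels, (human, machine)))
    transcript = [(q, {label: players[label](q) for label in sorted(players)})
                  for q in questions]
    return players[judge(transcript)] is human
```

If the judge's success rate over many rounds is no better than chance, the machine has won the game in Turing's sense.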
 
Researchers in the United Kingdom had been exploring "machine intelligence" for up to ten years prior to the founding of the field of artificial intelligence ([[Artificial intelligence|AI]]) research in 1956.<ref>The [[Dartmouth workshop]] of 1956 is widely considered the "birth of AI". {{Harv|Crevier|1993|p=49}}</ref> It was a common topic among the members of the [[Ratio Club]], an informal group of British [[cybernetics]] and [[electronics]] researchers that included Alan Turing. Turing, in particular, had been pursuing the notion of machine intelligence since at least 1941, and one of the earliest known mentions of "computer intelligence" was made by him in 1947.<ref>[[#{{harvid|Turing|1948}}|"Intelligent Machinery" (1948)]] was not published by Turing, and did not see publication until 1968 in:
 
* {{Citation |last1=Evans |first1=C. R. |last2=Robertson |first2=A. D. J. |title=Cybernetics: Key Papers |year=1968 |publisher=University Park Press}}</ref>
 
As [[Stevan Harnad]] notes,<ref>{{Citation |chapter-url=http://eprints.ecs.soton.ac.uk/12954/ |first=Stevan |last=Harnad |year=2008 |chapter=The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence |editor1-last=Epstein |editor1-first=Robert |editor2-last=Peters |editor2-first=Grace |title=The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer |publisher=Kluwer }}</ref> the question has become "Can machines do what we (as thinking entities) can do?" In other words, Turing is no longer asking whether a machine can "think"; he is asking whether a machine can ''act'' indistinguishably<ref>{{Citation |url=http://cogprints.org/2615/ |first=Stevan |last=Harnad |year=2001 |title=Minds, Machines, and Turing: The Indistinguishability of Indistinguishables |journal=Journal of Logic, Language and Information |volume=9 |issue=4 |pages=425–445 |postscript=. |doi=10.1023/A:1008315308862 |s2cid=1911720 |url-access=subscription }}</ref> from the way a thinker acts. This question avoids the difficult philosophical problem of pre-defining the verb "to think" and focuses instead on the performance capacities that being able to think makes possible, and how a causal system can generate them.
 
Since Turing introduced his test, it has been both highly influential and widely criticised, and has become an important concept in the [[philosophy of artificial intelligence]].<ref>{{cite conference |last=Swiechowski |first=Maciej |year=2020 |title=Game AI Competitions: Motivation for the Imitation Game-Playing Competition |url=https://annals-csis.org/proceedings/2020/pliks/126.pdf |publisher=IEEE Publishing |pages=155–160 |doi=10.15439/2020F126 |isbn=978-83-955416-7-4 |archive-url=https://web.archive.org/web/20210126184536/https://annals-csis.org/proceedings/2020/pliks/126.pdf |archive-date=26 January 2021 |access-date=8 September 2020 |ref=ieee_fedcsis |doi-access=free |book-title=Proceedings of the 2020 Federated Conference on Computer Science and Information Systems |s2cid=222296354 |url-status=live}}</ref><ref>{{Citation |last=Withers |first=Steven |title=Flirty Bot Passes for Human |date=11 December 2007 |url=http://www.itwire.com/your-it-news/home-it/15748-flirty-bot-passes-for-human |work=iTWire |access-date=10 February 2010 |archive-url=https://web.archive.org/web/20171004140133/https://www.itwire.com/your-it-news/home-it/15748-flirty-bot-passes-for-human |url-status=live |archive-date=4 October 2017}}</ref> Some of its criticisms, such as [[John Searle]]'s [[Chinese room]], are themselves controversial.<ref>{{Citation |last=Williams |first=Ian |title=Online Love Seekers Warned Flirt Bots |date=10 December 2007 |url=http://www.v3.co.uk/vnunet/news/2205441/online-love-seekers-warned-flirt-bots |work=V3 |access-date=10 February 2010 |archive-url=https://web.archive.org/web/20100424101329/http://www.v3.co.uk/vnunet/news/2205441/online-love-seekers-warned-flirt-bots |url-status=live |archive-date=24 April 2010}}</ref><ref name="fortune lambda">{{cite news |author=Jeremy Kahn |date=June 13, 2022 |title=A.I. experts say the Google researcher's claim that his chatbot became 'sentient' is ridiculous—but also highlights big problems in the field |work=Fortune |url=https://fortune.com/2022/06/13/google-ai-researchers-sentient-chatbot-claims-ridiculed-by-experts/ |url-status=live |access-date=13 June 2022 |archive-url=https://web.archive.org/web/20220613132958/https://fortune.com/2022/06/13/google-ai-researchers-sentient-chatbot-claims-ridiculed-by-experts/ |archive-date=13 June 2022}}</ref> Some have taken Turing's question to have been "Can a computer, communicating over a teleprinter, fool a person into believing it is human?"<ref name="NMR">Wardrip-Fruin, Noah and Nick Montfort, ed (2003). The New Media Reader. The MIT Press. {{ISBN|0-262-23227-8}}.</ref> but it seems clear that Turing was not talking about fooling people but about generating human cognitive capacity.<ref>{{Citation |url=http://cogprints.org/1584/ |first=Stevan |last=Harnad |title=The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion |journal=ACM SIGART Bulletin |volume=3 |issue=4 |year=1992 |pages=9–10 |postscript=. |doi=10.1145/141420.141422 |s2cid=36356326 |url-access=subscription }}</ref>
 
==Digital machines==
#''[[Religious]] Objection'': This states that thinking is a function of man's [[Immortality|immortal]] [[soul]]; therefore, a machine cannot think. "In attempting to construct such machines," wrote Turing, "we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates."
#'' 'Heads in the Sand' Objection'': "The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so." This thinking is popular among intellectuals, who believe that superiority derives from higher intelligence and that [[Existential risk from artificial general intelligence|the possibility of being overtaken is a threat]] (given machines' memory capacity and processing speed, machines exceeding human learning and knowledge capabilities appear highly probable). This objection is a fallacious [[appeal to consequences]], confusing what should not be with what can or cannot be (Wardrip-Fruin, 56).
#''The [[Mathematics|Mathematical]] Objection'': This objection uses mathematical theorems, such as [[Gödel's incompleteness theorem]], to show that there are limits to what questions a computer system based on [[logic]] can answer. Turing suggests that humans are too often wrong themselves, and would be pleased to find evidence of fallibility in a machine. (This argument would be made again by philosopher [[John Lucas (philosopher)|John Lucas]] in 1961 and [[physicist]] [[Roger Penrose]] in 1989, and would later be called the [[Penrose–Lucas argument]].)<ref>{{Harvnb|Lucas|1961}}, {{Harvnb|Penrose|1989}}, {{Harvnb|Hofstadter|1979|pp=471–473,476–477}} and {{Harvnb|Russell|Norvig|2003|pp=949–950}}. Russell and Norvig identify Lucas and Penrose's arguments as being the same one answered by Turing.</ref>
#''Argument From [[Consciousness]]'': This argument, suggested by Professor [[Geoffrey Jefferson]] in his 1949 [[Lister Medal|Lister Oration]] (his acceptance speech for the Lister Medal, awarded in 1948<ref>{{Cite journal |year=1948 |title=Announcements |journal=Nature |volume=162 |issue=4108 |pages=138 |bibcode=1948Natur.162U.138. |doi=10.1038/162138e0 |doi-access=free}}</ref>), states that "not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain."<ref>{{Cite journal |last=Jefferson |first=Geoffrey |date=1949-06-25 |title=The Mind of Mechanical Man |journal=British Medical Journal |volume=1 |issue=4616 |pages=1105–1110 |doi=10.1136/bmj.1.4616.1105 |issn=0007-1447 |pmc=2050428 |pmid=18153422}}</ref> Turing replies that we have no way of knowing that any individual other than ourselves experiences emotions, and that therefore we should accept the test. He adds, "I do not wish to give the impression that I think there is no mystery about consciousness ... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." (This argument, that a computer cannot have ''conscious experiences'' or ''understanding'', would be made in 1980 by philosopher [[John Searle]] in his [[Chinese room]] argument. Turing's reply is now known as the "[[problem of other minds|other minds]] reply". See also [[Philosophy of artificial intelligence#Can a machine have a mind, consciousness and mental states?|Can a machine have a mind?]] in the [[philosophy of AI]].)<ref>{{Harvnb|Searle|1980}} and {{Harvnb|Russell|Norvig|2003|pp=958–960}}, who identify Searle's argument with the one Turing answers.</ref>
#''Arguments from various disabilities''. These arguments all have the form "a computer will never do ''X''". Turing offers a selection:<blockquote>Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.</blockquote>Turing notes that "no support is usually offered for these statements," and that they depend on naive assumptions about how versatile machines may be in the future, or are "disguised forms of the argument from consciousness." He chooses to answer a few of them:
#''Argument from continuity in the nervous system'': Modern [[neurological]] research has shown that the brain is not digital. Even though [[neuron]]s fire in an all-or-nothing pulse, both the exact timing of the pulse and the probability of the pulse occurring have analog components. Turing acknowledges this, but argues that any analog system can be simulated to a reasonable degree of accuracy given enough computing power. ([[Philosopher]] [[Hubert Dreyfus]] would make this argument against "the biological assumption" in 1972.)<ref>{{Harvnb|Dreyfus|1979|p=156}}</ref>
#''Argument from the informality of behaviour'': This argument states that any system governed by laws will be predictable and therefore not truly intelligent. Turing replies that this confuses laws of behaviour with general rules of conduct, and that on a broad enough scale (such as is evident in man), machine behaviour would become increasingly difficult to predict. He argues that our inability to immediately discern such laws does not mean that none exist, writing, "we certainly know of no circumstances under which we could say, 'we have searched enough. There are no such laws.'" ([[Hubert Dreyfus]] would argue in 1972 that human reason and problem solving were not based on formal rules, but instead relied on instincts and awareness that would never be captured in rules. More recent AI research in [[robotics]] and [[computational intelligence]] attempts to find the complex rules that govern our "informal" and unconscious skills of perception, mobility and pattern matching. See [[Dreyfus' critique of AI]].)<ref>{{Harvnb|Dreyfus|1972}}, {{Harvnb|Dreyfus|Dreyfus|1986}}, {{Harvnb|Moravec|1988}} and {{Harvnb|Russell|Norvig|2003|pp=51–52}}, who identify Dreyfus' argument with the one Turing answers.</ref> This rejoinder also includes the [[Turing's Wager]] argument.
#''[[Extra-sensory perception]]'': In 1950, extra-sensory perception was an active area of research and Turing chooses to give ESP the benefit of the doubt, arguing that conditions could be created in which [[Telepathy|mind-reading]] would not affect the test. Turing admitted to "overwhelming statistical evidence" for telepathy, likely referring to early 1940s experiments by [[Samuel Soal]], a member of the [[Society for Psychical Research]].<ref>{{Citation |last=Leavitt |first=David |title=Turing and the paranormal |date=2017-01-26 |url=https://academic.oup.com/book/40646/chapter/348321617 |work=The Turing Guide |access-date=2023-07-23 |publisher=Oxford University Press |language=en |doi=10.1093/oso/9780198747826.003.0042 |isbn=978-0-19-874782-6|url-access=subscription }}</ref>
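Turing's reply to the argument from continuity in the nervous system (that any analog system can be simulated to a reasonable degree of accuracy given enough computing power) can be illustrated with a minimal sketch. This example is not from the paper: a quantized sine wave stands in for an arbitrary analog signal, and the simulation error is bounded by half the quantization step, so it can be made as small as desired.

```python
import math

def quantize(x, step):
    """Round an analog value onto a digital grid with the given step size."""
    return round(x / step) * step

def worst_case_error(step, samples=1000):
    """Largest error made when a discrete-state machine represents the
    analog signal sin(t) using values quantized at the given resolution."""
    return max(abs(math.sin(t / 100) - quantize(math.sin(t / 100), step))
               for t in range(samples))
```

Since the worst-case error never exceeds step/2, refining the grid makes the digital simulation arbitrarily close to the analog original, which is the substance of Turing's reply: the interrogator could not exploit the difference.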
 
==Learning machines==