Turing test: Difference between revisions

===ELIZA and PARRY===
In 1966, [[Joseph Weizenbaum]] created a program which appeared to pass the Turing test. The program, known as [[ELIZA]], worked by examining a user's typed comments for keywords. If a keyword was found, a rule that transformed the user's comment was applied, and the resulting sentence was returned. If no keyword was found, ELIZA responded either with a generic riposte or by repeating one of the user's earlier comments.{{sfn|Weizenbaum|1966|p=37}} In addition, Weizenbaum developed ELIZA to replicate the behaviour of a [[person-centered psychotherapy|Rogerian psychotherapist]], allowing ELIZA to be "free to assume the pose of knowing almost nothing of the real world."{{sfn|Weizenbaum|1966|p=42}} With these techniques, Weizenbaum's program was able to fool some people into believing that they were talking to a real person, with some subjects being "very hard to convince that ELIZA [...] is ''not'' human."{{sfn|Weizenbaum|1966|p=42}} Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing test,{{sfn|Weizenbaum|1966|p=42}}{{sfn|Thomas|1995|p=112}} even though this view is highly contentious (see [[Turing test#Naïveté of interrogators and the anthropomorphic fallacy|below]]).
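The keyword-and-transformation mechanism described above can be sketched as follows. This is a minimal illustration in the spirit of ELIZA, not Weizenbaum's actual code; the keyword patterns, transformation rules, and generic ripostes are invented for the example:

```python
import re
import random

# Hypothetical keyword rules: each maps a keyword pattern to a
# transformation of the user's comment. Weizenbaum's actual DOCTOR
# script used a richer rule format; these are illustrative only.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     lambda m: f"How long have you been {m.group(1)}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     lambda m: f"Tell me more about your {m.group(1)}."),
]

# Generic ripostes used when no keyword matches, reflecting the
# Rogerian pose of "knowing almost nothing of the real world".
GENERIC = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(comment, history):
    """Reply with a transformed sentence if a keyword rule fires;
    otherwise fall back to a generic riposte or repeat an earlier
    comment from the conversation history."""
    for pattern, transform in RULES:
        match = pattern.search(comment)
        if match:
            return transform(match)
    if history and random.random() < 0.5:
        return f'Earlier you said "{random.choice(history)}".'
    return random.choice(GENERIC)

# A short scripted exchange showing each branch of the logic.
history = []
for comment in ["I am feeling anxious", "It concerns my job", "Nothing else"]:
    print(f"> {comment}\nELIZA: {respond(comment, history)}")
    history.append(comment)
```

Because the program never models what the words mean, a handful of surface-level rules like these can sustain a superficially coherent conversation, which is what made ELIZA's apparent success at the test so contentious.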
 
[[Kenneth Colby]] created [[PARRY]] in 1972, a program described as "ELIZA with attitude".{{sfn|Bowden|2006|p=370}} It attempted to model the behaviour of a [[paranoia|paranoid]] [[schizophrenic]], using an approach similar to (if more advanced than) that employed by Weizenbaum. To validate the work, PARRY was tested in the early 1970s using a variation of the Turing test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through [[teleprinter]]s. Another group of 33 psychiatrists was shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs.{{sfn|Colby|Hilf|Weber|Kraemer|1972|p=220}} The psychiatrists were able to make the correct identification only 52 percent of the time – a figure consistent with random guessing.{{sfn|Colby|Hilf|Weber|Kraemer|1972|p=220}}