The '''history of natural language processing''' describes the advances in [[natural language processing]].
There is some overlap with the [[history of machine translation]], and the [[history of artificial intelligence]].
==Early history==
The history of machine translation dates back to the seventeenth century, when philosophers such as [[Leibniz]] and [[Descartes]] put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.
The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by [[Georges Artsrouni]], was simply an automatic bilingual dictionary using [[paper tape]]. The other proposal, by [[Peter Troyanskii]], a [[Russians|Russian]], was more detailed. It included both the bilingual dictionary and a method for dealing with grammatical roles between languages, based on [[Esperanto]].
In 1950, [[Alan Turing]] published his famous article "Computing Machinery and Intelligence"<ref>{{Harv|Turing|1950}}</ref> which proposed what is now called the [[Turing test]] as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably, on the basis of the conversational content alone, between the program and a real human.
==Implementations==
*The [[Georgetown-IBM experiment|Georgetown experiment]] in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three to five years, machine translation would be a solved problem.<ref>Hutchins, J. (2005)</ref>
An early success was [[Daniel Bobrow]]'s program [[STUDENT (computer program)|STUDENT]], which could solve high school algebra word problems.<ref>{{Harvnb|McCorduck|2004|p=286}}, {{Harvnb|Crevier|1993|pp=76−79}}, {{Harvnb|Russell|Norvig|2003|p=19}}</ref>
However, real progress was much slower, and after the [[ALPAC|ALPAC report]] in 1966, which found that ten years of research had failed to fulfill expectations, funding was dramatically reduced.
*[[ELIZA]], a simulation of a [[Rogerian psychotherapy|Rogerian psychotherapist]], was written by [[Joseph Weizenbaum]] between 1964 and 1966.
Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction. When the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a [[canned response]] or repeated back what was said to her, rephrasing her response with a few grammar rules.<ref>{{Harvnb|McCorduck|2004|pp=291–296}}, {{Harvnb|Crevier|1993|pp=134−139}}</ref>
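The keyword-and-template approach described above can be sketched in a few lines. This is a minimal illustration, not Weizenbaum's original DOCTOR script; the specific patterns and responses here are invented for the example.

```python
import re

# Illustrative ELIZA-style rules: a keyword pattern paired with a
# response template that echoes part of the user's input back as a
# question. These rules are examples, not the original ELIZA script.
RULES = [
    (re.compile(r"my (.+) hurts", re.IGNORECASE),
     "Why do you say your {0} hurts?"),
    (re.compile(r"i am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
]

# Generic fallback when no keyword matches, as the article describes.
GENERIC = "Please tell me more."

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1))
    return GENERIC

print(respond("My head hurts"))       # Why do you say your head hurts?
print(respond("The weather is nice")) # Please tell me more.
```

Note that the program attaches no meaning to "head" or "hurts"; it only rearranges the matched text, which is exactly why ELIZA "had no idea what she was talking about."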
A [[semantic net]] represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by [[Ross Quillian]]<ref>{{Harvnb|Crevier|1993|pp=79−83}}</ref> and the most successful (and controversial) version was [[Roger Schank]]'s [[Conceptual Dependency]].<ref>{{Harvnb|Crevier|1993|pp=164−172}}</ref>
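The node-and-link structure of a semantic net can be modeled directly as labeled triples. The sketch below is a generic illustration of the data structure; the concept names and relations are the article's examples, and the class design is an assumption, not any historical system's implementation.

```python
# A minimal semantic net: concepts are nodes, and each labeled
# relation is a directed link stored as a (source, relation, target)
# triple. This is an illustrative sketch, not Quillian's or Schank's
# actual representation.
class SemanticNet:
    def __init__(self):
        self.links = []  # list of (source, relation, target) triples

    def add(self, source, relation, target):
        self.links.append((source, relation, target))

    def related(self, source, relation):
        # Follow all links with the given label out of a concept node.
        return [t for s, r, t in self.links
                if s == source and r == relation]

net = SemanticNet()
net.add("house", "has-a", "door")
net.add("house", "has-a", "roof")
net.add("door", "is-a", "entrance")

print(net.related("house", "has-a"))  # ['door', 'roof']
```

Queries like "what does a house have?" then reduce to following labeled links, which is the core retrieval operation a semantic net supports.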