The history of machine translation dates back to the seventeenth century, when philosophers such as [[Gottfried Wilhelm Leibniz|Leibniz]] and [[Descartes]] put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.
The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by [[Georges Artsrouni]], was simply an automatic bilingual dictionary using [[paper tape]]. The other proposal, by [[Peter Troyanskii]], a Russian, was more detailed. It included both the bilingual dictionary and a method for dealing with grammatical roles between languages, based on [[Esperanto]].
In 1950, [[Alan Turing]] published his famous article "[[Computing Machinery and Intelligence]]" which proposed what is now called the [[Turing test]] as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably — on the basis of the conversational content alone — between the program and a real human.
In 1957, [[Noam Chomsky]]’s ''[[Syntactic Structures]]'' revolutionized linguistics with '[[universal grammar]]', a rule-based system of syntactic structures.<ref>{{cite web
| url = http://www.cs.bham.ac.uk/~pjh/sem1a5/pt1/pt1_history.html
| title = SEM1A5 - Part 1 - A brief history of NLP
}}</ref>
The [[Georgetown–IBM experiment|Georgetown experiment]] in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem.
Some notably successful NLP systems were developed in the 1960s, such as [[SHRDLU]], a natural language system working in restricted "[[blocks world]]s" with restricted vocabularies.
In 1969, [[Roger Schank]] introduced the [[conceptual dependency theory]] for natural language understanding.<ref>[[Roger Schank]], 1969, ''A conceptual dependency parser for natural language'', Proceedings of the 1969 conference on Computational linguistics, Sång-Säby, Sweden, pages 1–3</ref> This model, partially influenced by the work of [[Sydney Lamb]], was extensively used by Schank's students at [[Yale University]], such as Robert Wilensky, Wendy Lehnert, and [[Janet Kolodner]].
In 1970, William A. Woods introduced the [[augmented transition network]] (ATN) to represent natural language input.<ref>Woods, William A (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591–606 [http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED037733&ERICExtSearch_SearchType_0=no&accno=ED037733]</ref> Instead of ''[[phrase structure rules]]'', ATNs used an equivalent set of [[finite-state machine|finite-state automata]] that were called recursively.
{{anchor|Machine learning}}
Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of [[machine learning]] algorithms for language processing.
Many of the notable early successes occurred in the field of [[machine translation]], due especially to work at IBM Research, where successively more complicated statistical models were developed. These systems were able to take advantage of existing multilingual [[text corpus|textual corpora]] that had been produced by the [[Parliament of Canada]] and the [[European Union]] as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government. However, most other systems depended on corpora specifically developed for the tasks they implemented, which was (and often continues to be) a major limitation on their success. As a result, a great deal of research has gone into methods of learning more effectively from limited amounts of data.
! style="background-color:#EEF6D6;" | Reference
|-
|'''[[Georgetown–IBM experiment|Georgetown experiment]]'''
|1954
|[[Georgetown University]] and [[IBM]]
|