{{Short description|none}}
{{update|date=April 2023}}
The '''history of natural language processing''' describes the advances of [[natural language processing]] ([[Outline of natural language processing]]). There is some overlap with the [[history of machine translation]], the [[history of speech recognition]], and the [[history of artificial intelligence]].
 
== Early history ==
The history of machine translation dates back to the seventeenth century, when philosophers such as [[Gottfried Wilhelm Leibniz|Leibniz]] and [[Descartes]] put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.
 
The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by [[Georges Artsrouni]], was simply an automatic bilingual dictionary using [[paper tape]]. The other proposal, by [[Peter Troyanskii]], a [[Russians|Russian]], was more detailed: it included both a bilingual dictionary and a method for dealing with grammatical roles between languages, based on [[Esperanto]].<ref>{{cite web |title=Georges Artsrouni |url=https://machinetranslate.org/georges-artsrouni |website=machinetranslate.org |access-date=July 10, 2025}}</ref><ref>{{Citation
| last1 = Hutchins
| first1 = John
| last2 = Lovtskii
| first2 = Evgenii
| year = 2000
| title = Petr Petrovich Troyanskii (1894–1950): A Forgotten Pioneer of Mechanical Translation
| journal = Machine Translation
| url = https://www.jstor.org/stable/40009018
}}</ref>
 
== Logical period ==
In 1950, [[Alan Turing]] published his famous article "[[Computing Machinery and Intelligence]]" which proposed what is now called the [[Turing test]] as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably&nbsp;— on the basis of the conversational content alone&nbsp;— between the program and a real human.
 
In 1957, [[Noam Chomsky]]'s ''[[Syntactic Structures]]'' revolutionized linguistics with '[[universal grammar]]', a rule-based system of syntactic structures.<ref>{{cite web
| url = http://www.cs.bham.ac.uk/~pjh/sem1a5/pt1/pt1_history.html
| title = SEM1A5 - Part 1 - A brief history of NLP
}}</ref>
 
The [[Georgetown–IBM experiment|Georgetown experiment]] in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three to five years, machine translation would be a solved problem.<ref>Hutchins, J. (2005)</ref> However, real progress was much slower, and after the [[ALPAC|ALPAC report]] in 1966, which found that ten years of research had failed to fulfill expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s, when the first [[statistical machine translation]] systems were developed.
 
Some notably successful NLP systems developed in the 1960s were [[SHRDLU]], a natural language system working in restricted "[[blocks world]]s" with restricted vocabularies, and [[ELIZA]], a simulation of a [[Rogerian psychotherapy|Rogerian psychotherapist]], written by [[Joseph Weizenbaum]] between 1964 and 1966. Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction. When the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?".
 
In 1969 [[Roger Schank]] introduced the [[conceptual dependency theory]] for natural language understanding.<ref>[[Roger Schank]], 1969, ''A conceptual dependency parser for natural language'' Proceedings of the 1969 conference on Computational linguistics, Sång-Säby, Sweden, pages 1-3</ref> This model, partially influenced by the work of [[Sydney Lamb]], was extensively used by Schank's students at [[Yale University]], such as Robert Wilensky, Wendy Lehnert, and [[Janet Kolodner]].
 
In 1970, William A. Woods introduced the [[augmented transition network]] (ATN) to represent natural language input.<ref>Woods, William A (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591–606 [http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED037733&ERICExtSearch_SearchType_0=no&accno=ED037733]</ref> Instead of ''[[phrase structure rules]]'', ATNs used an equivalent set of [[finite-state automata]] that were called recursively. ATNs and their more general format, called "generalized ATNs", continued to be used for a number of years. During the 1970s many programmers began to write "conceptual ontologies", which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert, 1981). During this time, many [[chatterbots]] were written, including [[PARRY]], [[Racter]], and [[Jabberwacky]].
 
== Statistical period ==
{{anchor|Machine learning}}
Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of [[machine learning]] algorithms for language processing. This was due both to the steady increase in computational power resulting from [[Moore's law]] and the gradual lessening of the dominance of [[Noam Chomsky|Chomskyan]] theories of linguistics (e.g. [[transformational grammar]]), whose theoretical underpinnings discouraged the sort of [[corpus linguistics]] that underlies the machine-learning approach to language processing.<ref>Chomskyan linguistics encourages the investigation of "[[corner case]]s" that stress the limits of its theoretical models (comparable to [[pathological (mathematics)|pathological]] phenomena in mathematics), typically created using [[thought experiment]]s, rather than the systematic investigation of typical phenomena that occur in real-world data, as is the case in [[corpus linguistics]]. The creation and use of such [[text corpus|corpora]] of real-world data is a fundamental part of machine-learning algorithms for NLP. In addition, theoretical underpinnings of Chomskyan linguistics such as the so-called "[[poverty of the stimulus]]" argument entail that general learning algorithms, as are typically used in machine learning, cannot be successful in language processing. As a result, the Chomskyan paradigm discouraged the application of such models to language processing.</ref> Some of the earliest-used machine learning algorithms, such as [[decision tree]]s, produced systems of hard if-then rules similar to existing hand-written rules. Increasingly, however, research has focused on [[statistical natural language processing|statistical models]], which make soft, [[probabilistic]] decisions based on attaching [[real-valued]] weights to the features making up the input data. The [[cache language model]]s upon which many [[speech recognition]] systems now rely are examples of such statistical models. Such models are generally more robust when given unfamiliar input, especially input that contains errors (as is very common for real-world data), and produce more reliable results when integrated into a larger system comprising multiple subtasks.
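The sketch below illustrates the kind of "soft" decision such statistical models make: real-valued weights are attached to simple features of the input and combined into probabilities, rather than firing a single hand-written rule. The features, weights, and part-of-speech example are illustrative assumptions only, not taken from any particular historical system.

<syntaxhighlight lang="python">
import math

# Minimal sketch of a "soft" statistical decision: real-valued weights are
# attached to features of the input and combined into probabilities.
# The features, weights, and tags below are illustrative assumptions.
weights = {
    ("prev=the", "NOUN"): 1.8,
    ("prev=to", "VERB"): 1.5,
    ("suffix=ing", "VERB"): 2.1,
    ("word=book", "NOUN"): 0.7,
    ("word=book", "VERB"): 0.3,
}

def tag_probabilities(features, tags=("NOUN", "VERB")):
    """Return P(tag | features) from a weighted sum of feature scores."""
    scores = {t: sum(weights.get((f, t), 0.0) for f in features) for t in tags}
    z = sum(math.exp(s) for s in scores.values())
    return {t: math.exp(s) / z for t, s in scores.items()}

# "the book" favours the noun reading; "to book" favours the verb reading.
print(tag_probabilities(["prev=the", "word=book"]))
print(tag_probabilities(["prev=to", "word=book"]))
</syntaxhighlight>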
 
=== Datasets ===
The emergence of statistical approaches was aided both by the increase in computing power and by the availability of large datasets. At that time, large multilingual corpora were starting to emerge. Notably, some were produced by the [[Parliament of Canada]] and the [[European Union]] as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government.
 
Many of the notable early successes occurred in the field of [[machine translation]]. In 1993, the [[IBM alignment models]] were used for [[statistical machine translation]].<ref name="U4RiN">{{cite journal |last1=Brown |first1=Peter F. |year=1993 |title=The mathematics of statistical machine translation: Parameter estimation |journal=Computational Linguistics |issue=19 |pages=263–311}}</ref> Compared to previous machine translation systems, which were symbolic systems manually coded by computational linguists, these systems were statistical, which allowed them to learn automatically from large [[text corpus|textual corpora]]. However, they did not work well in situations where only small corpora were available, so data-efficient methods continue to be an area of research and development.
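A minimal sketch of the simplest of these models, IBM Model 1, is shown below: an [[expectation–maximization algorithm|expectation–maximization]] loop estimates word-translation probabilities directly from sentence-aligned text. The toy sentence pairs and the fixed number of iterations are illustrative assumptions; the historical systems were trained on millions of sentence pairs.

<syntaxhighlight lang="python">
from collections import defaultdict

# Minimal sketch of IBM Model 1 word alignment: expectation-maximization
# estimates word-translation probabilities t(f|e) from sentence-aligned text.
# The toy English-German sentence pairs and iteration count are illustrative.
pairs = [
    (["the", "house"], ["das", "haus"]),
    (["the", "book"], ["das", "buch"]),
    (["a", "book"], ["ein", "buch"]),
]

foreign_vocab = {f for _, fs in pairs for f in fs}
t = defaultdict(lambda: 1.0 / len(foreign_vocab))   # uniform initialisation

for _ in range(10):                                  # EM iterations
    count = defaultdict(float)                       # expected counts c(f, e)
    total = defaultdict(float)                       # expected counts c(e)
    for es, fs in pairs:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)
            for e in es:
                delta = t[(f, e)] / norm             # E-step: soft alignment
                count[(f, e)] += delta
                total[e] += delta
    for (f, e), c in count.items():                  # M-step: re-estimate t(f|e)
        t[(f, e)] = c / total[e]

print(round(t[("haus", "house")], 3))                # approaches 1.0 with training
</syntaxhighlight>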
 
In 2001, a one-billion-word text corpus, scraped from the Internet and referred to as "very very large" at the time, was used for [[word-sense disambiguation]].<ref name="2001_very_very_large_corpora">{{cite journal |last1=Banko |first1=Michele |last2=Brill |first2=Eric |date=2001 |title=Scaling to very very large corpora for natural language disambiguation |journal=Proceedings of the 39th Annual Meeting on Association for Computational Linguistics - ACL '01 |___location=Morristown, NJ, USA |publisher=Association for Computational Linguistics |pages=26–33 |doi=10.3115/1073012.1073017 |s2cid=6645623 |doi-access=free}}</ref>
 
To take advantage of large, unlabelled datasets, algorithms were developed for [[unsupervised learning|unsupervised]] and [[self-supervised learning]]. Such algorithms are able to learn from data that has not been hand-annotated with the desired answers, or using a combination of annotated and non-annotated data. Generally, this task is much more difficult than [[supervised learning]], and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the [[World Wide Web]]), which can often make up for the inferior results.
 
== Neural period ==
[[File:A_development_of_natural_language_processing_tools.png|thumb|Timeline of natural language processing models]]
Neural [[Language Model|language models]] were developed in the 1990s. In 1990, the [[Recurrent neural network#Elman networks and Jordan networks|Elman network]], using a [[recurrent neural network]], encoded each word in a training set as a vector, called a [[word embedding]], and the whole vocabulary as a [[vector database]], allowing it to perform tasks such as sequence prediction that are beyond the power of a simple [[multilayer perceptron]]. A shortcoming of the static embeddings was that they did not differentiate between multiple meanings of [[Homonym|homonyms]].<ref name="1990_ElmanPaper">{{cite journal |last=Elman |first=Jeffrey L. |date=March 1990 |title=Finding Structure in Time |url=http://doi.wiley.com/10.1207/s15516709cog1402_1 |journal=Cognitive Science |volume=14 |issue=2 |pages=179–211 |doi=10.1207/s15516709cog1402_1 |s2cid=2763403|url-access=subscription }}</ref>
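The sketch below gives a minimal illustration (not Elman's original implementation) of such a recurrent architecture: each word is looked up as a vector, a hidden state is updated word by word, and the final state is used to predict the next word. The toy vocabulary, layer sizes, and random weights are assumptions for illustration; training by backpropagation through time is omitted.

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of an Elman-style recurrent network for next-word
# prediction. The toy vocabulary, layer sizes, and random weights are
# illustrative; training by backpropagation through time is omitted.
vocab = ["the", "cat", "sat", "on", "mat"]
V, E, H = len(vocab), 8, 16                # vocabulary, embedding, hidden sizes

rng = np.random.default_rng(0)
embeddings = rng.normal(0.0, 0.1, (V, E))  # one vector ("word embedding") per word
W_xh = rng.normal(0.0, 0.1, (E, H))        # input-to-hidden weights
W_hh = rng.normal(0.0, 0.1, (H, H))        # recurrent (context) weights
W_hy = rng.normal(0.0, 0.1, (H, V))        # hidden-to-output weights

def predict_next(words):
    """Run a word sequence through the network and return P(next word)."""
    h = np.zeros(H)                        # hidden state ("context units")
    for w in words:
        x = embeddings[vocab.index(w)]
        h = np.tanh(x @ W_xh + h @ W_hh)   # state depends on the current word
                                           # and the previous state
    logits = h @ W_hy
    p = np.exp(logits - logits.max())      # softmax over the vocabulary
    return p / p.sum()

probs = predict_next(["the", "cat", "sat"])
print(dict(zip(vocab, probs.round(3))))
</syntaxhighlight>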
 
Yoshua Bengio and his co-authors developed the first neural probabilistic language model in 2000.<ref>{{Citation
| last = Bengio
| first = Yoshua
| author-link = Yoshua Bengio
| title = A Neural Probabilistic Language Model
| journal = Journal of Machine Learning Research
| volume = 3
| date = 2003
| pages = 1137–1155
| doi = 10.1162/153244303322533223
| doi-access = free
}}</ref>
 
In recent years, advancements in deep learning and large language models have significantly enhanced the capabilities of natural language processing, leading to widespread applications in areas such as healthcare, customer service, and content generation.<ref>{{Cite news |last=Gruetzemacher |first=Ross |date=2022-04-19 |title=The Power of Natural Language Processing |url=https://hbr.org/2022/04/the-power-of-natural-language-processing |access-date=2024-12-07 |work=Harvard Business Review |issn=0017-8012}}</ref>
 
==Software==
 
{| class="wikitable"
! style="background-color:#ECE9EF;" | Software
! style="background-color:#FFF6D6;" | Year
! Creator
! Description
! style="background-color:#EEF6D6;" | Reference
|-
|'''[[Georgetown–IBM experiment|Georgetown experiment]]'''
|1954
|[[Georgetown University]] and [[IBM]]
|Fully automatic translation of more than sixty Russian sentences into English.
|
|-
|'''[[MOPTRANS]]'''<ref>{{cite book |last1=Kolodner |first1=Janet L. |last2=Riesbeck |first2=Christopher K. |date=2014 |title=Experience, Memory, and Reasoning |___location=New York |publisher=Psychology Press}}</ref>
|1984
|Lytinen
|
|-
|
|1987
|Hirst
|
|-
|'''[[Dr. Sbaitso]]'''
|1991
|[[Creative Labs]]
|
|-
|'''[[Watson (computer)|Watson]]'''
|2011
|[[IBM]]
|A question answering system that won the [[Jeopardy!]] contest, defeating the best human players in February 2011.
|-
|'''[[Siri]]'''
|2011
|[[Apple Inc.|Apple]]
|A virtual assistant developed by Apple.
|-
|'''[[Cortana (virtual assistant)|Cortana]]'''
|2014
|[[Microsoft]]
|A virtual assistant developed by Microsoft.
|-
|'''[[Amazon Alexa]]'''
|2014
|[[Amazon (company)|Amazon]]
|A virtual assistant developed by Amazon.
|-
|'''[[Google Assistant]]'''
|2016
|[[Google]]
|A virtual assistant developed by Google.
|
|}
==References==
{{Reflist}}
 
==Bibliography==
* {{Crevier 1993}}
* {{Citation | last=McCorduck | first=Pamela | title = Machines Who Think | year = 2004 | edition=2nd | ___location=Natick, MA | publisher=A. K. Peters, Ltd. | isbn=978-1-56881-205-2 | oclc=52197627}}.
* {{Russell Norvig 2003}}.
 
[[Category:History of artificial intelligence|natural language processing]]
[[Category:Natural language processing]]
[[Category:History of linguistics|natural language processing]]
[[Category:History of software|natural language processing]]
[[Category:Software topical history overviews|natural language processing]]