History of natural language processing

The history of machine translation dates back to the seventeenth century, when philosophers such as [[Gottfried Wilhelm Leibniz|Leibniz]] and [[Descartes]] put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.
 
The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by [[Georges Artsrouni]], was simply an automatic bilingual dictionary using [[paper tape]]. The other, by the Russian [[Peter Troyanskii]], was more detailed: it included both a bilingual dictionary and a method for dealing with grammatical roles between languages, based on [[Esperanto]].<ref>{{cite web |title=Georges Artsrouni |url=https://machinetranslate.org/georges-artsrouni |website=machinetranslate.org |access-date=July 10, 2025}}</ref><ref>{{Citation
| last1 = Hutchins
| first1 = John
| last2 = Lovtskii
| first2 = Evgenii
| year = 2000
| title = Petr Petrovich Troyanskii (1894–1950): A Forgotten Pioneer of Mechanical Translation
| journal = Machine Translation
| url = https://www.jstor.org/stable/40009018
}}</ref>
 
== Logical period ==
== Neural period ==
[[File:A_development_of_natural_language_processing_tools.png|thumb|Timeline of natural language processing models]]
Neural [[Language Model|language models]] were developed in the 1990s. In 1990, the [[Recurrent neural network#Elman networks and Jordan networks|Elman network]], built on a [[recurrent neural network]], encoded each word in a training set as a vector, called a [[word embedding]], and the whole vocabulary as a [[vector database]], allowing it to perform sequence-prediction tasks that are beyond the power of a simple [[multilayer perceptron]]. A shortcoming of these static embeddings was that they did not distinguish between the multiple meanings of [[Homonym|homonyms]].<ref name="1990_ElmanPaper">{{cite journal |last=Elman |first=Jeffrey L. |date=March 1990 |title=Finding Structure in Time |url=http://doi.wiley.com/10.1207/s15516709cog1402_1 |journal=Cognitive Science |volume=14 |issue=2 |pages=179–211 |doi=10.1207/s15516709cog1402_1 |s2cid=2763403|url-access=subscription }}</ref>
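The mechanism can be illustrated with a minimal sketch (the vocabulary size, hidden-layer width, and toy word sequence below are illustrative assumptions, not Elman's original configuration): the recurrent "context" connections let the hidden state carry a trace of all earlier words, so the predicted distribution over the next word depends on the sequence as a whole rather than on the current word alone.

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of an Elman-style recurrent network for next-word
# prediction. Sizes and weights here are illustrative assumptions.
rng = np.random.default_rng(0)
vocab_size, hidden_size = 10, 8

W_xh = rng.normal(scale=0.1, size=(hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # context (recurrent) weights
W_hy = rng.normal(scale=0.1, size=(vocab_size, hidden_size))   # hidden -> output

def step(word_id, h_prev):
    """Consume one word; return the new hidden state and a
    probability distribution over the next word."""
    x = np.zeros(vocab_size)
    x[word_id] = 1.0                        # one-hot encoding of the input word
    h = np.tanh(W_xh @ x + W_hh @ h_prev)   # hidden state mixes input with prior context
    logits = W_hy @ h
    e = np.exp(logits - logits.max())       # softmax over the vocabulary
    return h, e / e.sum()

h = np.zeros(hidden_size)                   # empty context at the start of a sequence
for w in [1, 4, 2]:                         # a toy sequence of word ids
    h, next_word_probs = step(w, h)
</syntaxhighlight>

Training, omitted here, adjusts the three weight matrices by backpropagation through time so that the predicted distribution assigns high probability to the word that actually follows.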
 
Yoshua Bengio and his co-authors developed the first neural probabilistic language model in 2000.<ref>{{Citation
| last = Bengio
| first = Yoshua
| author-link = Yoshua Bengio
| title = A Neural Probabilistic Language Model
| journal = Journal of Machine Learning Research
| volume = 3
| date = 2003
| pages = 1137–1155
| doi = 10.1162/153244303322533223
| doi-access = free
}}</ref>
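In outline, the model maps each of the preceding <math>n-1</math> words to a learned embedding vector, concatenates those vectors into a single input <math>x</math>, and scores every candidate next word with a [[softmax function|softmax]] over the vocabulary; the equation below follows the notation of the 2003 journal version of the paper:

<math>P(w_t \mid w_{t-1}, \ldots, w_{t-n+1}) = \frac{e^{y_{w_t}}}{\sum_i e^{y_i}}, \qquad y = b + Wx + U\tanh(d + Hx)</math>

where the embedding matrix and the weights <math>W</math>, <math>U</math>, <math>H</math> and biases <math>b</math>, <math>d</math> are all learned jointly by gradient descent.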
 
In recent years, advancements in deep learning and large language models have significantly enhanced the capabilities of natural language processing, leading to widespread applications in areas such as healthcare, customer service, and content generation.<ref>{{Cite news |last=Gruetzemacher |first=Ross |date=2022-04-19 |title=The Power of Natural Language Processing |url=https://hbr.org/2022/04/the-power-of-natural-language-processing |access-date=2024-12-07 |work=Harvard Business Review |issn=0017-8012}}</ref>
 
==Software==
 
[[Category:History of artificial intelligence|natural language processing]]
[[Category:Natural language processing]]
[[Category:History of linguistics|natural language processing]]
[[Category:History of software|natural language processing]]
[[Category:Software topical history overviews|natural language processing]]