{{Short description|Task of determining an author's native language from their second-language writing}}
'''Native-language identification''' ('''NLI''') is the task of determining an author's native language (L1) based only on their writings in a [[second language]] (L2).<ref>Wong, Sze-Meng Jojo, and Mark Dras. [http://anthology.aclweb.org/D/D11/D11-1148.pdf "Exploiting parse structures for native language identification"]. Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2011.</ref> NLI works by identifying language-usage patterns that are common to specific L1 groups and then applying this knowledge to predict the native language of previously unseen texts. It is motivated in part by applications in [[second-language acquisition]], language teaching and [[forensic linguistics]].
== Overview ==
NLI works under the assumption that an author's L1 disposes them towards particular language-production patterns in their L2. This relates to cross-linguistic influence (CLI), a key topic in the field of second-language acquisition (SLA) that analyzes transfer effects from the L1 on later-learned languages.
Using large-scale English data, NLI methods achieve over 80% accuracy in predicting the native language of texts written by authors from 11 different L1 backgrounds.<ref>Shervin Malmasi, Keelan Evanini, Aoife Cahill, Joel Tetreault, Robert Pugh, Christopher Hamill, Diane Napolitano, and Yao Qian. 2017. [https://aclanthology.org/W17-5007/ "A Report on the 2017 Native Language Identification Shared Task"]. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 62–75, Copenhagen, Denmark. Association for Computational Linguistics.</ref> This can be compared to a baseline of 9% for choosing randomly.
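The chance baseline quoted above follows from guessing uniformly among the 11 L1 classes; a quick check:

```python
# Uniform random-guess baseline over 11 equally likely L1 classes
baseline = 1 / 11
print(f"{baseline:.0%}")  # → 9%
```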
==Methodology==
[[Natural language processing]] methods are used to extract and identify language usage patterns common to speakers of an L1-group. This is done using language learner data, usually from a [[learner corpus]]. Next, [[machine learning]] is applied to train classifiers, like [[support vector machine]]s, for predicting the L1 of unseen texts.<ref>Tetreault et al, [http://anthology.aclweb.org/C/C12/C12-1158.pdf "Native Tongues, Lost and Found: Resources and Empirical Evaluations in Native Language Identification"], In Proc. International Conf. on Computational Linguistics (COLING), 2012</ref>
A range of ensemble based systems have also been applied to the task and shown to improve performance over single classifier systems.<ref>Malmasi, Shervin, Sze-Meng Jojo Wong, and Mark Dras. [http://anthology.aclweb.org/W/W13/W13-1716.pdf "NLI Shared Task 2013: MQ submission"]. Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. 2013.</ref><ref>Habic, Vuk, Semenov, Alexander, and Pasiliao, Eduardo. [https://www.sciencedirect.com/science/article/abs/pii/S0950705120305694 "Multitask deep learning for native language identification"] in Knowledge-Based Systems, 2020</ref>
Various linguistic feature types have been applied to this task. These include syntactic features such as constituent parses, grammatical dependencies and part-of-speech tags.
Surface-level lexical features such as character, word and lemma [[n-gram]]s have also been found to be useful for this task.
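The pipeline described above can be sketched with scikit-learn, using character n-grams as surface-level lexical features and a linear [[support vector machine]] as the classifier. The texts and L1 labels below are invented toy data; a real system would train on a large learner corpus with far richer features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy learner corpus: English sentences labelled with the
# author's (invented) L1 code. Illustrative only.
texts = [
    "I am agree with this opinion .",
    "I am agree that the school is good .",
    "He explained me the problem yesterday .",
    "She explained me all the rules .",
]
labels = ["ES", "ES", "FR", "FR"]

# Character n-grams (lengths 1-3) as surface-level lexical features,
# fed to a linear support vector machine.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LinearSVC(),
)
model.fit(texts, labels)

# Predict the L1 of a previously unseen text.
predictions = model.predict(["I am agree with the teacher ."])
print(predictions[0])
```

In practice the feature set would combine many such feature types (n-grams, part-of-speech sequences, dependency relations), and ensembles of classifiers typically outperform this single-classifier setup.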
== 2013 shared task ==