Lesk algorithm: Difference between revisions

Soshial (talk | contribs)
criticisms added
 
==Criticisms and other Lesk-based methods==
Unfortunately, Lesk’s approach is very sensitive to the exact wording of definitions, so the absence of a certain word can radically change the results. Further, the algorithm determines overlaps only among the glosses of the senses being considered. This is a significant limitation in that dictionary glosses tend to be fairly short and do not provide sufficient vocabulary to relate fine-grained sense distinctions.
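The gloss-overlap idea described above can be illustrated with a short sketch of the simplified Lesk procedure. The glosses and the tiny stop-word list below are invented for illustration only, not taken from any real dictionary; note how the morphological mismatch between "deposited" in the context and "deposits" in the gloss shows the sensitivity to exact wording discussed above:

```python
# A minimal sketch of gloss-overlap (simplified Lesk) disambiguation.
# The glosses and stop-word list are illustrative assumptions, not real data.

STOP_WORDS = {"a", "an", "the", "of", "to", "in", "at", "and", "or", "that", "he"}

def content_words(text):
    """Lowercase, strip trailing punctuation, and drop stop words."""
    return {w.lower().strip(".,") for w in text.split()} - STOP_WORDS

def overlap_score(context, gloss):
    """Count distinct content words shared by the context and a sense gloss."""
    return len(content_words(context) & content_words(gloss))

def simplified_lesk(word, context, glosses):
    """Pick the sense of `word` whose gloss overlaps most with the context."""
    senses = glosses[word]
    return max(senses, key=lambda sense: overlap_score(context, senses[sense]))

glosses = {
    "bank": {
        "finance": "an institution that accepts deposits and lends money",
        "river": "sloping land beside a body of water",
    }
}

# "money" overlaps with the finance gloss; note that "deposited" does NOT
# match "deposits", illustrating the sensitivity to exact wording.
print(simplified_lesk("bank", "he deposited money at the bank", glosses))
```

Because only surface word forms are compared, a single missing or inflected word can flip the chosen sense, which is precisely the criticism raised above.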
 
In recent years, many works have appeared that offer different modifications of this algorithm. These works use other resources for analysis (thesauruses, synonym dictionaries, or morphological and syntactic models): for instance, they may use such information as synonyms, different derivatives, or words from the definitions of words that themselves occur in definitions.<ref>Alexander Gelbukh, Grigori Sidorov. Automatic resolution of ambiguity of word senses in dictionary definitions (in Russian). J. Nauchno-Tehnicheskaya Informaciya (NTI), ISSN 0548-0027, ser. 2, N 3, 2004, pp. 10–15.</ref>
 
There have been many studies concerning Lesk and its extensions:<ref>Roberto Navigli. [http://www.dsi.uniroma1.it/~navigli/pubs/ACM_Survey_2009_Navigli.pdf ''Word Sense Disambiguation: A Survey''], ACM Computing Surveys, 41(2), 2009, pp. 1–69.</ref>
* Kwong, 2001;
* Nastase and Szpakowicz, 2001;