Lesk algorithm
The Lesk algorithm is based on the assumption that words in a given "neighborhood" (section of text) will tend to share a common topic. A simplified version of the Lesk algorithm is to compare the dictionary definition of an ambiguous word with the terms contained in its neighborhood. Versions have been adapted to use [[WordNet]].<ref>Satanjeev Banerjee and Ted Pedersen. ''[http://www.cs.cmu.edu/~banerjee/Publications/cicling2002.ps.gz An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet]'', Lecture Notes in Computer Science, Vol. 2276, pp. 136–145, 2002. ISBN 3-540-43219-1
</ref> An implementation might look like this:
# for every sense of the word being disambiguated, count the number of words that occur both in the neighborhood of that word and in the dictionary definition of that sense
# choose the sense with the highest overlap count (a code sketch follows this list)
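The two steps above can be illustrated with a short sketch of the simplified algorithm. The function name, the toy two-sense dictionary, and the whitespace tokenization are illustrative assumptions rather than part of any standard implementation; practical versions typically draw definitions from a machine-readable dictionary such as [[WordNet]] and normalize the text further (for example, by removing stopwords).

<syntaxhighlight lang="python">
def simplified_lesk(word, sentence, dictionary):
    """Return the sense of `word` whose dictionary definition shares
    the most words with the sentence containing it (its neighborhood)."""
    # Treat the words of the sentence as the neighborhood of the ambiguous word.
    context = set(sentence.lower().split())
    best_sense, best_overlap = None, 0
    for sense, definition in dictionary[word].items():
        # Compare the definition of each sense against the neighborhood.
        signature = set(definition.lower().split())
        overlap = len(signature & context)  # number of shared words
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense


# Toy example: disambiguating "bank" with an illustrative two-sense dictionary.
toy_dictionary = {
    "bank": {
        "bank#1": "sloping land beside a body of water such as a river",
        "bank#2": "a financial institution that accepts deposits and lends money",
    }
}
print(simplified_lesk("bank", "I kept my money in the bank", toy_dictionary))
# prints "bank#2", since its definition shares the word "money" with the sentence
</syntaxhighlight>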