Lesk algorithm

The Lesk algorithm is based on the assumption that words in a given neighborhood will tend to share a common topic. A simplified version of the Lesk algorithm compares the dictionary definition of an ambiguous word with the terms contained in its neighborhood. Versions have been adapted to [[WordNet]].<ref>Satanjeev Banerjee and Ted Pedersen. ''[http://www.cs.cmu.edu/~banerjee/Publications/cicling2002.ps.gz An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet]'', Lecture Notes in Computer Science, Vol. 2276, pp. 136–145, 2002. ISBN 3-540-43219-1
</ref> It works as follows (a code sketch is given after the list):
# for every sense of the word being disambiguated, count the number of words that occur both in the neighborhood of that word and in the dictionary definition of that sense
# choose the sense with the highest count of overlapping words
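
A minimal sketch of this simplified procedure in Python is shown below. The sense inventory is represented here as a plain dictionary mapping each sense label to its definition text; the whitespace tokenization and the example entries for ''bank'' are illustrative assumptions rather than part of the algorithm's specification.

<syntaxhighlight lang="python">
def simplified_lesk(word, context_sentence, sense_definitions):
    """Return the sense whose definition shares the most words with the context."""
    # Tokenize the context by simple whitespace splitting (an assumption;
    # a fuller implementation would normalize tokens and remove stop words).
    context = set(context_sentence.lower().split())
    best_sense, best_overlap = None, 0
    for sense, definition in sense_definitions.items():
        gloss = set(definition.lower().split())
        overlap = len(context & gloss)      # step 1: count words shared by context and definition
        if overlap > best_overlap:          # step 2: keep the sense with the largest count
            best_sense, best_overlap = sense, overlap
    return best_sense


# Hypothetical mini-dictionary for the ambiguous word "bank".
senses = {
    "bank#1": "sloping land beside a body of water",
    "bank#2": "a financial institution that accepts deposits and lends money",
}
print(simplified_lesk("bank", "I deposited money at the bank near the river", senses))
# -> "bank#2", because "money" appears in that sense's definition
</syntaxhighlight>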