{{Short description|Model for representing text documents}}
'''Vector space model''' or '''term vector model''' is an algebraic model for representing text documents (or more generally, items) as [[vector space|vectors]] such that the distance between vectors represents the relevance between the documents. It is used in [[information filtering]], [[information retrieval]], [[index (search engine)|index]]ing and relevance rankings. Its first use was in the [[SMART Information Retrieval System]].<ref>{{cite journal
| last1 = Berry | first1 = Michael W.
| last2 = Drmac | first2 = Zlatko
| last3 = Jessup | first3 = Elizabeth R.
| date = January 1999
| doi = 10.1137/s0036144598347035
| issue = 2
| journal = SIAM Review
| pages = 335–362
| title = Matrices, Vector Spaces, and Information Retrieval
| volume = 41}}</ref>
 
==Definitions==
The definition of ''term'' depends on the application. Typically terms are single words, [[keyword (linguistics)|keyword]]s, or longer phrases. If words are chosen to be the terms, the dimensionality of the vector is the number of words in the vocabulary (the number of distinct words occurring in the [[text corpus|corpus]]).
 
Vector operations can be used to compare documents with queries.<ref name=":0">{{Cite book |last=Büttcher |first=Stefan |title=Information retrieval: implementing and evaluating search engines |last2=Clarke |first2=Charles L. A. |last3=Cormack |first3=Gordon V. |date=2016 |publisher=The MIT Press |isbn=978-0-262-52887-0 |edition=First MIT Press paperback edition |___location=Cambridge, Massachusetts London, England}}</ref>
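As a minimal illustrative sketch (a toy example with single words as terms; the corpus, query and helper name are hypothetical, not drawn from the literature), documents and a query can be mapped to term-count vectors over a shared vocabulary:

<syntaxhighlight lang="python">
# Map documents and a query to term-count vectors over a shared vocabulary.
from collections import Counter

docs = ["the cat sat on the mat", "the dog sat on the log"]
query = "cat on mat"

# One dimension per distinct word occurring in the corpus.
vocab = sorted({w for d in docs for w in d.split()})

def to_vector(text):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]  # zero for terms absent from the text

doc_vectors = [to_vector(d) for d in docs]
query_vector = to_vector(query)
</syntaxhighlight>

Once documents and queries live in the same vector space, any vector comparison (such as the cosine measure described below) can rank documents against a query.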
 
==Applications==
As all vectors under consideration by this model are element-wise nonnegative, a cosine value of zero means that the query and document vector are [[orthogonal]] and have no match (i.e. the query term does not exist in the document being considered). See [[cosine similarity]] for further information.<ref name=":0" />
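The following is a hedged sketch of the cosine measure on such nonnegative count vectors (a toy implementation, not taken from any particular system):

<syntaxhighlight lang="python">
# Cosine of the angle between query vector q and document vector d:
# cos(theta) = (q . d) / (|q| |d|); zero means orthogonal, i.e. no shared terms.
import math

def cosine_similarity(q, d):
    dot = sum(qi * di for qi, di in zip(q, d))
    norm_q = math.sqrt(sum(qi * qi for qi in q))
    norm_d = math.sqrt(sum(di * di for di in d))
    if norm_q == 0 or norm_d == 0:
        return 0.0  # convention: no similarity for an all-zero vector
    return dot / (norm_q * norm_d)
</syntaxhighlight>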
 
==Term frequency–inverse document frequency (tf–idf) weights==
In the classic vector space model proposed by [[Gerard Salton|Salton]], Wong and Yang,<ref>[http://doi.acm.org/10.1145/361219.361220 G. Salton, A. Wong, C. S. Yang, A vector space model for automatic indexing], Communications of the ACM, v.18 n.11, p.613–620, Nov. 1975</ref> the term-specific weights in the document vectors are products of local and global parameters. The model is known as the [[tf–idf|term frequency–inverse document frequency]] (tf–idf) model. The weight vector for document ''d'' is <math>\mathbf{v}_d = [w_{1,d}, w_{2,d}, \ldots, w_{N,d}]^T</math>, where
 
:<math>
w_{t,d} = \mathrm{tf}_{t,d} \cdot \log{\frac{|D|}{|\{d' \in D \mid t \in d'\}|}}
</math>

where
* <math>\mathrm{tf}_{t,d}</math> is the frequency of term ''t'' in document ''d'' (a local parameter), and
* <math>\log{\frac{|D|}{|\{d' \in D \mid t \in d'\}|}}</math> is the inverse document frequency (a global parameter), with <math>|D|</math> the total number of documents in the document set and <math>|\{d' \in D \mid t \in d'\}|</math> the number of documents containing the term ''t''.
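A minimal sketch of this weighting in Python (a toy illustration; the function name and corpus are hypothetical):

<syntaxhighlight lang="python">
# tf-idf weight of term t in document d, following the formula above:
# w_{t,d} = tf_{t,d} * log(|D| / |{d' in D : t in d'}|)
import math

def tfidf_weight(term, doc_tokens, corpus_tokens):
    tf = doc_tokens.count(term)                      # local parameter: term frequency
    df = sum(1 for d in corpus_tokens if term in d)  # documents containing the term
    if tf == 0 or df == 0:
        return 0.0
    return tf * math.log(len(corpus_tokens) / df)    # global parameter: inverse document frequency

corpus = [["a", "cat", "sat"], ["a", "dog", "sat"], ["a", "dog", "ran"]]
print(tfidf_weight("cat", corpus[0], corpus))        # 1 * log(3/1)
</syntaxhighlight>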
==Limitations==
The vector space model has the following limitations:
 
#Query terms are assumed to be independent, so phrases might not be represented well in the ranking.
#Long documents are poorly represented because they have poor similarity values (a small [[scalar product]] and a [[curse of dimensionality|large dimensionality]]).
#Search keywords must precisely match document terms; word [[substring]]s might result in a "[[false positive]] match".
#Semantic sensitivity; documents with similar context but different term vocabulary won't be associated, resulting in a "[[false negative]] match".<ref name=":0" />
#The order in which the terms appear in the document is lost in the vector space representation.
#Theoretically assumes terms are statistically independent.
#Weighting is intuitive but not very formal.
 
Many of these difficulties can, however, be overcome by the integration of various tools, including mathematical techniques such as [[singular value decomposition]] and [[lexical database]]s such as [[WordNet]].
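As a sketch of the singular value decomposition approach (a toy NumPy example with made-up counts, not a definitive implementation), a term-document matrix can be reduced to a low-rank latent space in which documents that share no terms may still be related:

<syntaxhighlight lang="python">
# Rank-k approximation of a term-document matrix via singular value decomposition,
# the core step of latent semantic indexing.
import numpy as np

A = np.array([[1, 0, 0],   # rows = terms, columns = documents (toy counts)
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # latent dimensions to keep
doc_coords = (np.diag(s[:k]) @ Vt[:k, :]).T  # each row: one document in the latent space
</syntaxhighlight>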
 
==Software that implements the vector space model==
{{further information|Vector database}}
The following software packages may be of interest to those wishing to experiment with vector models and implement search services based upon them.
 
===Free open source software===
* [[Apache Lucene]]. Apache Lucene is a high-performance, open source, full-featured text search engine library written entirely in Java.
* [[OpenSearch (software)|OpenSearch]], [[Elasticsearch]] and [[Apache Solr|Solr]]: the three most well-known search engine programs based on Lucene. Others are also available.
* [[Gensim]] is a Python+[[NumPy]] framework for vector space modelling. It contains incremental (memory-efficient) algorithms for [[tf–idf|term frequency–inverse document frequency]], [[Latent Semantic Indexing|latent semantic indexing]], [[Locality sensitive hashing#Random projection|random projections]] and [[Latent Dirichlet Allocation|latent Dirichlet allocation]] (a usage sketch follows this list).
* [[Weka (machine learning)|Weka]]. Weka is a popular data mining package for Java including WordVectors and [[Bag-of-words model|bag-of-words models]].
* [[Word2vec]]. Word2vec uses vector spaces for word embeddings.
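For example, a hedged sketch of building a tf–idf vector space and cosine-similarity index with Gensim (the API calls follow Gensim's documented tutorial usage; the two-document corpus is illustrative):

<syntaxhighlight lang="python">
# Building a tf-idf vector space and cosine-similarity index with Gensim.
from gensim import corpora, models, similarities

texts = [["human", "computer", "interaction"],
         ["graph", "minors", "survey"]]

dictionary = corpora.Dictionary(texts)            # term <-> integer id mapping
corpus = [dictionary.doc2bow(t) for t in texts]   # sparse term-count vectors
tfidf = models.TfidfModel(corpus)                 # reweight counts by tf-idf

index = similarities.MatrixSimilarity(tfidf[corpus], num_features=len(dictionary))
query = tfidf[dictionary.doc2bow(["human", "computer"])]
print(index[query])                               # cosine similarity to each document
</syntaxhighlight>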