{{Short description|Model for representing text documents}}
'''Vector space model''' or '''term vector model''' is an algebraic model for representing text documents (or more generally, items) as [[vector space|vectors]] such that the distance between vectors represents the relevance between the documents. It is used in [[information filtering]], [[information retrieval]], [[index (search engine)|index]]ing and relevance rankings. Its first use was in the [[SMART Information Retrieval System]].<ref>{{cite journal
| last1 = Berry | first1 = Michael W.
| last2 = Drmac | first2 = Zlatko
| last3 = Jessup | first3 = Elizabeth R.
| date = January 1999
| doi = 10.1137/s0036144598347035
| issue = 2
| journal = SIAM Review
| pages = 335–362
| title = Matrices, Vector Spaces, and Information Retrieval
| volume = 41}}</ref>
 
==Definitions==
In this section we consider a particular vector space model based on the [[Bag-of-words model|bag-of-words]] representation. Documents and queries are represented as vectors.
 
:<math>d_j = ( w_{1,j} ,w_{2,j} , \dotsc ,w_{n,j} )</math>
:<math>q = ( w_{1,q} ,w_{2,q} , \dotsc ,w_{n,q} )</math>
 
Each dimension corresponds to a separate term, and the component <math>w_{i,j}</math> is the weight of term ''i'' in document ''d<sub>j</sub>'' (one weighting scheme is described below).
The definition of ''term'' depends on the application. Typically terms are single words, [[keyword (linguistics)|keyword]]s, or longer phrases. If words are chosen to be the terms, the dimensionality of the vector is the number of words in the vocabulary (the number of distinct words occurring in the [[text corpus|corpus]]).
 
Vector operations can be used to compare documents with queries.<ref name=":0">{{Cite book |last=Büttcher |first=Stefan |title=Information retrieval: implementing and evaluating search engines |last2=Clarke |first2=Charles L. A. |last3=Cormack |first3=Gordon V. |date=2016 |publisher=The MIT Press |isbn=978-0-262-52887-0 |edition=First MIT Press paperback |___location=Cambridge, Massachusetts London, England}}</ref>
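
As an illustration, the following minimal sketch builds such vectors in Python. The toy corpus, the whitespace tokenization and the use of raw term counts as weights are all illustrative assumptions; real systems use more elaborate preprocessing and weighting (see the tf–idf section below).

<syntaxhighlight lang="python">
# A minimal sketch: documents and a query as term-count vectors.
# The corpus and the choice of raw counts as weights are illustrative only.
docs = ["the cat sat on the mat", "the dog sat on the log"]
query = "cat mat"

# One dimension per distinct term occurring in the corpus.
vocab = sorted({term for doc in docs for term in doc.split()})

def to_vector(text):
    """Map a text to its vector of term counts over the vocabulary."""
    tokens = text.split()
    return [tokens.count(term) for term in vocab]

d_vectors = [to_vector(doc) for doc in docs]  # one vector d_j per document
q_vector = to_vector(query)                   # the query q in the same space
print(vocab)      # the dimensions of the vector space
print(d_vectors)
print(q_vector)
</syntaxhighlight>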
 
==Applications==
[[File:vector space model.jpg|right|250px]]
 
Candidate documents from the corpus can be retrieved and ranked using a variety of methods. [[Relevance (information retrieval)|Relevance]] [[ranking]]s of documents in a keyword search can be calculated, using the assumptions of [[semantic similarity|document similarities]] theory, by comparing the deviation of angles between each document vector and the original query vector, where the query is represented as a vector with the same dimension as the vectors that represent the other documents.
 
In practice, it is easier to calculate the [[cosine]] of the angle between the vectors, instead of the angle itself:
:<math>\cos{\theta} = \frac{\mathbf{d_j} \cdot \mathbf{q}}{\left\| \mathbf{d_j} \right\| \left\| \mathbf{q} \right\|}
</math>
 
As all vectors under consideration by this model are element-wise nonnegative, a cosine value of zero means that the query and document vector are [[orthogonal]] and have no match (i.e. the query term does not exist in the document being considered). See [[cosine similarity]] for further information.<ref name=":0" />
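
A minimal sketch of this computation, using [[NumPy]] and small hand-made vectors (the three-term vocabulary is an illustrative assumption):

<syntaxhighlight lang="python">
import numpy as np

def cosine(a, b):
    """Cosine of the angle between two weight vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative three-term vocabulary, e.g. ("apple", "banana", "cherry").
d = [2, 1, 0]  # document: "apple" twice, "banana" once
q = [0, 0, 3]  # query: only "cherry"

print(cosine(d, q))          # 0.0 -- orthogonal vectors, no term in common
print(cosine(d, [1, 1, 0]))  # ~0.95 -- similar distribution of terms
</syntaxhighlight>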
 
==Term frequency–inverse document frequency (tf–idf) weights==
In the classic vector space model proposed by [[Gerard Salton|Salton]], Wong and Yang,<ref>[http://doi.acm.org/10.1145/361219.361220 G. Salton, A. Wong, C. S. Yang, A vector space model for automatic indexing], Communications of the ACM, v.18 n.11, p. 613–620, Nov. 1975</ref> the term-specific weights in the document vectors are products of local and global parameters. The model is known as the [[tf-idf|term frequency–inverse document frequency]] (tf–idf) model. The weight vector for document ''d'' is <math>\mathbf{v}_d = [w_{1,d}, w_{2,d}, \ldots, w_{N,d}]^T</math>, where
 
:<math>w_{t,d} = \mathrm{tf}_{t,d} \cdot \log{\frac{|D|}{|\{d' \in D \, | \, t \in d'\}|}}</math>

and
* <math>\mathrm{tf}_{t,d}</math> is the term frequency of term ''t'' in document ''d'' (a local parameter)
* <math>\log{\frac{|D|}{|\{d' \in D \, | \, t \in d'\}|}}</math> is the inverse document frequency (a global parameter), where <math>|D|</math> is the total number of documents in the document set and <math>|\{d' \in D \, | \, t \in d'\}|</math> is the number of documents containing the term ''t''.
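
The weight can be computed directly from this definition. The following sketch assumes an illustrative toy document set:

<syntaxhighlight lang="python">
import math

# Illustrative document set D; each document is a list of terms.
D = [["cat", "sat", "mat"], ["dog", "sat", "log"], ["cat", "dog"]]

def tf_idf(t, d):
    """w_{t,d} = tf_{t,d} * log(|D| / |{d' in D : t in d'}|)."""
    tf = d.count(t)                     # local parameter
    df = sum(1 for d2 in D if t in d2)  # number of documents containing t
    return tf * math.log(len(D) / df)   # tf times idf (global parameter)

print(tf_idf("cat", D[0]))  # "cat" occurs in 2 of 3 documents: weight ~0.41
print(tf_idf("mat", D[0]))  # rarer term ("mat" in 1 of 3): weight ~1.10
</syntaxhighlight>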
 
Using the cosine, the similarity between document ''d<sub>j</sub>'' and query ''q'' can be calculated as:
 
:<math>\mathrm{cos}(d_j,q) = \frac{\mathbf{d_j} \cdot \mathbf{q}}{\left\| \mathbf{d_j} \right\| \left \| \mathbf{q} \right\|} = \frac{\sum _{i=1}^N w_{i,j}w_{i,q}}{\sqrt{\sum _{i=1}^N w_{i,j}^2}\sqrt{\sum _{i=1}^N w_{i,q}^2}}</math>
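
Combining the two ingredients, the sketch below ranks the toy documents from the previous example against a query by this cosine measure; the data and helper names are again illustrative, not a fixed API:

<syntaxhighlight lang="python">
import math
import numpy as np

D = [["cat", "sat", "mat"], ["dog", "sat", "log"], ["cat", "dog"]]
query = ["cat", "mat"]
vocab = sorted({t for d in D for t in d})

def tfidf_vector(terms):
    """tf-idf weight vector over the vocabulary, per the definition above."""
    weights = []
    for t in vocab:
        df = sum(1 for d in D if t in d)
        idf = math.log(len(D) / df) if df else 0.0
        weights.append(terms.count(t) * idf)
    return np.array(weights)

q = tfidf_vector(query)
for d in D:
    v = tfidf_vector(d)
    denom = np.linalg.norm(v) * np.linalg.norm(q)
    # cos(d_j, q): higher means more relevant to the query
    print(d, round(float(v @ q / denom), 3) if denom else 0.0)
</syntaxhighlight>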
 
==Advantages==
 
The vector space model has the following advantages over the [[Standard Boolean model]]:
 
#Simple model based on linear algebra
#Term weights not binary
#Allows computing a continuous degree of similarity between queries and documents
#Allows ranking documents according to their possible relevance
#Allows retrieving items with partial term overlap<ref name=":0" />
Most of these advantages are a consequence of the difference in the density of the document collection representation between Boolean and term frequency–inverse document frequency approaches. When using Boolean weights, any document lies on a vertex of an ''n''-dimensional [[hypercube]]. Therefore, the number of possible document representations is <math>2^n</math> and the maximum Euclidean distance between pairs is <math>\sqrt{n}</math>. As documents are added to the document collection, the region defined by the hypercube's vertices becomes more populated and hence denser. Unlike Boolean weights, when a document is added using term frequency–inverse document frequency weights, the inverse document frequencies of the terms in the new document decrease while those of the remaining terms increase. On average, as documents are added, the region where documents lie expands, regulating the density of the entire collection representation. This behavior models the original motivation of Salton and his colleagues that a document collection represented in a low-density region could yield better retrieval results.
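
For instance, with <math>n = 3</math> index terms, Boolean weighting admits only <math>2^3 = 8</math> distinct document vectors (the vertices of the unit cube), and no two documents can be farther apart than <math>\left\|(1,1,1)-(0,0,0)\right\| = \sqrt{3}</math>.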
 
==Limitations==
 
The vector space model has the following limitations:
 
#Query terms are assumed to be independent, so phrases might not be represented well in the ranking
#Long documents are poorly represented because they have poor similarity values (a small [[scalar product]] and a [[curse of dimensionality|large dimensionality]])
#Semantic sensitivity; documents with similar context but different term vocabulary won't be associated, resulting in a "[[false negative]] match".<ref name=":0" />
#Search keywords must precisely match document terms; word [[substring]]s might result in a "[[false positive]] match"
#The order in which the terms appear in the document is lost in the vector space representation.
#Theoretically assumes terms are statistically independent.
#Weighting is intuitive but not very formal.
 
Many of these difficulties can, however, be overcome by the integration of various tools, including mathematical techniques such as [[singular value decomposition]] and [[lexical database]]s such as [[WordNet]].
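
As a minimal sketch of the singular value decomposition approach (the term–document matrix and the chosen rank are illustrative assumptions; see [[latent semantic analysis]] for the full method):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative term-document matrix A: rows are terms, columns are documents.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Truncated SVD: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k approximation of A
print(np.round(A_k, 2))

# Documents can then be compared in the k-dimensional latent space,
# which can associate documents that share few literal terms.
doc_latent = np.diag(s[:k]) @ Vt[:k, :]  # one k-dim column per document
print(np.round(doc_latent, 3))
</syntaxhighlight>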
 
==Models based on and extending the vector space model==
 
Models based on and extending the vector space model include:
* [[Generalized vector space model]]
* [[Rocchio Classification]]
* [[Random indexing]]
* [[Search Engine Optimization]]
 
==Software that implements the vector space model==
{{further information|Vector database}}
 
The following software packages may be of interest to those wishing to experiment with vector models and implement search services based upon them.
 
===Free open source software===
* [[Apache Lucene]]. Apache Lucene is a high-performance, open source, full-featured text search engine library written entirely in Java.
* [[OpenSearch (software)]], [[Elasticsearch]] and [[Apache Solr|Solr]]: the three most well-known search engine programs based on Lucene. Others are also available.
* [[Gensim]] is a Python+[[NumPy]] framework for Vector Space modelling. It contains incremental (memory-efficient) algorithms for [[tf–idf|term frequency–inverse document frequency]], [[Latent Semantic Indexing|latent semantic indexing]], [[Locality sensitive hashing#Random projection|random projections]] and [[Latent Dirichlet Allocation|latent Dirichlet allocation]].
* [[Weka (machine learning)|Weka]]. Weka is a popular data mining package for Java including WordVectors and [[Bag-of-words model|Bag Of Words models]].
* [[Word2vec]]. Word2vec uses vector spaces for word embeddings.
 
==Further reading==
* [[Gerard Salton|G. Salton]] (1962), "[https://dl.acm.org/citation.cfm?id=1461544 Some experiments in the generation of word and document associations]" ''Proceeding AFIPS '62 (Fall) Proceedings of the December 4–6, 1962, fall joint computer conference'', pages 234–250. ''(Early paper of Salton using the term-document matrix formalization)''
 
* [[Gerard Salton|G. Salton]], A. Wong, and C. S. Yang (1975), "[https://dl.acm.org/citation.cfm?id=361220 A Vector Space Model for Automatic Indexing]," ''Communications of the ACM'', vol. 18, nr. 11, pages 613–620. ''(Article in which a vector space model was presented)''
* David Dubin (2004), [https://www.ideals.illinois.edu/items/1790 The Most Influential Paper Gerard Salton Never Wrote] ''(Explains the history of the Vector Space Model and the non-existence of a frequently cited publication)''
* [https://web.archive.org/web/20110814000253/http://ispcogsys.imm.dtu.dk/thor/projects/multimedia/textmining/node5.html Description of the vector space model]
* [http://www.miislita.com/term-vector/term-vector-3.html Description of the classic vector space model by Dr E. Garcia]
* [http://nlp.stanford.edu/IR-book/html/htmledition/vector-space-classification-1.html Relationship of vector space search to the "k-Nearest Neighbor" search]
 
==See also==
{{cmn|
*[[Bag-of-words model]]
*[[Champion list]]
*[[Compound term processing]]
*[[Conceptual space]]
*[[Sparse distributed memory]]
*[[w-shingling]]
}}
 
==References==
{{reflist}}
 
[[Category:Vector space model|* ]]