{{Unreferenced stub|auto=yes|date=December 2009}}
A '''document-term matrix''' is a mathematical [[Matrix (mathematics)|matrix]] that describes the frequency of terms that occur in a collection of documents. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms. This matrix is a specific instance of a '''document-feature matrix''' where "features" may refer to other properties of a document besides terms.<ref>{{Cite web|title=Document-feature matrix :: Tutorials for quanteda|url=https://tutorials.quanteda.io/basic-operations/dfm/|access-date=2021-01-02|website=tutorials.quanteda.io}}</ref> It is also common to encounter the transpose, or '''term-document matrix''', where documents are the columns and terms are the rows. These matrices are useful in the fields of [[natural language processing]] and [[computational text analysis]].<ref>{{Cite web|title=15 Ways to Create a Document-Term Matrix in R|url=https://www.dustinstoltz.com/blog/2020/12/1/creating-document-term-matrix-comparison-in-r|access-date=2021-01-02|website=Dustin S. Stoltz|language=en-US}}</ref> While the value of the cells is commonly the raw count of a given term, there are various schemes for weighting the raw counts, such as relative frequency/proportions and [[tf-idf]].
Terms are commonly single tokens (unigrams) separated by whitespace or punctuation on either side. In such a case, this is also referred to as a "bag of words" representation, because the counts of individual words are retained, but not the order of the words in the document.
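The bag-of-words idea above can be sketched in a few lines of plain Python (the tokenization rule, splitting on runs of word characters, is an illustrative assumption, not a fixed standard):

```python
import re
from collections import Counter

def bag_of_words(text):
    """Count lowercase unigram tokens; word order is discarded."""
    tokens = re.findall(r"\w+", text.lower())
    return Counter(tokens)

counts = bag_of_words("I like databases; I like fast databases")
# counts records that "like" and "databases" each occur twice,
# but nothing about where in the document they occurred.
```

Any two documents that are permutations of the same tokens produce identical bags, which is exactly the information the document-term matrix retains.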
==General Concept==
When creating a document-term matrix, each document is represented as a row of term counts. For instance, given the following two (short) documents:
*D1 = "I like databases"
*D2 = "I dislike databases",
{| class="wikitable"
! !! I !! like !! dislike !! databases
|-
|'''D1'''||1||1||0||1
|-
|'''D2'''||1||0||1||1
|}
which shows which documents contain which terms and how many times they appear.
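The table above can be reproduced with a short sketch in plain Python (document names, the whitespace tokenizer, and first-seen vocabulary ordering are illustrative choices):

```python
from collections import Counter

# The two example documents
docs = {"D1": "I like databases", "D2": "I dislike databases"}

# Build the vocabulary in first-seen order and count tokens per document
vocab = []
tokenized = {}
for name, text in docs.items():
    tokens = text.lower().split()
    tokenized[name] = Counter(tokens)
    for t in tokens:
        if t not in vocab:
            vocab.append(t)

# Each row of the document-term matrix is one document's count vector
matrix = {name: [tokenized[name][t] for t in vocab] for name in docs}
# vocab  -> ['i', 'like', 'databases', 'dislike']
# matrix -> {'D1': [1, 1, 1, 0], 'D2': [1, 0, 1, 1]}
```

Note that the zero entries (D1 lacks "dislike", D2 lacks "like") are what make real document-term matrices extremely sparse as the vocabulary grows.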
As a result of the power-law distribution of tokens in nearly every corpus (see [[Zipf's law]]), it is common to weight the counts. This can be as simple as dividing counts by the total number of tokens in a document (called relative frequency or proportions), dividing by the maximum frequency in each document (called prop max), or taking the log of frequencies (called log count). If one desires to weight the words most unique to an individual document as compared to the corpus as a whole, it is common to use [[tf-idf]], which multiplies the term frequency by the term's inverse document frequency.
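The weighting schemes above can be sketched on the example counts (this uses the common tf × log(N/df) form of tf-idf; other variants exist, and the row layout is an illustrative assumption):

```python
import math

# Example counts over vocab ['i', 'like', 'databases', 'dislike']
counts = {"D1": [1, 1, 1, 0], "D2": [1, 0, 1, 1]}

def relative_frequency(row):
    """Divide each count by the document's total token count."""
    total = sum(row)
    return [c / total for c in row]

def prop_max(row):
    """Divide each count by the document's maximum frequency."""
    m = max(row)
    return [c / m for c in row]

def log_count(row):
    """Log-dampen counts; 1 + c keeps zeros at zero."""
    return [math.log(1 + c) for c in row]

def tf_idf(counts):
    """Weight each count by log(N / df), the inverse document frequency."""
    n_docs = len(counts)
    n_terms = len(next(iter(counts.values())))
    df = [sum(1 for row in counts.values() if row[j] > 0)
          for j in range(n_terms)]
    return {doc: [c * math.log(n_docs / df[j]) for j, c in enumerate(row)]
            for doc, row in counts.items()}

weighted = tf_idf(counts)
# Terms appearing in every document ("i", "databases") get weight
# log(2/2) = 0, while terms unique to one document keep log(2/1) > 0.
```

This illustrates why tf-idf highlights words unique to a document: corpus-wide terms are driven toward zero while rare terms retain weight.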
==Choice of Terms==