: <math>P(w,d) = \sum_c P(c) P(d|c) P(w|c) = P(d) \sum_c P(c|d) P(w|c)</math>
with <math>c</math> being the words' topic.
So, the number of parameters is equal to <math>cd + wc</math>, which grows linearly with the number of documents. In addition, although PLSA is a generative model of the documents in the collection it is estimated on, it is not a generative model of new documents.
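The parameters <math>P(c)</math>, <math>P(d|c)</math> and <math>P(w|c)</math> can be estimated with the [[expectation–maximization algorithm]]. The following is a minimal sketch of the EM updates for the first (symmetric) formulation above, assuming a small dense term-count matrix; the function name <code>plsa</code>, the NumPy layout and the fixed iteration count are illustrative choices rather than a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """EM for the symmetric PLSA formulation (illustrative sketch).

    counts: (n_docs, n_words) matrix of term counts n(d, w).
    Returns P(c), P(d|c) and P(w|c).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialisation; each distribution is normalised to sum to 1.
    p_c = rng.random(n_topics)
    p_c /= p_c.sum()
    p_d_c = rng.random((n_topics, n_docs))
    p_d_c /= p_d_c.sum(axis=1, keepdims=True)
    p_w_c = rng.random((n_topics, n_words))
    p_w_c /= p_w_c.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P(c|d,w) is proportional to P(c) P(d|c) P(w|c).
        joint = p_c[:, None, None] * p_d_c[:, :, None] * p_w_c[:, None, :]  # shape (c, d, w)
        post = joint / joint.sum(axis=0, keepdims=True)
        # M-step: re-estimate parameters from the expected counts n(d,w) P(c|d,w).
        expected = counts[None, :, :] * post
        p_d_c = expected.sum(axis=2)
        p_d_c /= p_d_c.sum(axis=1, keepdims=True)
        p_w_c = expected.sum(axis=1)
        p_w_c /= p_w_c.sum(axis=1, keepdims=True)
        p_c = expected.sum(axis=(1, 2))
        p_c /= p_c.sum()
    return p_c, p_d_c, p_w_c

# Example on a toy 2-document, 3-word corpus:
# p_c, p_d_c, p_w_c = plsa(np.array([[2.0, 0.0, 1.0], [0.0, 3.0, 1.0]]), n_topics=2)
</syntaxhighlight>

Keeping the full <math>(c, d, w)</math> posterior in memory is only practical for toy data; practical implementations iterate over the observed document–word pairs instead.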
PLSA may be used in a discriminative setting, via [[Fisher kernel]]s.<ref>Thomas Hofmann, [https://papers.nips.cc/paper/1654-learning-the-similarity-of-documents-an-information-geometric-approach-to-document-retrieval-and-categorization.pdf ''Learning the Similarity of Documents: An Information-Geometric Approach to Document Retrieval and Categorization''], [[Advances in Neural Information Processing Systems]] 12, pp. 914–920, [[MIT Press]], 2000</ref>
PLSA has applications in [[information retrieval]] and [[information filtering|filtering]], [[natural language processing]], [[machine learning]] from text, [[bioinformatics]],<ref>{{Cite conference|chapter=Enhanced probabilistic latent semantic analysis with weighting schemes to predict genomic annotations|conference=The 13th IEEE International Conference on BioInformatics and BioEngineering}}</ref> and related areas.
It is reported that the [[aspect model]] used in probabilistic latent semantic analysis has severe [[overfitting]] problems.<ref>{{cite journal|title=Latent Dirichlet Allocation|journal=Journal of Machine Learning Research|year=2003|first=David M.|last=Blei|author2=Andrew Y. Ng |author3=Michael I. Jordan |volume=3|pages=993–1022|url=http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf|doi=10.1162/jmlr.2003.3.4-5.993}}</ref>
==History==
This is an example of a [[latent class model]] (see references therein), and it is related<ref>Chris Ding, Tao Li, Wei Peng (2006). "[http://www.aaai.org/Papers/AAAI/2006/AAAI06-055.pdf Nonnegative Matrix Factorization and Probabilistic Latent Semantic Indexing: Equivalence, Chi-Square Statistic, and a Hybrid Method]". AAAI 2006.</ref><ref>Chris Ding, Tao Li, Wei Peng (2008). "[http://www.sciencedirect.com/science/article/pii/S0167947308000145 On the equivalence between Non-negative Matrix Factorization and Probabilistic Latent Semantic Indexing]".</ref> to [[non-negative matrix factorization]]. The present terminology was coined in 1999 by Thomas Hofmann.
== See also ==
==External links==
*[https://web.archive.org/web/20050120213347/http://www.cs.brown.edu/people/th/papers/Hofmann-UAI99.pdf Probabilistic Latent Semantic Analysis]
{{DEFAULTSORT:Probabilistic Latent Semantic Analysis}}