Probabilistic latent semantic analysis

: <math>P(w,d) = \sum_c P(c) P(d|c) P(w|c) = P(d) \sum_c P(c|d) P(w|c)</math>
 
with <math>c</math> being the words' topic. Note that the number of topics is a hyperparameter that must be chosen in advance and is not estimated from the data. The first formulation is the ''symmetric'' formulation, where <math>w</math> and <math>d</math> are both generated from the latent class <math>c</math> in similar ways (using the conditional probabilities <math>P(d|c)</math> and <math>P(w|c)</math>), whereas the second formulation is the ''asymmetric'' formulation, where, for each document <math>d</math>, a latent class is chosen conditionally on the document according to <math>P(c|d)</math>, and a word is then generated from that class according to <math>P(w|c)</math>. Although we have used words and documents in this example, the co-occurrence of any pair of discrete variables may be modelled in exactly the same way.
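The conditional distributions <math>P(c|d)</math> and <math>P(w|c)</math> are typically estimated with the [[expectation–maximization algorithm]]. Below is a minimal sketch of that procedure for the asymmetric formulation on a dense document–word count matrix; the function name, the toy sizes, and the random initialisation are illustrative assumptions, not part of any reference implementation:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def plsa_em(counts, n_topics, n_iter=100):
    """Fit P(c|d) and P(w|c) by EM on an (n_docs, n_words) count matrix."""
    n_docs, n_words = counts.shape
    # Random initialisation of the two conditional distributions.
    p_c_d = rng.random((n_docs, n_topics))
    p_c_d /= p_c_d.sum(axis=1, keepdims=True)          # rows are P(c|d)
    p_w_c = rng.random((n_topics, n_words))
    p_w_c /= p_w_c.sum(axis=1, keepdims=True)          # rows are P(w|c)

    for _ in range(n_iter):
        # E-step: posterior P(c|d,w) is proportional to P(c|d) P(w|c).
        joint = p_c_d[:, :, None] * p_w_c[None, :, :]  # shape (d, c, w)
        post = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate both conditionals from expected counts.
        expected = counts[:, None, :] * post           # n(d,w) * P(c|d,w)
        p_c_d = expected.sum(axis=2)
        p_c_d /= p_c_d.sum(axis=1, keepdims=True)
        p_w_c = expected.sum(axis=0)
        p_w_c /= p_w_c.sum(axis=1, keepdims=True)
    return p_c_d, p_w_c

# Toy usage on random counts: 8 documents, 20 vocabulary words, 3 topics.
counts = rng.integers(0, 5, size=(8, 20)).astype(float)
p_c_d, p_w_c = plsa_em(counts, n_topics=3)
</syntaxhighlight>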
 
The number of parameters in the model is thus <math>cd + wc</math>, which grows linearly with the number <math>d</math> of documents in the collection. In addition, although PLSA is a generative model of the documents in the collection it is estimated on, it is not a generative model of new documents.
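For illustration (the sizes here are hypothetical), a collection of <math>d = 10{,}000</math> documents over a vocabulary of <math>w = 50{,}000</math> words, modelled with <math>c = 100</math> topics, has

: <math>cd + wc = 100 \times 10{,}000 + 50{,}000 \times 100 = 6{,}000{,}000</math>

parameters, and each additional document contributes another <math>c = 100</math> parameters, which is the linear growth noted above.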
PLSA may be used in a discriminative setting, via [[Fisher kernel]]s.<ref>Thomas Hofmann, [https://papers.nips.cc/paper/1654-learning-the-similarity-of-documents-an-information-geometric-approach-to-document-retrieval-and-categorization.pdf ''Learning the Similarity of Documents: An Information-Geometric Approach to Document Retrieval and Categorization''], [[Advances in Neural Information Processing Systems]] 12, pp. 914–920, [[MIT Press]], 2000</ref>
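As a rough illustration of the idea (not Hofmann's exact derivation): the Fisher score of a document is the gradient of its log-likelihood with respect to the model parameters, and the kernel is an inner product of such scores. The sketch below differentiates only with respect to the topic weights <math>P(c)</math> of the symmetric formulation and approximates the Fisher information matrix by the identity; all names and numbers are hypothetical:

<syntaxhighlight lang="python">
import numpy as np

def fisher_score(doc_counts, p_w_c, p_c):
    """Gradient of the document log-likelihood w.r.t. the topic weights P(c).

    Simplified: differentiates sum_w n(w) log P(w), with
    P(w) = sum_c P(c) P(w|c), ignoring the simplex constraint on P(c)
    and the P(w|c) parameter block.
    """
    p_w = p_c @ p_w_c                        # P(w) under the mixture
    # dL/dP(c) = sum_w n(w) P(w|c) / P(w)
    return p_w_c @ (doc_counts / p_w)

def fisher_kernel(counts_a, counts_b, p_w_c, p_c):
    """Inner product of Fisher scores, with the information matrix
    approximated by the identity (a common practical simplification)."""
    return float(fisher_score(counts_a, p_w_c, p_c)
                 @ fisher_score(counts_b, p_w_c, p_c))

# Hypothetical fitted parameters: 3 topics over a 5-word vocabulary.
p_w_c = np.array([[0.50, 0.30, 0.10, 0.05, 0.05],
                  [0.10, 0.10, 0.50, 0.20, 0.10],
                  [0.05, 0.05, 0.10, 0.30, 0.50]])
p_c = np.array([0.5, 0.3, 0.2])
k = fisher_kernel(np.array([3., 1., 0., 0., 1.]),
                  np.array([2., 2., 1., 0., 0.]), p_w_c, p_c)
</syntaxhighlight>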
 
PLSA has applications in [[information retrieval]] and [[information filtering|filtering]], [[natural language processing]], [[machine learning]] from text, [[bioinformatics]],<ref>{{Cite conference|chapter=Enhanced probabilistic latent semantic analysis with weighting schemes to predict genomic annotations|conference=The 13th IEEE International Conference on BioInformatics and BioEngineering|last1=Pinoli|first1=Pietro|last2=et|first2=al.|title=Proceedings of IEEE BIBE 2013|date=2013|publisher=IEEE|pages=1–4|language=en|doi=10.1109/BIBE.2013.6701702|isbn=978-1-4799-3163-7}}</ref> and related areas.
 
The [[aspect model]] used in probabilistic latent semantic analysis has been reported to have severe [[overfitting]] problems.<ref>{{cite journal|title=Latent Dirichlet Allocation|journal=Journal of Machine Learning Research|year=2003|first=David M.|last=Blei|author2=Andrew Y. Ng |author3=Michael I. Jordan |volume=3|pages=993–1022|url=http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf|doi=10.1162/jmlr.2003.3.4-5.993}}</ref>
==External links==
*[https://web.archive.org/web/20050120213347/http://www.cs.brown.edu/people/th/papers/Hofmann-UAI99.pdf Probabilistic Latent Semantic Analysis]
*[https://web.archive.org/web/20170717235351/http://www.semanticquery.com/archive/semanticsearchart/researchpLSA.html Complete PLSA DEMO in C#]
 
{{DEFAULTSORT:Probabilistic Latent Semantic Analysis}}