The different types arise from using different [[cost function]]s (divergence functions) and/or by [[regularization (mathematics)|regularization]] of the '''W''' and/or '''H''' matrices.<ref>[[Inderjit S. Dhillon]], [[Suvrit Sra]], "[http://books.nips.cc/papers/files/nips18/NIPS2005_0203.pdf Generalized Nonnegative Matrix Approximations with Bregman Divergences]", [[NIPS]], 2005.</ref>
Although NMF was initially considered to be different from vector quantization ([[K-means clustering]]), it was later shown<ref>Chris Ding, Xiaofeng He, and Horst D. Simon, "On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering", Proc. SIAM Int'l Conf. Data Mining (SDM'05), pp. 606–610, April 2005.</ref> that NMF is equivalent to relaxed [[K-means clustering]] with the Frobenius-norm objective function: the matrix factor '''W''' contains the cluster centroids and '''H''' contains the cluster membership indicators. NMF can therefore be used for data clustering.
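The clustering reading of the factorization can be sketched as follows. This is a hypothetical toy example, not from the cited papers: it uses the standard Lee–Seung multiplicative updates for the Frobenius-norm objective on synthetic data with two well-separated sample clusters, then reads hard cluster labels off the columns of '''H'''.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonnegative data V (features x samples) with two obvious clusters:
# samples 0-9 concentrate on features 0-1, samples 10-19 on features 2-3.
cluster_a = rng.uniform(0.0, 0.1, size=(4, 10)) + np.array([[1], [1], [0], [0]])
cluster_b = rng.uniform(0.0, 0.1, size=(4, 10)) + np.array([[0], [0], [1], [1]])
V = np.hstack([cluster_a, cluster_b])

k = 2  # number of clusters / factorization rank
W = rng.uniform(0.1, 1.0, size=(4, k))   # columns ~ cluster centroids
H = rng.uniform(0.1, 1.0, size=(k, 20))  # columns ~ membership weights

# Lee-Seung multiplicative updates minimizing ||V - WH||_F^2;
# the small constant guards against division by zero.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Hard labels: index of the dominant membership weight in each column of H.
labels = H.argmax(axis=0)
print(labels)
```

With data this well separated, the first ten samples receive one label and the last ten the other, matching the interpretation of '''H''' as (soft) cluster membership indicators.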
It is also known that NMF is an instance of so-called "multinomial PCA".<ref>Wray Buntine, "Variational Extensions to EM and Multinomial PCA", Proc. European Conference on Machine Learning (ECML-02), LNAI 2430, pp. 23–34, 2002.</ref>
When NMF is obtained by minimizing the [[Kullback–Leibler divergence]], it is also equivalent to another instance of multinomial PCA, [[probabilistic latent semantic analysis]],
<ref>Eric Gaussier and Cyril Goutte, "Relation between PLSA and NMF and Implications", Proc. 28th international ACM SIGIR conference on Research and development in information retrieval (SIGIR-05), pp. 601-602, 2005. </ref> which has long been used for analyzing and clustering textual data.
== Uniqueness ==