Non-negative matrix factorization: Difference between revisions

Furthermore, the computed <math>H</math> gives the cluster membership, i.e., if <math>\mathbf{H}_{kj} > \mathbf{H}_{ij} </math> for all ''i'' ≠ ''k'', this suggests that the input data point <math> v_j </math> belongs to the <math>k</math>-th cluster. The computed <math>W</math> gives the cluster centroids, i.e., the <math>k</math>-th column gives the centroid of the <math>k</math>-th cluster. The representation of the cluster centroids can be significantly enhanced by convex NMF.
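A minimal sketch of this cluster-assignment reading of <math>H</math>, using scikit-learn's <code>NMF</code> (the toy data, the number of components, and the solver settings are illustrative assumptions, not part of the article):

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy non-negative data V (m x n): each of the n columns is one data point,
# matching the factorization V ~ W H discussed above.
rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(6, 10)))

k = 2  # number of clusters / components (chosen arbitrarily for illustration)
model = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)   # m x k: column k is the k-th cluster centroid
H = model.components_        # k x n: H[k, j] scores data column j against cluster k

# Column j is assigned to the cluster with the largest coefficient H[k, j]
labels = np.argmax(H, axis=0)
print(labels)                # one cluster index per data column
```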
 
Even when the orthogonality constraint <math> \mathbf{H}\mathbf{H}^T = I </math> is not explicitly imposed, it tends to hold approximately, and with it the clustering property. Clustering is the main objective of most [[data mining]] applications of NMF.{{citation needed|date=April 2015}}
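One way to see the approximate orthogonality empirically is to fit an unconstrained NMF and inspect the Gram matrix of the row-normalized <math>H</math>: a near-identity Gram matrix means the rows are close to orthogonal. The block-structured toy data below is an illustrative assumption:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Two well-separated groups of columns, so the factorization is close
# to a clustering (purely illustrative data)
block1 = np.abs(rng.normal(5, 1, size=(4, 15))) * np.array([[1], [1], [0.1], [0.1]])
block2 = np.abs(rng.normal(5, 1, size=(4, 15))) * np.array([[0.1], [0.1], [1], [1]])
V = np.hstack([block1, block2])

H = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500).fit(V).components_

# Normalize the rows of H, then form the Gram matrix: diagonal entries are 1,
# and small off-diagonal entries indicate near-orthogonal rows
H_norm = H / np.linalg.norm(H, axis=1, keepdims=True)
G = H_norm @ H_norm.T
print(G)
```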
 
When the error function to be used is [[Kullback–Leibler divergence]], NMF is equivalent to [[probabilistic latent semantic analysis]] (PLSA), a popular document clustering method.<ref>{{cite journal |vauthors=Ding C, Li Y, Peng W |url=http://users.cis.fiu.edu/~taoli/pub/NMFpLSIequiv.pdf |title=On the equivalence between non-negative matrix factorization and probabilistic latent semantic indexing |archive-url=https://web.archive.org/web/20160304070027/http://users.cis.fiu.edu/~taoli/pub/NMFpLSIequiv.pdf |archive-date=2016-03-04 |url-status=dead |journal=Computational Statistics & Data Analysis |year=2008 |volume=52 |issue=8 |pages=3913–3927|doi=10.1016/j.csda.2008.01.011 }}</ref>
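A sketch of this correspondence: scikit-learn's <code>NMF</code> supports the Kullback–Leibler objective via its multiplicative-update (<code>"mu"</code>) solver, and rescaling the columns of <math>W</math> to sum to one yields PLSA-style word distributions per topic. The count data and normalization below are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative term-document count matrix (terms x documents)
rng = np.random.default_rng(0)
V = rng.poisson(1.0, size=(12, 30)).astype(float)
V[V.sum(axis=1) == 0, 0] = 1.0  # ensure no all-zero term row

# The KL-divergence objective requires the multiplicative-update solver
model = NMF(n_components=3, solver="mu", beta_loss="kullback-leibler",
            init="nndsvda", random_state=0, max_iter=1000)
W = model.fit_transform(V)
H = model.components_

# Rescale so each column of W sums to 1; column k is then a PLSA-style
# word distribution over the vocabulary for topic k
scale = W.sum(axis=0, keepdims=True)
P_word_given_topic = W / np.where(scale > 0, scale, 1.0)
```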