Non-negative matrix factorization

Usually all three matrices are required to be [[non-negative matrix|non-negative]], i.e., all their elements must be greater than or equal to zero.
NMF can thus be regarded as a non-negative analogue of the [[singular value decomposition]].
Usually the number of columns of <b>W</b> and the number of rows of <b>H</b> are chosen so that the product <b>WH</b> becomes an approximation to <b>X</b>; since the factorization is generally only approximate, it has been suggested that the NMF model should instead be called ''nonnegative matrix approximation''.
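The approximation above can be sketched with the multiplicative update rules of Lee and Seung (cited in the reference list below) for minimizing the Frobenius norm ||'''X''' − '''WH'''||. This is a minimal illustrative sketch, not a production implementation; the matrix sizes, the rank ''r'', the iteration count, and the small constant guarding against division by zero are all assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6, 5))          # non-negative data matrix to factorize
r = 3                           # chosen rank of the approximation (illustrative)
W = rng.random((6, r))          # random non-negative initialization
H = rng.random((r, 5))

err0 = np.linalg.norm(X - W @ H)   # reconstruction error before fitting

eps = 1e-10                     # guard against division by zero (assumption)
for _ in range(500):
    # Lee & Seung multiplicative updates: element-wise multiply/divide,
    # so W and H stay non-negative throughout
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H)    # error after fitting; smaller than err0
```

Because the updates only multiply by non-negative ratios, non-negativity of '''W''' and '''H''' is preserved automatically, without any explicit projection step.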
 
It was used by a Finnish group of researchers in the mid-1990s under the name ''positive matrix factorization''.
 
There are several types of non-negative matrix factorization; one of them is related to [[probabilistic latent semantic analysis]] and the [[latent class model]].
The different types arise from using different [[cost function]]s (divergence functions) and/or from [[regularization (mathematics)|regularization]] of the '''W''' and/or '''H''' matrices.<ref>[[Inderjit S. Dhillon]], [[Suvrit Sra]], "[http://books.nips.cc/papers/files/nips18/NIPS2005_0203.pdf Generalized Nonnegative Matrix Approximations with Bregman Divergences]", [[NIPS]], 2005.</ref>
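As an illustration of how a different cost function changes the algorithm, the sketch below uses the generalized [[Kullback–Leibler divergence]] instead of the Frobenius norm, with the corresponding multiplicative updates from the Lee–Seung paper cited below. The data shapes, rank, iteration count, and numerical guard are assumptions made for this example.

```python
import numpy as np

def kl_div(X, WH, eps=1e-10):
    """Generalized Kullback-Leibler divergence D(X || WH)."""
    return float(np.sum(X * np.log((X + eps) / (WH + eps)) - X + WH))

rng = np.random.default_rng(1)
X = rng.random((6, 5)) + 0.1    # strictly positive data (illustrative)
r = 3                           # chosen rank (illustrative)
W = rng.random((6, r))
H = rng.random((r, 5))
eps = 1e-10                     # numerical guard (assumption)

d0 = kl_div(X, W @ H)           # divergence before fitting

for _ in range(300):
    # Lee & Seung multiplicative updates for the KL cost:
    # numerators weight by X/(WH), denominators are column/row sums
    WH = W @ H + eps
    H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)
    WH = W @ H + eps
    W *= ((X / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)

d1 = kl_div(X, W @ H)           # divergence after fitting; smaller than d0
```

Only the update ratios change between the two cost functions; the overall alternating multiplicative scheme, and the non-negativity it preserves, stay the same.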
 
== Uniqueness ==
* [[Daniel D. Lee]] and [[H. Sebastian Seung]], "Learning the parts of objects by non-negative matrix factorization", ''[[Nature (journal)|Nature]]'', 401(6755):788–791, October 1999.
* Daniel D. Lee and H. Sebastian Seung, "[http://www.nips.cc/Web/Groups/NIPS/NIPS2000/00papers-pub-on-web/LeeSeung.ps.gz Algorithms for Non-negative Matrix Factorization]", ''[[NIPS|Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference]]'', 556–562, [[MIT Press]], 2001.
 
=== References ===
<references/>
[[Category:Linear algebra]]