Non-negative matrix factorization: Difference between revisions
=== Different cost functions and regularizations ===
There are different types of non-negative matrix factorizations.
The different types arise from using different [[Loss function|cost function]]s for measuring the divergence between {{math|'''V'''}} and {{math|'''WH'''}} and possibly by [[regularization (mathematics)|regularization]] of the {{math|'''W'''}} and/or {{math|'''H'''}} matrices.<ref name="dhillon">{{Cite conference | author = Inderjit S. Dhillon | author-link = Inderjit S. Dhillon | author2 = Suvrit Sra | author2-link = Suvrit Sra | url = https://papers.nips.cc/paper/2757-generalized-nonnegative-matrix-approximations-with-bregman-divergences.pdf | title = Generalized Nonnegative Matrix Approximations with Bregman Divergences | conference = [[Conference on Neural Information Processing Systems|NIPS]] | year = 2005}}</ref>
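As an illustration, one common way to combine a cost function with regularization (here the squared error with {{math|''L''<sub>1</sub>}} penalties on both factors; the penalty weights {{math|''λ''<sub>''W''</sub>}} and {{math|''λ''<sub>''H''</sub>}} are illustrative, and many other penalties are used in practice) is the objective

<math display=block>
\min_{\mathbf{W} \ge 0,\, \mathbf{H} \ge 0} \; \|\mathbf{V} - \mathbf{W}\mathbf{H}\|_F^2 \;+\; \lambda_W \|\mathbf{W}\|_1 \;+\; \lambda_H \|\mathbf{H}\|_1 .
</math>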
 
Two simple divergence functions studied by Lee and Seung are the squared error (or [[Frobenius norm]]) and an extension of the Kullback–Leibler divergence to positive matrices (the original [[Kullback–Leibler divergence]] is defined on probability distributions).
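A minimal sketch of these two cost functions, assuming NumPy and small illustrative matrices (the shapes and random data are arbitrary; a small constant is added to {{math|'''V'''}} to keep all entries strictly positive so the logarithm is defined):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((4, 5)) + 1e-3   # nonnegative data matrix (strictly positive here)
W = rng.random((4, 2))          # nonnegative factor, rank 2
H = rng.random((2, 5))          # nonnegative factor
A = W @ H                       # the approximation WH

# Squared-error cost: the squared Frobenius norm ||V - WH||_F^2
frobenius_cost = np.sum((V - A) ** 2)

# Generalized Kullback-Leibler divergence, extended to nonnegative matrices:
# D(V || WH) = sum_ij ( V_ij * log(V_ij / (WH)_ij) - V_ij + (WH)_ij );
# the extra "- V_ij + (WH)_ij" terms vanish when both arguments sum to 1,
# recovering the usual KL divergence on probability distributions.
kl_cost = np.sum(V * np.log(V / A) - V + A)

print(frobenius_cost, kl_cost)
```

Both quantities are nonnegative and equal zero exactly when {{math|'''V''' {{=}} '''WH'''}}; NMF algorithms iteratively update {{math|'''W'''}} and {{math|'''H'''}} to drive the chosen cost down.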