Non-negative matrix factorization

This is an old revision of this page, as edited by 132.246.126.133 (talk) at 15:42, 15 March 2007.

NMF redirects here. For the bridge convention, see new minor forcing.

Non-negative matrix factorization (NMF) is a group of algorithms in multivariate analysis and linear algebra where a matrix, $\mathbf{X}$, is factorized into (usually) two matrices, $\mathbf{W}$ and $\mathbf{H}$:

$\mathbf{X} = \mathbf{W}\mathbf{H}$

Factorization of matrices is generally non-unique, and a number of different methods have been developed (e.g., principal component analysis and singular value decomposition) by incorporating different constraints. Non-negative matrix factorization differs from these methods in that it enforces the constraint that all three matrices be non-negative, i.e., that all elements be greater than or equal to zero.

Usually the number of columns of W and the number of rows of H in NMF are selected so that the product WH becomes an approximation to X (it has been suggested that the NMF model should instead be called non-negative matrix approximation). The full decomposition of X then amounts to the two non-negative matrices W and H as well as a residual U:

$\mathbf{X} = \mathbf{W}\mathbf{H} + \mathbf{U}$

The elements of the residual matrix can be either negative or positive, at least in the typical application of NMF.
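The approximation above is typically computed iteratively. As a minimal sketch (one of several possible algorithms), the Lee–Seung multiplicative update rules for the squared-error (Frobenius) cost can be written in a few lines of NumPy; the function name and the toy matrix below are illustrative, not taken from the cited sources:

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9, seed=0):
    """Factorize a non-negative X (m x n) into W (m x r) and H (r x n)
    with Lee-Seung multiplicative updates for the Frobenius norm."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))          # random non-negative initialization
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative by construction;
        # eps guards against division by zero.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy example: X has rank 2, so a rank-2 factorization fits it closely.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(X, r=2)
print(np.linalg.norm(X - W @ H))  # residual norm; small for this rank-2 X
```

Because the updates only ever multiply by non-negative factors, non-negativity of W and H is preserved at every iteration, which is the key property distinguishing this scheme from unconstrained least-squares fitting.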

History

Early research on non-negative matrix factorizations was performed by a Finnish group of researchers in the middle of the 1990s under the name positive matrix factorization.[1][2] It became more widely known as non-negative matrix factorization after Lee and Seung investigated the properties of the algorithm and published some simple and useful algorithms for two types of factorizations.[3][4]

Types

There are different types of non-negative matrix factorizations. The different types arise from using different cost functions (divergence functions) and/or by regularization of the W and/or H matrices.[5]
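As an illustration of how the choice of cost function changes the resulting algorithm, the following sketch applies multiplicative updates for the generalized Kullback–Leibler divergence, which differ from the update rules obtained under a squared-error cost. The function name and the example matrix are illustrative assumptions:

```python
import numpy as np

def nmf_kl(X, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates minimizing the generalized KL divergence
    between a non-negative X and the product W @ H."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        WH = W @ H + eps
        # Column factors are normalized by the column sums of W ...
        H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        # ... and row factors by the row sums of H.
        W *= ((X / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

# Toy example: a rank-1 matrix is fitted closely with r = 1.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
W, H = nmf_kl(X, r=1)
print(np.linalg.norm(X - W @ H))  # residual norm; small for this rank-1 X
```

The structural similarity of the two update schemes, despite the different divergences, is what makes the family of NMF algorithms easy to extend to other cost functions, as in the Bregman-divergence generalization cited above.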

Relation to other techniques

The initial paper by Lee and Seung proposed NMF mainly for parts-based decomposition of images. It compared NMF to vector quantization and principal component analysis and showed that, although all three techniques may be written as factorizations, they implement different constraints and therefore produce different results.

It was later shown that NMF is an instance of a more general probabilistic model called "multinomial PCA".[6] When NMF is obtained by minimizing the Kullback–Leibler divergence, it is in fact equivalent to another instance of multinomial PCA, probabilistic latent semantic analysis,[7][8] trained by maximum likelihood estimation. That method is commonly used for analyzing and clustering textual data and is also related to the latent class model.

It was also shown[9] that when the Frobenius norm is used as a divergence, NMF is equivalent to a relaxed form of K-means clustering: matrix factor W contains cluster centroids and H contains cluster membership indicators. This also justifies the use of NMF for data clustering.

NMF extends beyond matrices to tensors of arbitrary order.[10]


Uniqueness

The factorization is not unique: a matrix and its inverse can be used to transform the two factorization matrices by, e.g.,

$\mathbf{W}\mathbf{H} = \mathbf{W}\mathbf{B}\mathbf{B}^{-1}\mathbf{H}$

If the two new matrices $\tilde{\mathbf{W}} = \mathbf{W}\mathbf{B}$ and $\tilde{\mathbf{H}} = \mathbf{B}^{-1}\mathbf{H}$ are non-negative, they form another parametrization of the factorization.

The non-negativity of $\tilde{\mathbf{W}}$ and $\tilde{\mathbf{H}}$ applies at least if $\mathbf{B}$ is a non-negative monomial matrix. In this simple case the transformation corresponds to a scaling and a permutation.
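A small numerical illustration of this non-uniqueness, using a positive diagonal B (a special case of a non-negative monomial matrix); the matrices here are arbitrary examples:

```python
import numpy as np

# Two non-negative factors of some product X = W @ H.
W = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
H = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# A positive diagonal B: both B and its inverse are non-negative.
B = np.diag([2.0, 0.5])

# Transformed factors W @ B and B^-1 @ H remain non-negative ...
W2 = W @ B
H2 = np.linalg.inv(B) @ H

# ... and reproduce exactly the same product, W B B^-1 H = W H.
print(np.allclose(W @ H, W2 @ H2))  # True
```

Choosing B as a permutation matrix instead would reorder the columns of W and the rows of H in lockstep, again leaving the product unchanged, which is why NMF solutions are at best unique up to scaling and permutation.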

More control over the non-uniqueness of NMF is obtained with sparsity constraints.[11]

References

  1. ^ P. Paatero, U. Tapper (1994). "Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values". Environmetrics. 5: 111–126. doi:10.1002/env.3170050203.
  2. ^ Pia Anttila, Pentti Paatero, Unto Tapper, Olli Järvinen (1995). "Source identification of bulk wet deposition in Finland by positive matrix factorization". Atmospheric Environment. 29 (14): 1705–1718. doi:10.1016/1352-2310(94)00367-T.
  3. ^ Daniel D. Lee and H. Sebastian Seung (1999). "Learning the parts of objects by non-negative matrix factorization", Nature 401(6755), pp. 788-791.
  4. ^ Daniel D. Lee and H. Sebastian Seung (2001). "Algorithms for Non-negative Matrix Factorization", Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference, pp. 556-562, MIT Press.
  5. ^ Inderjit S. Dhillon, Suvrit Sra, "Generalized Nonnegative Matrix Approximations with Bregman Divergences", NIPS, 2005.
  6. ^ Wray Buntine, "Extensions to EM and Multinomial PCA", Proc. European Conference on Machine Learning (ECML-02), LNAI 2430, pp. 23-34, 2002.
  7. ^ Eric Gaussier and Cyril Goutte (2005). "Relation between PLSA and NMF and Implications", Proc. 28th international ACM SIGIR conference on Research and development in information retrieval (SIGIR-05), pp. 601-602.
  8. ^ Chris Ding, Tao Li, Wei Peng (2006). "Nonnegative Matrix Factorization and Probabilistic Latent Semantic Indexing: Equivalence, Chi-square Statistic, and a Hybrid Method", Proc. of AAAI National Conf. on Artificial Intelligence (AAAI-06).
  9. ^ Chris Ding, Xiaofeng He, and Horst D. Simon (2005). "On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering". Proc. SIAM Int'l Conf. Data Mining, pp. 606-610.
  10. ^ Max Welling and Markus Weber (2001). "Positive Tensor Factorization", Pattern Recognition Letters, 22(12), pp. 1255-1261.
  11. ^ Julian Eggert, Edgar Körner, "Sparse coding and NMF", Proceedings. 2004 IEEE International Joint Conference on Neural Networks, 2004., pp. 2529-2533, 2004.