Non-negative matrix factorization

This is an old revision of this page, as edited by ChrisDing (talk | contribs) at 04:24, 27 January 2007. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

NMF redirects here. For the bridge convention, see new minor forcing.

Non-negative matrix factorization (NMF) is a group of algorithms in multivariate analysis and linear algebra where a matrix, X, is factorized into (usually) two matrices, W and H:

X ≈ WH

Factorization of matrices is generally non-unique, and a number of different methods of doing so have been developed (e.g. principal component analysis and singular value decomposition) by incorporating different constraints; non-negative matrix factorization differs from these methods in that it enforces the constraint that all three matrices must be non-negative, i.e., all elements must be equal to or greater than zero.

Usually the number of columns of W and the number of rows of H in NMF are selected so that the product WH becomes an approximation to X (it has been suggested that the NMF model should instead be called nonnegative matrix approximation). The full decomposition of X then amounts to the two non-negative matrices W and H as well as a residual U:

X = WH + U

The elements of the residual matrix U can be either negative or positive - at least in the typical application of NMF.

Early research on non-negative matrix factorizations was performed by a Finnish group of researchers in the middle of the 1990s under the name positive matrix factorization. It became more widely known after Lee and Seung investigated the properties of the algorithm and published a simple, useful algorithm.
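The simple algorithm referred to above is a pair of multiplicative update rules that alternately rescale W and H. A minimal sketch for the Frobenius-norm objective (the function name, dimensions, and the small eps guard against division by zero are illustrative choices, not part of the original formulation):

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9):
    """Approximate a non-negative m x n matrix X as W @ H,
    with W (m x r) and H (r x n) kept non-negative throughout."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates: non-negative factors stay non-negative
        # because each step multiplies by a ratio of non-negative terms.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.random.default_rng(1).random((6, 5))  # non-negative test matrix
W, H = nmf(X, r=2)
U = X - W @ H  # residual; its entries may have either sign
```

Because the updates never subtract, no explicit projection onto the non-negative orthant is needed; this is what makes the rules attractively simple.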

There are different types of non-negative matrix factorization; one of these is related to probabilistic latent semantic analysis and the latent class model. The different types arise from using different cost functions (divergence functions) and/or from regularization of the W and/or H matrices[1].
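The two cost functions from Lee and Seung's original formulation, the squared Frobenius norm and the generalized Kullback–Leibler divergence, can be sketched as follows (the eps guard against log(0) is an implementation detail added here):

```python
import numpy as np

def frobenius_cost(X, W, H):
    # Squared Frobenius norm of the residual, ||X - WH||_F^2.
    return float(np.sum((X - W @ H) ** 2))

def generalized_kl(X, W, H, eps=1e-12):
    # Generalized Kullback-Leibler divergence D(X || WH); reduces to the
    # ordinary KL divergence when X and WH each sum to one.
    WH = W @ H
    return float(np.sum(X * np.log((X + eps) / (WH + eps)) - X + WH))

W = np.array([[1.0], [2.0]])
H = np.array([[3.0, 4.0]])
X = W @ H  # exact rank-1 factorization: both costs vanish
```

Minimizing each cost leads to a different pair of multiplicative update rules, which is one source of the different NMF variants mentioned above.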

Relation to clustering

Although NMF was initially considered to be different from vector quantization (K-means clustering), it was later shown [2] that NMF is equivalent to a relaxed K-means clustering with the Frobenius norm as objective function, where W contains the cluster centroids and H the cluster indicators; NMF therefore provides a new framework for data clustering. It is also known [3] that NMF with the Kullback–Leibler divergence as objective function is identical to probabilistic latent semantic analysis, and that this divergence simplifies to the chi-square statistic in a first-order approximation.
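The clustering reading can be illustrated numerically: on data with two well-separated non-negative groups, interpreting the largest entry of each column of H as a cluster label typically recovers the grouping. A sketch (the data, seed, and update loop here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated groups of non-negative samples (columns of X):
# group A lives on the first two coordinates, group B on the last two.
A = np.vstack([rng.uniform(1, 2, (2, 5)), np.zeros((2, 5))])
B = np.vstack([np.zeros((2, 5)), rng.uniform(1, 2, (2, 5))])
X = np.hstack([A, B])  # 4 x 10

# Rank-2 NMF via multiplicative updates (Frobenius objective).
W = rng.random((4, 2)) + 1e-9
H = rng.random((2, 10)) + 1e-9
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

# Read H as relaxed cluster indicators: assign each sample (column)
# to the component with the largest coefficient.
labels = H.argmax(axis=0)
```

Unlike K-means, the indicator H is continuous rather than hard, which is exactly the relaxation noted in [2]; thresholding by argmax recovers a hard assignment.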

Uniqueness

The factorization is not unique: an invertible matrix B and its inverse can be used to transform the two factorization matrices by, e.g.,

WH = (WB)(B⁻¹H)

If the two new matrices WB and B⁻¹H are non-negative, they form another parametrization of the factorization.

The non-negativity of WB and B⁻¹H applies at least if B is a non-negative monomial matrix. In this simple case the transformation corresponds to a scaling and a permutation.
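A quick numerical check of the monomial-matrix case (the matrices below are arbitrary examples chosen for illustration):

```python
import numpy as np

# A non-negative monomial matrix: exactly one positive entry per row
# and column, i.e. a permutation combined with positive scalings.
B = np.array([[0.0, 2.0],
              [3.0, 0.0]])
B_inv = np.linalg.inv(B)  # also a non-negative monomial matrix

rng = np.random.default_rng(0)
W = rng.random((4, 2))
H = rng.random((2, 3))

W2 = W @ B        # columns of W permuted and rescaled
H2 = B_inv @ H    # rows of H permuted and inversely rescaled
```

The product is unchanged while both transformed factors remain non-negative, so (W2, H2) is an equally valid NMF of the same matrix.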

More control over the non-uniqueness of NMF is obtained with sparsity constraints[4].

References

  1. ^ Inderjit S. Dhillon, Suvrit Sra, "Generalized Nonnegative Matrix Approximations with Bregman Divergences", NIPS, 2005.
  2. ^ Chris Ding, Xiaofeng He, and Horst D. Simon. "On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering". Proc. SIAM Int'l Conf. Data Mining (SDM'05), pp:606-610, April 2005.
  3. ^ Chris Ding and Xiaofeng He, "Nonnegative Matrix Factorization and Probabilistic Latent Semantic Indexing: Equivalence, Chi-square Statistic, and a Hybrid Method", Proc. AAAI National Conf. on Artificial Intelligence (AAAI-06), July 2006.
  4. ^ Julian Eggert, Edgar Körner, "Sparse coding and NMF", Proc. 2004 IEEE International Joint Conference on Neural Networks, pp. 2529-2533, 2004.