CUR matrix approximation
A '''CUR matrix approximation''' is a set of three [[matrix (mathematics)|matrices]] that, when multiplied together, closely approximate a given matrix.<ref name=mahoney>{{cite web|title=CUR matrix decompositions for improved data analysis|url=http://www.pnas.org/content/106/3/697.full|accessdate=26 June 2012|author=Michael W. Mahoney|author2=Petros Drineas}}</ref><ref>{{cite conference|title=Optimal CUR matrix decompositions| conference = STOC '14 Proceedings of the forty-sixth annual ACM symposium on Theory of Computing|last1= Boutsidis |first1= Christos |last2=Woodruff|first2=David P.|year=2014}}</ref><ref>{{cite conference|title=Low Rank Approximation with Entrywise L1-Norm Error| conference = STOC '17 Proceedings of the forty-ninth annual ACM symposium on Theory of Computing|last1=Song|first1=Zhao|last2=Woodruff|first2=David P.|last3=Zhong|first3=Peilin|year=2017| arxiv = 1611.00898}}</ref> A CUR approximation can be used in the same way as the [[low-rank approximation]] of the [[singular value decomposition]] (SVD). CUR approximations are less accurate than the SVD, but they offer two key advantages, both stemming from the fact that the rows and columns come from the original matrix (rather than from the left and right singular vectors):
 
* There are methods to calculate it with lower asymptotic time complexity versus the SVD.
The CUR matrix approximation is often{{citation needed|date=November 2012}} used in place of the low-rank approximation of the SVD in [[principal component analysis]]. The CUR is less accurate, but its factors are directly interpretable: the columns of ''C'' and the rows of ''R'' are taken from ''A'' itself. In PCA, each column of ''A'' contains a data sample; thus, the matrix ''C'' consists of a subset of actual data samples. This is much easier to interpret than the SVD's left singular vectors, which represent the data in a rotated space. Similarly, the matrix ''R'' consists of a subset of variables measured for each data sample. This is easier to comprehend than the SVD's right singular vectors, which represent another rotation of the data.
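This interpretability can be seen concretely. The following is a minimal NumPy sketch (the data matrix ''A'' and the chosen column indices are made-up illustrations, not taken from any source): each column of ''C'' is literally one of the original data samples, whereas the SVD's left singular vectors are dense linear combinations of all samples.

```python
import numpy as np

# Hypothetical data matrix A: each column is one data sample
# (4 measured variables, 5 samples) -- illustrative values only.
A = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [2.0, 4.0, 6.0, 8.0, 10.0],
              [0.0, 1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0, 1.0, 1.0]])

# SVD basis: each left singular vector mixes all five samples.
U_svd, _, _ = np.linalg.svd(A, full_matrices=False)

# CUR basis: C is built from actual columns (samples) of A,
# here the (arbitrarily chosen) samples 0 and 2.
C = A[:, [0, 2]]

# The first column of C is exactly the first data sample.
print(np.array_equal(C[:, 0], A[:, 0]))  # True
```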
 
==Mathematical definition==
 
Hamm and Huang<ref>{{cite journal|last1=Hamm|first1=Keaton|last2=Huang|first2=Longxiu|title=Perspectives on CUR decompositions|journal=Applied and Computational Harmonic Analysis|volume=48|issue=3|pages=1088–1099|year=2020}}</ref> give the following theorem describing the basics of a CUR decomposition of a matrix <math>L</math> with rank <math>r</math>:
Theorem: Consider row and column indices <math>I, J \subseteq [n]</math> with <math>|I|, |J| \ge r</math>.
Denote submatrices <math>C = L_{:,J},</math> <math>U = L_{I,J}</math> and <math>R = L_{I,:}</math>.
If <math>\operatorname{rank}(U) = \operatorname{rank}(L)</math>, then <math>L = CU^+R</math>, where <math>(\cdot)^+</math> denotes the [[Moore–Penrose pseudoinverse]].
 
In other words, if <math>L</math> has low rank, we can take a submatrix <math>U = L_{I,J}</math> of the same rank, together with the corresponding rows <math>R</math> and columns <math>C</math> of <math>L</math>, and use them to reconstruct <math>L</math> exactly.
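The theorem can be verified numerically. The following is a minimal NumPy sketch (the random rank-2 matrix and the particular index sets are illustrative assumptions, not from the source): we build a low-rank <math>L</math>, select rows <math>I</math> and columns <math>J</math>, and check that <math>CU^+R</math> reconstructs <math>L</math> when the rank condition holds.

```python
import numpy as np

# Construct a 6x5 matrix L of rank 2 (illustrative random example).
rng = np.random.default_rng(0)
L = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))

# Choose row indices I and column indices J with |I|, |J| >= rank(L).
I = [0, 1, 2]
J = [0, 1, 2]

C = L[:, J]             # selected columns of L
R = L[I, :]             # selected rows of L
U = L[np.ix_(I, J)]     # intersection submatrix L_{I,J}

# If rank(U) == rank(L), the theorem gives L = C @ pinv(U) @ R.
L_hat = C @ np.linalg.pinv(U) @ R
print(np.allclose(L, L_hat))  # True when the rank condition holds
```

Note that the pseudoinverse handles the fact that <math>U</math> is a 3×3 matrix of rank 2; an ordinary inverse would not exist here, which is why the theorem is stated with <math>U^+</math>.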