The CUR matrix approximation is often{{citation needed|date=November 2012}} used in place of the low-rank approximation of the SVD in [[principal component analysis]]. The CUR is less accurate, but the columns of the matrix ''C'' and the rows of the matrix ''R'' are taken directly from ''A''. In PCA, each column of ''A'' contains a data sample; thus, the matrix ''C'' is made of a subset of data samples. This is much easier to interpret than the SVD's left singular vectors, which represent the data in a rotated space. Similarly, the matrix ''R'' is made of a subset of variables measured for each data sample. This is easier to comprehend than the SVD's right singular vectors, which represent another rotation of the data in space.
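A minimal NumPy sketch of this contrast follows; the data, the chosen indices, and the middle matrix <math>U = C^+ A R^+</math> are illustrative choices, not part of any particular algorithm:

<syntaxhighlight lang="python">
import numpy as np

# Toy data matrix: each column is a data sample, each row a variable.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))

# Rank-2 truncated SVD: the factors mix all samples and variables,
# so they describe the data in a rotated, harder-to-interpret space.
W, s, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = W[:, :2] * s[:2] @ Vt[:2, :]

# A CUR approximation keeps actual columns (samples) and rows
# (variables) of A, so its factors stay directly interpretable.
C = A[:, [0, 3]]                                 # two real samples
R = A[[1, 4], :]                                 # two real variables
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)    # one common middle factor
A_cur = C @ U @ R

# The truncated SVD error is optimal; CUR trades accuracy for interpretability.
print(np.linalg.norm(A - A_svd), np.linalg.norm(A - A_cur))
</syntaxhighlight>
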
==Matrix CUR==
Hamm and Huang<ref>{{cite journal|last1=Hamm|first1=Keaton|last2=Huang|first2=Longxiu|title=Perspectives on CUR decompositions|journal=Applied and Computational Harmonic Analysis|volume=48|issue=3|pages=1088–1099|year=2020}}</ref> and Aldroubi et al.<ref>{{cite journal|last1=Aldroubi|first1=Akram|last2=Hamm|first2=Keaton|last3=Koku|first3=Ahmet Bugra|last4=Sekmen|first4=Ali|title=CUR decompositions, similarity matrices, and subspace clustering|journal=Frontiers in Applied Mathematics and Statistics|year=2019|publisher=Frontiers Media SA}}</ref> describe the following theorem, which outlines a CUR decomposition of a matrix <math>L</math> with rank <math>r</math>:
 
Theorem: Consider row and column indices <math>I, J \subseteq [n]</math> with <math>|I|, |J| \ge r</math>. Denote submatrices <math>C = L_{:,J},</math> <math>U = L_{I,J}</math> and <math>R = L_{I,:}</math>. If rank(<math>U</math>) = rank(<math>L</math>), then <math>L = CU^+R</math>, where <math>(\cdot)^+</math> denotes the [[Moore–Penrose pseudoinverse]].
 
In other words, if <math>L</math> has low rank, we can take a sub-matrix <math>U = L_{I,J}</math> of the same rank, together with the corresponding rows <math>R</math> and columns <math>C</math> of <math>L</math>, and use them to reconstruct <math>L</math> exactly.
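
A minimal NumPy sketch of the theorem follows (the size, rank, and index sets are arbitrary illustrations; for a random low-rank <math>L</math>, the rank condition on <math>U</math> holds generically):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 3

# Build a rank-r matrix L.
L = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Pick row indices I and column indices J with |I|, |J| >= r.
I = [0, 2, 5, 7]
J = [1, 3, 4, 6]
C = L[:, J]            # selected columns
R = L[I, :]            # selected rows
U = L[np.ix_(I, J)]    # intersection submatrix

# If rank(U) == rank(L), the theorem gives exact reconstruction.
assert np.linalg.matrix_rank(U) == np.linalg.matrix_rank(L)
L_rec = C @ np.linalg.pinv(U) @ R
print(np.allclose(L, L_rec))   # True, up to floating-point error
</syntaxhighlight>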
 
==Tensor CUR==
Tensor-CURT decomposition<ref>{{cite arXiv|title=Relative Error Tensor Low Rank Approximation|eprint = 1704.08246|last1=Song|first1=Zhao|last2=Woodruff|first2=David P.|last3=Zhong|first3=Peilin|class=cs.DS|year=2017}}</ref>
is a generalization of the matrix-CUR decomposition. Formally, a CURT tensor approximation of a tensor ''A'' consists of three matrices ''C'', ''R'', ''T'' and a (core-)tensor ''U'' such that ''C'' is made from columns of ''A'', ''R'' is made from rows of ''A'', ''T'' is made from tubes of ''A'', and the product ''U(C,R,T)'' (whose <math>i,j,l</math>-th entry is <math>\sum_{i',j',l'}U_{i',j',l'}C_{i,i'}R_{j,j'}T_{l,l'}</math>) closely approximates ''A''. Usually the CURT is selected to be a [[Rank (linear algebra)|rank]]-''k'' approximation, which means that ''C'' contains ''k'' columns of ''A'', ''R'' contains ''k'' rows of ''A'', ''T'' contains ''k'' tubes of ''A'', and ''U'' is a ''k''-by-''k''-by-''k'' (core-)tensor.
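
A minimal NumPy sketch of just the reconstruction product ''U(C,R,T)'' follows, with random stand-ins for the factors; in a real CURT approximation, ''C'', ''R'' and ''T'' would hold actual columns, rows and tubes of ''A'':

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2

# Factor matrices and a k-by-k-by-k core tensor (random stand-ins here).
C = rng.standard_normal((n, k))
R = rng.standard_normal((n, k))
T = rng.standard_normal((n, k))
U = rng.standard_normal((k, k, k))

# The product U(C, R, T): entry (i, j, l) equals
# sum over i', j', l' of U[i', j', l'] * C[i, i'] * R[j, j'] * T[l, l'].
approx = np.einsum('abc,ia,jb,lc->ijl', U, C, R, T)
print(approx.shape)   # (5, 5, 5)
</syntaxhighlight>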
==Algorithms==
 
One algorithm, due to Drineas, Kannan and Mahoney, picks the column set ''J'' and the row set ''I'' by random sampling with probabilities proportional to the squared Euclidean norms of the columns and rows of ''A''. Taking <math>|J| \approx k /\varepsilon^4</math> and <math>|I| \approx k / \varepsilon^2</math>, where <math>\varepsilon > 0</math>, the algorithm achieves the Frobenius-norm error bound <math>\|A - CUR\|_F \le \|A - A_k\|_F + \varepsilon \|A\|_F</math>, where <math>A_k</math> is the optimal rank-''k'' approximation.
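
A simplified NumPy sketch of this sampling scheme follows. The data and sizes are illustrative, and the middle matrix is formed as <math>U = C^+ A R^+</math> (the minimizer of <math>\|A - CUR\|_F</math> for the chosen <math>C</math> and <math>R</math>) rather than by the cited algorithm's construction, which also rescales the sampled columns and rows:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
m, n, k, eps = 100, 80, 5, 0.8

# Noisy low-rank data matrix.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
A += 0.1 * rng.standard_normal((m, n))

# Sample columns and rows with probability proportional to squared norms.
p_col = (A ** 2).sum(axis=0)
p_col = p_col / p_col.sum()
p_row = (A ** 2).sum(axis=1)
p_row = p_row / p_row.sum()
J = rng.choice(n, size=int(k / eps ** 4), replace=True, p=p_col)
I = rng.choice(m, size=int(k / eps ** 2), replace=True, p=p_row)

C, R = A[:, J], A[I, :]
# Simple middle matrix: optimal in Frobenius norm for the chosen C and R.
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)

# Relative reconstruction error of the sampled CUR approximation.
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))
</syntaxhighlight>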
 
==See also==