CUR matrix approximation

A '''CUR matrix approximation''' is a set of three [[Matrix (mathematics)|matrices]] that, when multiplied together, closely approximate a given matrix.<ref name=mahoney>{{cite web|title=CUR matrix decompositions for improved data analysis|url=http://www.pnas.org/content/106/3/697.full|accessdate=26 June 2012|author=Michael W. Mahoney|author2=Petros Drineas}}</ref> <ref>{{cite conference|title=Optimal CUR matrix decompositions| conference = STOC '14 Proceedings of the forty-sixth annual ACM symposium on Theory of Computing|last1= Boutsidis |first1= Christos |last2=Woodruff|first2=David P.|year=2014}}</ref> <ref>{{cite conference|title=Low Rank Approximation with Entrywise L1-Norm Error| conference = STOC '17 Proceedings of the forty-ninth annual ACM symposium on Theory of Computing|url=https://arxiv.org/pdf/1611.00898.pdf|last1=Song|first1=Zhao|last2=Woodruff|first2=David P.|last3=Zhong|first3=Peilin|year=2017}}</ref> A CUR approximation can be used in the same way as the [[low-rank approximation]] of the [[Singular value decomposition]] (SVD). CUR approximations are less accurate than the SVD, but they offer two key advantages, both stemming from the fact that the rows and columns come from the original matrix (rather than left and right singular vectors):
 
* There are methods to calculate it with lower asymptotic [[time complexity]] than the SVD.
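As an illustrative sketch only (not any particular published algorithm), a rank-''k'' CUR approximation can be formed by taking ''k'' actual columns and ''k'' actual rows of ''A'' and choosing ''U'' as the product of their pseudoinverses with ''A''. The column and row indices below are picked by hand purely for illustration; practical methods select them by sampling, e.g. according to leverage scores:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 6x8 matrix of exact rank 2, so a rank-2 CUR can recover it.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))

cols = [0, 2]   # hand-picked column indices (illustration only)
rows = [1, 3]   # hand-picked row indices (illustration only)
C = A[:, cols]  # C consists of actual columns of A
R = A[rows, :]  # R consists of actual rows of A

# U = C^+ A R^+ minimizes ||A - C U R||_F for the chosen C and R.
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
A_cur = C @ U @ R

# Since A has rank 2 and the chosen columns/rows span its column/row
# spaces, the reconstruction is exact up to floating-point error here.
error = np.linalg.norm(A - A_cur)
```

With data of higher rank or less fortunate index choices the error would be nonzero, which is where the sampling strategies in the cited papers come in.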
 
==Tensor==
Tensor-CURT decomposition<ref>{{cite arXiv|title=Relative Error Tensor Low Rank Approximation|eprint=1704.08246|last1=Song|first1=Zhao|last2=Woodruff|first2=David P.|last3=Zhong|first3=Peilin|class=cs.DS|year=2017}}</ref>
is a generalization of matrix-CUR decomposition. Formally, a CURT tensor approximation of a tensor ''A'' consists of three matrices ''C'', ''R'', ''T'' and a (core-)tensor ''U'' such that ''C'' is made from columns of ''A'', ''R'' is made from rows of ''A'', ''T'' is made from tubes of ''A'', and the product ''U(C,R,T)'' (whose <math>i,j,l</math>-th entry is <math>\sum_{i',j',l'}U_{i',j',l'}C_{i,i'}R_{j,j'}T_{l,l'} </math>) closely approximates ''A''. Usually the CURT is selected to be a [[Rank (linear algebra)|rank]]-''k'' approximation, which means that ''C'' contains ''k'' columns of ''A'', ''R'' contains ''k'' rows of ''A'', ''T'' contains ''k'' tubes of ''A'', and ''U'' is a ''k''-by-''k''-by-''k'' (core-)tensor.
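The product ''U(C,R,T)'' defined above is a triple contraction of the core tensor with the three factor matrices. A minimal numerical sketch (using arbitrary random factors, purely to exercise the index pattern of the formula):

```python
import numpy as np

n1, n2, n3, k = 4, 5, 6, 2
rng = np.random.default_rng(1)
# In a real CURT approximation C, R, T would be columns, rows, and
# tubes of A; here they are random placeholders of the right shapes.
C = rng.standard_normal((n1, k))  # n1 x k
R = rng.standard_normal((n2, k))  # n2 x k
T = rng.standard_normal((n3, k))  # n3 x k
U = rng.standard_normal((k, k, k))  # k x k x k core tensor

# Entry (i,j,l) of U(C,R,T) is sum over i',j',l' of
# U[i',j',l'] * C[i,i'] * R[j,j'] * T[l,l'].
approx = np.einsum('abc,ia,jb,lc->ijl', U, C, R, T)

# Sanity check: one entry computed via the definitional triple sum.
i, j, l = 1, 2, 3
val = sum(U[a, b, c] * C[i, a] * R[j, b] * T[l, c]
          for a in range(k) for b in range(k) for c in range(k))
assert np.isclose(approx[i, j, l], val)
```

The `einsum` index string mirrors the summation in the formula: the primed indices <math>i',j',l'</math> become the contracted labels `a`, `b`, `c`.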
 
==See also==