A '''CUR matrix approximation''' is a set of three [[matrix (mathematics)|matrices]] that, when multiplied together, closely approximate a given matrix.<ref name=mahoney>{{cite journal|title=CUR matrix decompositions for improved data analysis|author=Michael W. Mahoney|author2=Petros Drineas|journal=Proceedings of the National Academy of Sciences |date=2009 |volume=106 |issue=3 |pages=697–702 |doi=10.1073/pnas.0803205106 |doi-access=free |pmid=19139392 |pmc=2630100 |bibcode=2009PNAS..106..697M }}</ref><ref>{{cite conference|title=Optimal CUR matrix decompositions| conference = STOC '14 Proceedings of the forty-sixth annual ACM symposium on Theory of Computing|last1= Boutsidis |first1= Christos |last2=Woodruff|first2=David P.|year=2014}}</ref><ref>{{cite conference|title=Low Rank Approximation with Entrywise L1-Norm Error| conference = STOC '17 Proceedings of the forty-ninth annual ACM symposium on Theory of Computing|last1=Song|first1=Zhao|last2=Woodruff|first2=David P.|last3=Zhong|first3=Peilin|year=2017| arxiv = 1611.00898}}</ref> A CUR approximation can be used in the same way as the [[low-rank approximation]] of the [[singular value decomposition]] (SVD). CUR approximations are less accurate than the SVD, but they offer two key advantages, both stemming from the fact that the rows and columns come from the original matrix (rather than left and right singular vectors):
* There are methods to compute it with lower asymptotic time complexity than the SVD.
The authors show that by taking <math>|J| \approx k /\varepsilon^4</math> and <math>|I| \approx k / \varepsilon^2</math> for any <math>\varepsilon > 0</math>, the algorithm achieves the Frobenius-norm error bound <math>\|A - CUR\|_F \le \|A - A_k\|_F + \varepsilon \|A\|_F</math>, where <math>A_k</math> is the optimal rank-''k'' approximation.
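As an illustration of the general idea (not the specific algorithm analyzed above), a CUR approximation can be sketched by sampling columns and rows with probability proportional to their squared norms and then choosing <math>U</math> to minimize the Frobenius error; the function name and sampling scheme below are illustrative assumptions:

```python
import numpy as np

def cur_approximation(A, c, r, seed=None):
    """Illustrative randomized CUR sketch.

    Samples c columns and r rows with probability proportional to their
    squared Euclidean norms (one common sampling scheme; published
    algorithms may use other distributions, e.g. leverage scores).
    """
    rng = np.random.default_rng(seed)
    total = np.sum(A**2)
    # Sample column indices J and row indices I by squared-norm probabilities.
    col_p = np.sum(A**2, axis=0) / total
    row_p = np.sum(A**2, axis=1) / total
    J = rng.choice(A.shape[1], size=c, replace=False, p=col_p)
    I = rng.choice(A.shape[0], size=r, replace=False, p=row_p)
    C = A[:, J]          # actual columns of A
    R = A[I, :]          # actual rows of A
    # Given C and R, the Frobenius-optimal middle factor is U = C^+ A R^+.
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# Usage: a rank-5 matrix is recovered almost exactly once the sampled
# columns and rows span its column and row spaces.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
C, U, R = cur_approximation(A, c=20, r=10, seed=1)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Because `C` and `R` consist of actual columns and rows of `A`, they inherit properties such as sparsity and interpretability, which is the advantage of CUR noted above.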
==See also==