Higher-order singular value decomposition: Difference between revisions

Line 12:
|___location=Copenhagen, Denmark
|year=2002
|url= https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=67eb22ae0baf86af77188fc0ab27edacf07a9140}}</ref><ref name=Vasilescu2003/><ref name=":Vasilescu2005">{{cite conference
|author=M. A. O. Vasilescu, D. Terzopoulos
|title=Multilinear Independent Component Analysis
Line 60:
 
=== Classic computation ===
While De Lathauwer ''et al.'' clarified Tucker's concepts through two influential papers, Vasilescu and Terzopoulos synthesized the ideas into an elegant two-step algorithm whose simplicity belies the complexity it resolves. The Tucker algorithm and the De Lathauwer ''et al.''<ref name=":2"/> companion algorithm are sequential, relying on iterative methods such as gradient descent or the power method. In contrast, the '''M-mode SVD''' computes the orthonormal subspaces in closed form; it can be executed sequentially, but it is also well-suited for parallel computation.
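The two-step structure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation: each factor matrix holds the left singular vectors of one mode's unfolding, and the core tensor is obtained by projecting every mode onto its subspace. Because the per-mode SVDs are independent of one another, the loop over modes can be parallelized.

```python
import numpy as np

def m_mode_svd(A, ranks=None):
    """M-mode SVD (HOSVD) sketch: one SVD per mode, then project to get the core.

    A      : an M-way NumPy array.
    ranks  : optional list of multilinear ranks (truncates each U_m); None keeps all.
    Returns (core tensor S, list of factor matrices U_m).
    """
    M = A.ndim
    factors = []
    for m in range(M):
        # Mode-m unfolding: bring axis m to the front and flatten the rest.
        A_m = np.moveaxis(A, m, 0).reshape(A.shape[m], -1)
        U, _, _ = np.linalg.svd(A_m, full_matrices=False)
        if ranks is not None:
            U = U[:, :ranks[m]]
        factors.append(U)
    # Core tensor: S = A  x_1 U_1^T  x_2 U_2^T  ...  x_M U_M^T
    S = A
    for m, U in enumerate(factors):
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return S, factors
```

With no truncation the factor matrices are orthogonal, so multiplying the core back by each <math>U_m</math> recovers the original tensor exactly.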
 
=== M-mode SVD (also referred to as HOSVD or Tucker)===
Line 72:
 
=== Interlacing computation ===
A strategy that is significantly faster when some or all <math>R_m \ll I_m </math> consists of interlacing the computation of the core tensor and the factor matrices, as follows:<ref name=Vasilescu2003>{{cite conference
|title=Multilinear Subspace Analysis for Image Ensembles
|last1=Vasilescu
Line 101:
* Compute a rank-<math>\bar R_m </math> truncated SVD <math>\mathcal{A}_{[m]} \approx U_m \Sigma_m V^T_m </math>, and store the top <math>\bar R_m </math> left singular vectors <math>U_m \in F^{I_m \times \bar R_m}</math>;
while a '''sequentially truncated M-mode SVD (HOSVD)''' (or '''successively truncated M-mode SVD (HOSVD)''') is obtained by replacing step 2 in the interlaced computation by
* Compute a rank-<math>\bar R_m </math> truncated SVD <math>\mathcal{A}_{[m]}^{m-1} \approx U_m \Sigma_m V^T_m </math>, and store the top <math>\bar R_m </math> left singular vectors <math>U_m \in F^{I_m \times \bar R_m}</math>. Unfortunately, truncation does not result in an optimal solution for the best low multilinear rank optimization problem.<ref name=":2" /><ref name=Vasilescu2003/><ref name=":4" /><ref name=":fist_hosvd" /> However, both the classically and interleaved truncated M-mode SVD/HOSVD result in a '''quasi-optimal''' solution:<ref name=Vasilescu2003/><ref name=":4" /><ref name=":5" /><ref>{{Cite journal|last=Grasedyck|first=L.|date=2010-01-01|title=Hierarchical Singular Value Decomposition of Tensors|journal=SIAM Journal on Matrix Analysis and Applications|volume=31|issue=4|pages=2029–2054|doi=10.1137/090764189|issn=0895-4798|citeseerx=10.1.1.660.8333}}</ref> if <math>\mathcal{\bar A}_t </math> denotes the classically or sequentially truncated M-mode SVD (HOSVD) and <math>\mathcal{\bar A}^* </math> denotes the optimal solution to the best low multilinear rank approximation problem, then<math display="block">\| \mathcal{A} - \mathcal{\bar A}_t \|_F \le \sqrt{M} \| \mathcal{A} - \mathcal{\bar A}^* \|_F; </math>in practice this means that if there exists an optimal solution with a small error, then a truncated M-mode SVD/HOSVD will for many intended purposes also yield a sufficiently good solution.
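The sequentially truncated variant can be sketched as follows; this is an illustrative NumPy implementation under the conventions above, not code from the cited papers. The key difference from the classic computation is that each mode is projected onto its truncated subspace immediately, so every subsequent unfolding acts on a progressively smaller tensor, which is where the speedup when <math>R_m \ll I_m</math> comes from.

```python
import numpy as np

def st_mmode_svd(A, ranks):
    """Sequentially truncated M-mode SVD (HOSVD) sketch.

    After computing mode m's top-R_m left singular vectors, the tensor is
    projected in mode m right away, shrinking the data seen by later modes.
    """
    S = A
    factors = []
    for m in range(A.ndim):
        # Unfold the current (already partially projected) tensor in mode m.
        S_m = np.moveaxis(S, m, 0).reshape(S.shape[m], -1)
        U, _, _ = np.linalg.svd(S_m, full_matrices=False)
        U = U[:, :ranks[m]]           # keep the top R_m left singular vectors
        factors.append(U)
        # Interlacing step: project mode m before moving to mode m + 1.
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return S, factors
```

The returned core has shape <math>\bar R_1 \times \cdots \times \bar R_M</math>, and multiplying it back by the factor matrices gives the quasi-optimal approximation <math>\mathcal{\bar A}_t</math> whose error is bounded by <math>\sqrt{M}</math> times the optimal error, as stated above.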
 
== Applications ==