Although the term '''HOSVD''' was coined by De Lathauwer, the algorithm most commonly referred to in the literature as the '''Tucker''' or Higher-Order Singular Value Decomposition ('''HOSVD''') was originally introduced by Vasilescu and Terzopoulos under the name '''M-mode SVD'''.<ref name=":Vasilescu2002">M. A. O. Vasilescu, D. Terzopoulos (2002), "Multilinear Analysis of Image Ensembles: TensorFaces," Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark. {{Webarchive|url=https://web.archive.org/web/20221229090931/http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf |date=2022-12-29}}</ref><ref name="Vasilescu2003">M. A. O. Vasilescu, D. Terzopoulos (2003), "Multilinear Subspace Analysis of Image Ensembles," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'03), Madison, WI.</ref><ref name=":Vasilescu2005">M. A. O. Vasilescu, D. Terzopoulos (2005), "Multilinear Independent Component Analysis," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA.</ref>
The M-mode SVD is a simple and elegant algorithm suitable for parallel computation, whereas the algorithms developed by Tucker/Kroonenberg and by De Lathauwer ''et al.'' compute the decomposition sequentially.
: This misattribution has had a lasting impact on the scholarly record: it obscures the original source of a widely adopted algorithm, and it complicates efforts to trace the algorithm's development, reproduce results, and properly credit foundational contributions in multilinear algebra and tensor methods.
=== Classic computation ===
The strategy for computing the Multilinear SVD and the M-mode SVD was first introduced by [[L. R. Tucker]] in the 1960s,<ref name="Tucker1963" /> and later extended by [[Lieven De Lathauwer|L. De Lathauwer]] ''et al.''<ref name=":2" /> and by Vasilescu and Terzopoulos.<ref name=":Vasilescu2002" /><ref name=":Vasilescu2005" /> The algorithm widely referred to in the literature as the '''Tucker''' or '''HOSVD''' decomposition was developed by Vasilescu and Terzopoulos under the name '''M-mode SVD'''.<ref name=":Vasilescu2002" /><ref name=":Vasilescu2005" /> The M-mode SVD is a simple and elegant algorithm suitable for parallel computation, while the strategies developed by Tucker/Kroonenberg and by De Lathauwer ''et al.'' are sequential in nature.
|author= L. R. Tucker
|title= Some mathematical notes on three-mode factor analysis
|volume=31
|pages=279–311
|year=1966
}}</ref> or
|title=On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors
|author= Lieven De Lathauwer, Bart De Moor, Joos Vandewalle
=== M-mode SVD (also incorrectly referred to as HOSVD or Tucker) ===
What is commonly referred to as the HOSVD or Tucker decomposition was developed by Vasilescu and Terzopoulos under the name M-mode SVD.<ref name=":Vasilescu2002"/><ref name=Vasilescu2003/><ref name=":Vasilescu2005"/>
* For <math>m=1,\ldots,M</math>, do the following:
# Construct the mode-''m'' flattening <math>\mathcal{A}_{[m]}</math>;
# Compute the (compact) [[singular value decomposition]] <math>\mathcal{A}_{[m]} = U_m \Sigma_m V^T_m</math>, and store the left singular vectors <math>U_m</math>.
* Compute the core tensor <math>\mathcal{S}</math> via the multilinear multiplication <math>\mathcal{S} = \mathcal{A} \times_1 U_1^T \times_2 U_2^T \cdots \times_M U_M^T.</math>
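As an illustration, the per-mode computation can be sketched in NumPy. This is a minimal sketch, not the authors' reference implementation: the function name <code>m_mode_svd</code> is invented for the example, the mode-''m'' flattening is realized with <code>numpy.moveaxis</code> plus <code>reshape</code> (any fixed column ordering yields the same left singular vectors), and the core tensor is formed by contracting each mode with the transposed factor matrix.

```python
import numpy as np

def m_mode_svd(A):
    """Sketch of an M-mode SVD of an M-way array A (illustrative name).

    Returns the factor matrices U_1, ..., U_M and the core tensor S.
    """
    U = []
    for m in range(A.ndim):
        # Mode-m flattening: bring axis m to the front, then matricize.
        A_m = np.moveaxis(A, m, 0).reshape(A.shape[m], -1)
        # Left singular vectors of the mode-m flattening (compact SVD).
        U_m, _, _ = np.linalg.svd(A_m, full_matrices=False)
        U.append(U_m)
    # Core tensor: contract every mode of A with the transpose of its factor.
    S = A
    for m, U_m in enumerate(U):
        S = np.moveaxis(np.tensordot(U_m.T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return U, S
```

Because each of the ''M'' flattenings and SVDs depends only on the original tensor, the loop over modes can be run in parallel, which is the property the text attributes to this algorithm.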
\quad\text{s.t.}\quad \operatorname{multilinear\ rank}(\mathcal{\bar A}) = (\bar R_1, \bar R_2, \ldots, \bar R_M), </math> where <math>(\bar R_1, \bar R_2, \ldots, \bar R_M) \in \mathbb{N}^M </math> is the reduced multilinear rank with <math>1 \le \bar R_m < R_m \le I_m </math>, and <math>\|\cdot\|_F</math> is the [[Frobenius norm]].
A simple idea for trying to solve this optimization problem is to truncate the (compact) SVD in step 2 of either the classic or the interlaced computation. A '''classically truncated M-mode SVD/HOSVD''' is obtained by replacing step 2 in the classic computation by
* Compute a rank-<math>\bar R_m </math> truncated SVD <math>\mathcal{A}_{[m]} \approx U_m \Sigma_m V^T_m </math>, and store the top <math>\bar R_m </math> left singular vectors <math>U_m \in F^{I_m \times \bar R_m}</math>;
while a '''sequentially truncated HOSVD''' (or '''successively truncated M-mode SVD/HOSVD''') is obtained by replacing step 2 in the interlaced computation by
* Compute a rank-<math>\bar R_m </math> truncated SVD <math>\mathcal{A}_{[m]}^{m-1} \approx U_m \Sigma_m V^T_m </math>, and store the top <math>\bar R_m </math> left singular vectors <math>U_m \in F^{I_m \times \bar R_m}</math>.

Unfortunately, truncation does not yield an optimal solution to the best low multilinear rank optimization problem.<ref name=":2" /><ref name=":Vasilescu2002"/><ref name=":4" /><ref name=":fist_hosvd" /> However, both the classically and the sequentially truncated M-mode SVD/HOSVD yield a '''quasi-optimal''' solution:<ref name=":4" /><ref name=":fist_hosvd" /><ref name=":5" /><ref>{{Cite journal|last=Grasedyck|first=L.|date=2010-01-01|title=Hierarchical Singular Value Decomposition of Tensors|journal=SIAM Journal on Matrix Analysis and Applications|volume=31|issue=4|pages=2029–2054|doi=10.1137/090764189|issn=0895-4798|citeseerx=10.1.1.660.8333}}</ref> if <math>\mathcal{\bar A}_t </math> denotes the classically or sequentially truncated M-mode SVD/HOSVD and <math>\mathcal{\bar A}^* </math> denotes the optimal solution to the best low multilinear rank approximation problem, then<math display="block">\| \mathcal{A} - \mathcal{\bar A}_t \|_F \le \sqrt{M} \| \mathcal{A} - \mathcal{\bar A}^* \|_F; </math>in practice this means that if an optimal solution with a small error exists, a truncated M-mode SVD/HOSVD will, for many intended purposes, also yield a sufficiently good solution.
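The sequential truncation can be sketched in NumPy as follows. This is an illustrative sketch, not a reference implementation: the names <code>st_hosvd</code> and <code>reconstruct</code> and the example ranks are assumptions introduced here. Each mode is projected onto its top <math>\bar R_m</math> left singular vectors immediately after its SVD, so every subsequent flattening is computed on an already-compressed core.

```python
import numpy as np

def st_hosvd(A, ranks):
    """Sketch of a sequentially truncated M-mode SVD/HOSVD (illustrative name)."""
    U = []
    S = A
    for m in range(A.ndim):
        # Flatten the partially compressed core along mode m.
        S_m = np.moveaxis(S, m, 0).reshape(S.shape[m], -1)
        # Keep only the top ranks[m] left singular vectors (truncated SVD).
        U_m = np.linalg.svd(S_m, full_matrices=False)[0][:, :ranks[m]]
        U.append(U_m)
        # Project mode m immediately, shrinking all later flattenings.
        S = np.moveaxis(np.tensordot(U_m.T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return U, S

def reconstruct(U, S):
    """Expand a core tensor S by the factor matrices U."""
    A_hat = S
    for m, U_m in enumerate(U):
        A_hat = np.moveaxis(np.tensordot(U_m, np.moveaxis(A_hat, m, 0), axes=1), 0, m)
    return A_hat
```

By the quasi-optimality bound above, the error <math>\| \mathcal{A} - \mathrm{reconstruct}(U, \mathcal{S}) \|_F</math> of this sketch is at most <math>\sqrt{M}</math> times the error of the best multilinear rank-<math>(\bar R_1, \ldots, \bar R_M)</math> approximation; in particular, a tensor whose multilinear rank already equals the target ranks is recovered exactly.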