Higher-order singular value decomposition: Difference between revisions

{{Short description|Tensor decomposition}}
In [[multilinear algebra]], the '''higher-order singular value decomposition''' ('''HOSVD''') of a [[tensor]] is a specific orthogonal [[Tucker decomposition]]. It may be regarded as one type of generalization of the matrix [[singular value decomposition]], although the name is something of a misnomer: the matrix SVD simultaneously yields a rank-''R'' decomposition and orthonormal subspaces for both the row and column spaces, and these defining properties are not retained by any single higher-order decomposition. Instead, they are distributed across two distinct research directions. Harshman, as well as the team of Carroll and Chang, proposed the canonical polyadic decomposition (CPD), a variant of the [[tensor rank decomposition]] in which the tensor is approximated as a sum of ''K'' rank-1 tensors for a user-specified ''K'', while [[L. R. Tucker]] proposed a strategy for computing orthonormal subspaces for third-order tensors. The HOSVD has applications in [[computer vision]], [[computer graphics]], [[machine learning]], [[scientific computing]], and [[signal processing]]. Some aspects can be traced as far back as [[F. L. Hitchcock]] in 1928,<ref name=":0">{{Cite journal|last=Hitchcock|first=Frank L|date=1928-04-01|title=Multiple Invariants and Generalized Rank of a M-Way Array or Tensor|journal=Journal of Mathematics and Physics|language=en|volume=7|issue=1–4|pages=39–79|doi=10.1002/sapm19287139|issn=1467-9590}}</ref> but it was Tucker who developed the general [[Tucker decomposition]] for third-order tensors in the 1960s,<ref name=":1">{{Cite journal|last=Tucker|first=Ledyard R.|date=1966-09-01|title=Some mathematical notes on three-mode factor analysis|journal=Psychometrika|language=en|volume=31|issue=3|pages=279–311|doi=10.1007/bf02289464|pmid=5221127|s2cid=44301099|issn=0033-3123}}</ref><ref name="Tucker1963">{{Cite journal|last=Tucker|first=L. R.|date=1963|title=Implications of factor analysis of three-way matrices for measurement of change|journal=In C. W. Harris (Ed.), Problems in Measuring Change. Madison, Wis.: Univ. Wis. Press.|pages=122–137}}</ref><ref name="Tucker1964">{{Cite journal|last=Tucker|first=L. R.|date=1964|title=The extension of factor analysis to three-dimensional matrices|journal=In N. Frederiksen and H. Gulliksen (Eds.), Contributions to Mathematical Psychology. New York: Holt, Rinehart and Winston|pages=109–127}}</ref> and the approach was further developed by [[Lieven De Lathauwer|L. De Lathauwer]] ''et al.''<ref name=":2">{{Cite journal|last1=De Lathauwer|first1=L.|last2=De Moor|first2=B.|last3=Vandewalle|first3=J.|date=2000-01-01|title=On the Best Rank-1 and Rank-(R1 ,R2 ,. . .,RN) Approximation of Higher-Order Tensors|journal=SIAM Journal on Matrix Analysis and Applications|volume=21|issue=4|pages=1324–1342|doi=10.1137/S0895479898346995|issn=0895-4798|citeseerx=10.1.1.102.9135}}</ref><ref name="DeLathauwerSVD">{{Cite journal|last1=De Lathauwer|first1=L.|last2=De Moor|first2=B.|last3=Vandewalle|first3=J.|date=2000-01-01|title=A Multilinear Singular Value Decomposition|journal=SIAM Journal on Matrix Analysis and Applications|volume=21|issue=4|pages=1253–1278|doi=10.1137/s0895479896305696|issn=0895-4798|citeseerx=10.1.1.102.9135}}</ref> and by Vasilescu and Terzopoulos.
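For a third-order tensor <math>\mathcal{A}\in\mathbb{R}^{I_1\times I_2\times I_3}</math>, the two research directions can be summarized as follows (the notation here is illustrative, not taken from any one of the cited papers):

:<math>\mathcal{A} \approx \sum_{k=1}^{K} \mathbf{u}_k \otimes \mathbf{v}_k \otimes \mathbf{w}_k \quad \text{(CPD: sum of } K \text{ rank-1 tensors),}</math>
:<math>\mathcal{A} = \mathcal{S} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3 \quad \text{(Tucker/HOSVD: core tensor } \mathcal{S} \text{ with orthonormal factor matrices } \mathbf{U}_n\text{).}</math>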
 
Although the term '''HOSVD''' was coined by De Lathauwer, the algorithm most commonly referred to as the '''Tucker''' or Higher-Order Singular Value Decomposition ('''HOSVD''') in the literature was originally introduced by Vasilescu and Terzopoulos under the name '''M-mode SVD'''.<ref name=":Vasilescu2002">M. A. O. Vasilescu, D. Terzopoulos (2002), "Multilinear Analysis of Image Ensembles: TensorFaces," Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark. {{Webarchive|url=https://web.archive.org/web/20221229090931/http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf |date=2022-12-29}}</ref><ref name="Vasilescu2003">M. A. O. Vasilescu, D. Terzopoulos (2003), "Multilinear Subspace Analysis of Image Ensembles," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR’03), Madison, WI.</ref><ref name=":Vasilescu2005">M. A. O. Vasilescu, D. Terzopoulos (2005), "Multilinear Independent Component Analysis," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA.</ref>
 
[[Lieven De Lathauwer|De Lathauwer]] ''et al.''<ref name=":2" /><ref name="DeLathauwerSVD" /> clarified Tucker's concepts in two influential papers, while [[Vasilescu]] and [[Demetri Terzopoulos|Terzopoulos]] provided algorithmic clarity, synthesizing those ideas into a simple two-step algorithm, the '''M-mode SVD'''.<ref name=":Vasilescu2002" /><ref name="Vasilescu2003" /><ref name=":Vasilescu2005" /> The Tucker/Kroonenberg and De Lathauwer ''et al.'' algorithms are sequential, relying on iterative methods such as gradient descent or the power method, respectively. In contrast, the M-mode SVD is a closed-form solution that can be computed sequentially but is also well suited to parallel computation.
 
This misattribution has had a lasting impact on the scholarly record, obscuring the original source of a widely adopted algorithm and complicating efforts to trace its development, reproduce results, and properly credit the respective contributions of different research efforts to multilinear algebra and tensor methods.
 
Robust and L1-norm-based variants of this decomposition framework have since been proposed.<ref name="robustHOSVD">{{Cite journal|last1=Goldfarb|first1=Donald|last2=Qin|first2=Zhiwei|title=Robust low-rank tensor recovery: Models and algorithms|journal=SIAM Journal on Matrix Analysis and Applications|volume=35|number=1|pages=225–253|date=2014|doi=10.1137/130905010|arxiv=1311.6182|s2cid=1051205}}</ref><ref name="l1tucker">{{cite journal|last1=Chachlakis|first1=Dimitris G.|last2=Prater-Bennette|first2=Ashley|last3=Markopoulos|first3=Panos P.|title=L1-norm Tucker Tensor Decomposition|journal=IEEE Access|date=22 November 2019|volume=7|pages=178454–178465|doi=10.1109/ACCESS.2019.2955134|doi-access=free|arxiv=1904.06455|bibcode=2019IEEEA...7q8454C}}</ref><ref name="l1tucker3">{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Papalexakis|first3=Evangelos|title=The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition|journal=IEEE Signal Processing Letters|volume=25|issue=4|date=April 2018|pages=511–515|doi=10.1109/LSP.2018.2790901|arxiv=1710.11306|bibcode=2018ISPL...25..511M|s2cid=3693326}}</ref><ref name="l1tucker2">{{cite book|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Prater-Bennette|first3=Ashley|title=2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP) |chapter=L1-Norm Higher-Order Singular-Value Decomposition |date=21 February 2019|pages=1353–1357|doi=10.1109/GlobalSIP.2018.8646385|isbn=978-1-7281-1295-4|s2cid=67874182}}</ref>
 
=== Classic computation ===
The strategy for computing orthonormal mode subspaces was first introduced by [[L. R. Tucker]] in the 1960s,<ref name="Tucker1963" /><ref name=":1" /> and later extended by [[Lieven De Lathauwer|L. De Lathauwer]] ''et al.''<ref name=":2" /> and by Vasilescu and Terzopoulos.<ref name=":Vasilescu2002" /><ref name=":Vasilescu2005" /> The algorithm widely referred to in the literature as the '''Tucker''' or '''HOSVD''' decomposition was developed by Vasilescu and Terzopoulos under the name '''M-mode SVD''',<ref name=":Vasilescu2002" /><ref name=":Vasilescu2005" /> a closed-form algorithm well suited to parallel computation, whereas the algorithms developed by Tucker<ref name="Tucker1963" /><ref name=":1" /> and by De Lathauwer ''et al.''<ref name=":2" /> are sequential, using gradient descent and the power method, respectively.
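The two-step computation described above (closed-form per-mode SVDs, then a core tensor) can be sketched in NumPy as follows; the helper-function names are illustrative, not taken from the cited papers:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Mode-n product of tensor T with matrix M (M acts on axis `mode`)."""
    rest = [T.shape[i] for i in range(T.ndim) if i != mode]
    Tm = M @ unfold(T, mode)                      # (rows of M, product of other dims)
    return np.moveaxis(Tm.reshape([M.shape[0]] + rest), 0, mode)

def hosvd(T):
    # Step 1 (closed form; the per-mode SVDs are independent, so this step
    # is parallelizable over modes): the left singular vectors of each
    # mode-n unfolding give the orthonormal factor matrices U_n.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    # Step 2: core tensor S = T x_1 U_1^T x_2 U_2^T ... x_N U_N^T.
    S = T
    for n, Un in enumerate(U):
        S = mode_multiply(S, Un.T, n)
    return S, U
```

Reconstructing the tensor as <math>\mathcal{S} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3</math> recovers the original exactly, since each factor matrix is orthonormal.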
 
=== M-mode SVD (also referred to as HOSVD or Tucker) ===