Higher-order singular value decomposition

{{Short description|Tensor decomposition}}
In [[multilinear algebra]], the '''higher-order singular value decomposition''' ('''HOSVD''') is a misnomer, since the resulting tensor decomposition does not retain all the defining properties of the matrix SVD. The matrix SVD simultaneously yields a rank-''R'' decomposition and orthonormal subspaces for both the row and column spaces, expressed through a diagonal matrix of singular values. These properties are not realized by a single algorithm for higher-order tensors, but are instead realized by two distinct algorithmic developments arising from two distinct research directions. Harshman, as well as the team of Carroll and Chang, proposed the [[Canonical polyadic decomposition|canonical polyadic decomposition]] (CPD), a variant of the [[tensor rank decomposition]], in which a tensor is approximated as a sum of ''K'' rank-1 tensors for a user-specified ''K''. [[L. R. Tucker]] proposed a strategy for computing orthonormal subspaces for third-order tensors. Aspects of these algorithms can be traced as far back as [[F. L. Hitchcock]] in 1928.<ref name=":0">{{Cite journal|last=Hitchcock|first=Frank L|date=1928-04-01|title=Multiple Invariants and Generalized Rank of a M-Way Array or Tensor|journal=Journal of Mathematics and Physics|language=en|volume=7|issue=1–4|pages=39–79|doi=10.1002/sapm19287139|issn=1467-9590}}</ref>
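The CPD model described above can be illustrated numerically. The sketch below (with hypothetical small dimensions and random factors, using NumPy) builds a third-order tensor explicitly as a sum of ''K'' rank-1 terms:

```python
import numpy as np

# Sketch of the CPD model: a third-order tensor written as a sum of K
# rank-1 terms, T = sum_k a_k ⊗ b_k ⊗ c_k.
# Dimensions and factors here are hypothetical, purely for illustration.
rng = np.random.default_rng(0)
K, I, J, L = 3, 4, 5, 6
A = rng.standard_normal((I, K))  # factor vectors a_k as columns
B = rng.standard_normal((J, K))  # factor vectors b_k as columns
C = rng.standard_normal((L, K))  # factor vectors c_k as columns

# einsum sums the K outer products in one step
T = np.einsum('ik,jk,lk->ijl', A, B, C)

# equivalent explicit sum of K rank-1 tensors
T_check = sum(
    np.multiply.outer(np.multiply.outer(A[:, k], B[:, k]), C[:, k])
    for k in range(K)
)
```

Both constructions produce the same tensor, whose (CP) rank is at most ''K''.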
 
 
[[Lieven De Lathauwer|De Lathauwer]] ''et al.''<ref name=":2">{{Cite journal|last1=De Lathauwer|first1=L.|last2=De Moor|first2=B.|last3=Vandewalle|first3=J.|date=2000-01-01|title=On the Best Rank-1 and Rank-(R1 ,R2 ,. . .,RN) Approximation of Higher-Order Tensors|journal=SIAM Journal on Matrix Analysis and Applications|volume=21|issue=4|pages=1324–1342|doi=10.1137/S0895479898346995|issn=0895-4798|citeseerx=10.1.1.102.9135}}</ref><ref name="DeLathauwerSVDDeLathauwer00">{{Cite journal|last1=De Lathauwer|first1=L.|last2=De Moor|first2=B.|last3=Vandewalle|first3=J.|date=2000-01-01|title=A Multilinear Singular Value Decomposition|journal=SIAM Journal on Matrix Analysis and Applications|volume=21|issue=4|pages=1253–1278|doi=10.1137/s0895479896305696|issn=0895-4798|citeseerx=10.1.1.102.9135}}</ref> introduced clarity to Tucker's concepts with two highly influential papers, while [[Vasilescu]] and [[Demetri Terzopoulos|Terzopoulos]] introduced algorithmic clarity.<ref name=":Vasilescu2002">M. A. O. Vasilescu, D. Terzopoulos (2002), "Multilinear Analysis of Image Ensembles: TensorFaces," Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark. {{Webarchive|url=https://web.archive.org/web/20221229090931/http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf |date=2022-12-29}}</ref><ref name="Vasilescu2003">M. A. O. Vasilescu, D. Terzopoulos (2003), "Multilinear Subspace Analysis of Image Ensembles," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'03), Madison, WI.</ref><ref name=":Vasilescu2005">M. A. O. Vasilescu, D. Terzopoulos (2005), "Multilinear Independent Component Analysis," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA.</ref> They synthesized those ideas into an elegant two-step algorithm, one whose simplicity belies the complexity it resolves.
Vasilescu and Terzopoulos introduced the '''M-mode SVD''', which is currently referred to in the literature as the '''Tucker''' decomposition or the '''HOSVD'''. However, the Tucker algorithm and the companion algorithm of De Lathauwer ''et al.''<ref name=":2"/> are sequential, relying on iterative methods such as gradient descent or the power method, respectively. The term '''M-mode SVD''' accurately reflects the algorithm employed without overpromising: it captures the actual computation (a set of SVDs on mode flattenings) without making assumptions about the structure of the core tensor or implying a rank decomposition. It can be computed sequentially, but it is also well suited to parallel computation.
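The two-step procedure can be sketched in NumPy. This is a minimal illustration of the M-mode SVD idea for dense arrays, not any author's reference implementation; the function name <code>m_mode_svd</code> is illustrative. Step 1 computes the left singular vectors of each mode flattening (each mode's SVD is independent, hence the suitability for parallel computation); step 2 forms the core tensor by multiplying the tensor by each <math>U_m^\top</math> along mode <math>m</math>:

```python
import numpy as np

def m_mode_svd(T, ranks):
    """Two-step M-mode SVD sketch for a dense numpy array T.

    Step 1: for each mode m, take the top-r left singular vectors
            of the mode-m flattening of T.
    Step 2: form the core tensor S = T x_0 U0^T x_1 U1^T ... (mode products).
    """
    U = []
    for m, r in enumerate(ranks):
        # mode-m flattening: move axis m to the front, reshape to a matrix
        Tm = np.moveaxis(T, m, 0).reshape(T.shape[m], -1)
        u, _, _ = np.linalg.svd(Tm, full_matrices=False)
        U.append(u[:, :r])
    S = T
    for m, u in enumerate(U):
        # mode-m product with u^T: contract u^T against axis m of S
        S = np.moveaxis(np.tensordot(u.T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return S, U

# demo on a small random tensor; with full mode ranks the
# reconstruction T = S x_0 U0 x_1 U1 x_2 U2 is exact
T = np.random.default_rng(1).standard_normal((3, 4, 5))
S, U = m_mode_svd(T, T.shape)
```

With truncated <code>ranks</code> the same function returns the well-known truncated variant, which approximates rather than reproduces the original tensor.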
 
This misattribution has had a lasting impact on the scholarly record, obscuring the original source of a widely adopted algorithm and complicating efforts to trace its development, reproduce results, and recognize the respective contributions of the different research efforts.
 
Robust and L1-norm-based variants of this decomposition framework have since been proposed.<ref name="robustHOSVD">{{Cite journal|last1=Goldfarb|first1=Donald|last2=Qin|first2=Zhiwei|title=Robust low-rank tensor recovery: Models and algorithms|journal=SIAM Journal on Matrix Analysis and Applications|volume=35|number=1|pages=225–253|date=2014|doi=10.1137/130905010|arxiv=1311.6182|s2cid=1051205}}</ref><ref name="l1tucker">{{cite journal|last1=Chachlakis|first1=Dimitris G.|last2=Prater-Bennette|first2=Ashley|last3=Markopoulos|first3=Panos P.|title=L1-norm Tucker Tensor Decomposition|journal=IEEE Access|date=22 November 2019|volume=7|pages=178454–178465|doi=10.1109/ACCESS.2019.2955134|doi-access=free|arxiv=1904.06455|bibcode=2019IEEEA...7q8454C}}</ref><ref name="l1tucker3">{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Papalexakis|first3=Evangelos|title=The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition|journal=IEEE Signal Processing Letters|volume=25|issue=4|date=April 2018|pages=511–515|doi=10.1109/LSP.2018.2790901|arxiv=1710.11306|bibcode=2018ISPL...25..511M|s2cid=3693326}}</ref><ref name="l1tucker2">{{cite book|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Prater-Bennette|first3=Ashley|title=2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP) |chapter=L1-Norm Higher-Order Singular-Value Decomposition |date=21 February 2019|pages=1353–1357|doi=10.1109/GlobalSIP.2018.8646385|isbn=978-1-7281-1295-4|s2cid=67874182}}</ref>
<references />
 
{{DEFAULTSORT:Higher Order Singular Value Decomposition}}
[[Category:Multilinear algebra]]
[[Category:Tensors]]