In [[multilinear algebra]], the '''higher-order singular value decomposition''' ('''HOSVD''') is a misnomer. There does not exist a single tensor decomposition that retains all the defining properties of the matrix SVD. The matrix SVD simultaneously yields a
* ''rank-R'' decomposition and
* orthonormal subspaces for the row and column spaces.
These properties are not realized within a single algorithm for higher-order tensors, but are instead realized by two distinct algorithmic developments that represent two distinct research directions. Harshman, as well as the team of Carroll and Chang, proposed the [[canonical polyadic decomposition]] (CPD), a variant of the [[tensor rank decomposition]], in which a tensor is approximated as a sum of ''K'' rank-1 tensors for a user-specified ''K''. [[L. R. Tucker]] proposed a strategy for computing orthonormal subspaces for third-order tensors. Aspects of these algorithms can be traced as far back as [[F. L. Hitchcock]] in 1928.<ref name=":0">{{Cite journal|last=Hitchcock|first=Frank L|date=1928-04-01|title=Multiple Invariants and Generalized Rank of a M-Way Array or Tensor|journal=Journal of Mathematics and Physics|language=en|volume=7|issue=1–4|pages=39–79|doi=10.1002/sapm19287139|issn=1467-9590}}</ref>
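The CPD structure described above can be illustrated concretely. The following NumPy sketch (mode sizes and the rank ''K'' are hypothetical, chosen only for illustration) builds a third-order tensor as a sum of ''K'' rank-1 tensors, each an outer product of one column from each factor matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, L, K = 4, 5, 6, 3  # hypothetical mode sizes and CP rank

# One factor matrix per mode; column k of each matrix belongs to the k-th rank-1 term.
A = rng.standard_normal((I, K))
B = rng.standard_normal((J, K))
C = rng.standard_normal((L, K))

# Sum of K rank-1 tensors: T = sum_k  a_k (outer) b_k (outer) c_k
T = np.zeros((I, J, L))
for k in range(K):
    T += np.einsum('i,j,l->ijl', A[:, k], B[:, k], C[:, k])

# The same CP structure as a single contraction over the shared rank index.
T2 = np.einsum('ik,jk,lk->ijl', A, B, C)
```

Both constructions produce the same tensor; the explicit loop makes the "sum of rank-1 terms" structure visible, while the one-line `einsum` is how it is typically computed in practice.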
|___location=Copenhagen, Denmark
|year=2002
|url= https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=67eb22ae0baf86af77188fc0ab27edacf07a9140}}</ref><ref name=Vasilescu2003/><ref name=":Vasilescu2005">{{cite conference
|author=M. A. O. Vasilescu, D. Terzopoulos
|title=Multilinear Independent Component Analysis
|book-title=Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR’05)
|___location=San Diego, CA
|year=2005}}</ref> introduced algorithmic clarity. Vasilescu and Terzopoulos<ref name=":Vasilescu2002"/><ref name=":Vasilescu2005"/>
introduced the '''M-mode SVD''', the classic algorithm that is currently referred to in the literature as the '''Tucker''' or the '''HOSVD'''.
: This misattribution has had a lasting impact on the scholarly record, obscuring the original source of a widely adopted algorithm and complicating efforts to trace its development, reproduce results, and recognize the respective contributions of different research efforts.
The term '''M-mode SVD''' accurately reflects the algorithm employed. It captures the actual computation: a set of SVDs on mode flattenings, without making assumptions about the structure of the core tensor or implying a rank decomposition.
Robust and L1-norm-based variants of this decomposition framework have since been proposed.<ref name="robustHOSVD">{{Cite journal|last1=Goldfarb|first1=Donald|last2=Zhiwei|first2=Qin|title=Robust low-rank tensor recovery: Models and algorithms|journal=SIAM Journal on Matrix Analysis and Applications|volume=35|number=1|pages=225–253|date=2014|doi=10.1137/130905010|arxiv=1311.6182|s2cid=1051205}}</ref><ref name="l1tucker">{{cite journal|last1=Chachlakis|first1=Dimitris G.|last2=Prater-Bennette|first2=Ashley|last3=Markopoulos|first3=Panos P.|title=L1-norm Tucker Tensor Decomposition|journal=IEEE Access|date=22 November 2019|volume=7|pages=178454–178465|doi=10.1109/ACCESS.2019.2955134|doi-access=free|arxiv=1904.06455|bibcode=2019IEEEA...7q8454C}}</ref><ref name="l1tucker3">{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Papalexakis|first3=Evangelos|title=The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition|journal=IEEE Signal Processing Letters|volume=25|issue=4|date=April 2018|pages=511–515|doi=10.1109/LSP.2018.2790901|arxiv=1710.11306|bibcode=2018ISPL...25..511M|s2cid=3693326}}</ref><ref name="l1tucker2">{{cite book|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Prater-Bennette|first3=Ashley|title=2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP) |chapter=L1-Norm Higher-Order Singular-Value Decomposition |date=21 February 2019|pages=1353–1357|doi=10.1109/GlobalSIP.2018.8646385|isbn=978-1-7281-1295-4|s2cid=67874182}}</ref>
For the purpose of this article, the abstract tensor <math>\mathcal{A}</math> is assumed to be given in coordinates with respect to some basis as an [[Tensor#As multidimensional arrays|M-way array]], also denoted by <math>\mathcal{A}\in\mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}</math>, where ''M'' is the number of modes and the order of the tensor. <math>\mathbb{C}</math> denotes the complex numbers, which include both the real numbers <math>\mathbb{R}</math> and the purely imaginary numbers.
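As a concrete illustration of working with such an M-way array in coordinates, the mode-''m'' flattening used throughout this article can be realized in NumPy as follows (a sketch only; the array shape is hypothetical, and the column ordering is one of several valid conventions):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5))  # a third-order tensor, I_1 x I_2 x I_3

def mode_flatten(T, m):
    """Mode-m flattening: place mode m along the rows and
    combine all remaining modes along the columns."""
    return np.moveaxis(T, m, 0).reshape(T.shape[m], -1)

A1 = mode_flatten(A, 0)  # shape (3, 20)
A2 = mode_flatten(A, 1)  # shape (4, 15)
```

The exact interleaving of the combined column indices differs between conventions, but, as noted below, the factor matrices obtained from the SVD of a flattening do not depend on that choice.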
Let <math>\mathcal{A}_{[m]}\in\mathbb{C}^{I_m \times (I_1 I_2 \cdots I_{m-1} I_{m+1} \cdots I_M)}</math> denote the [[Tensor reshaping#Mode-m Flattening / Mode-m Matrixization|mode-''m'' flattening]] of <math>\mathcal{A}</math>, in which the row index corresponds to the mode-''m'' index and the column index corresponds to all remaining indices combined. Let <math>{\bf U}_m \in \mathbb{C}^{I_m \times I_m}</math> be a [[unitary matrix]] containing a basis of the left singular vectors of <math>\mathcal{A}_{[m]}</math> such that the ''j''th column <math>\mathbf{u}_j</math> of <math>{\bf U}_m</math> corresponds to the ''j''th largest singular value of <math>\mathcal{A}_{[m]}</math>. Observe that the '''mode/factor matrix''' <math>{\bf U}_m</math> does not depend on the specific definition of the mode-''m'' flattening. By the properties of the [[multilinear multiplication]], we have<math display="block">\begin{array}{rcl} \mathcal{A}
&=& \mathcal{A}\times ({\bf I}, {\bf I}, \ldots, {\bf I}) \\
=== Classic computation ===
While De Lathauwer et al. clarified Tucker’s
=== M-mode SVD (also referred to as HOSVD or Tucker)===
What is commonly referred to as the HOSVD or Tucker was developed by [[Vasilescu]] and [[Demetri Terzopoulos|Terzopoulos]] under the name '''M-mode SVD'''.<ref name=":Vasilescu2002"/>
* For <math>m=1,\ldots,M</math>, do the following:
# Construct the mode-''m'' flattening <math>\mathcal{A}_{[m]}</math>;
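The per-mode procedure above (flatten each mode, take the left singular vectors of each flattening, then form the core tensor by multilinear multiplication with the conjugate transposes) can be sketched in NumPy as follows; the tensor shape is hypothetical, and this is an illustrative sketch rather than any author's reference implementation:

```python
import numpy as np

def m_mode_svd(A):
    """M-mode SVD sketch: one SVD per mode-m flattening, then form the core."""
    U = []
    for m in range(A.ndim):
        Am = np.moveaxis(A, m, 0).reshape(A.shape[m], -1)  # mode-m flattening
        Um, _, _ = np.linalg.svd(Am, full_matrices=False)  # left singular vectors
        U.append(Um)
    # Core tensor: S = A  x_1 U_1^H  x_2 U_2^H  ...  x_M U_M^H
    S = A
    for m, Um in enumerate(U):
        S = np.moveaxis(np.tensordot(Um.conj().T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return S, U

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4, 5))
S, U = m_mode_svd(A)

# Reconstruction: A = S x_1 U_1 x_2 U_2 x_3 U_3
R = S
for m, Um in enumerate(U):
    R = np.moveaxis(np.tensordot(Um, np.moveaxis(R, m, 0), axes=1), 0, m)
```

Multiplying the core back by the factor matrices recovers the original tensor, which is the defining property of the (untruncated) decomposition.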
=== Interlacing computation ===
A strategy that is significantly faster when some or all <math>R_m \ll I_m </math> consists of interlacing the computation of the core tensor and the factor matrices, as follows:<ref name=Vasilescu2003>{{cite conference
|title=Multilinear Subspace Analysis for Image Ensembles
|last1=Vasilescu
|first1=M.A.O.
|last2=Terzopoulos
|first2=D.
|book-title=Proc. Computer Vision and Pattern Recognition Conf. (CVPR '03)
|volume=2
|___location=Madison, WI
|year=2003}}</ref><ref name=":4">{{Cite journal|last1=Vannieuwenhoven|first1=N.|last2=Vandebril|first2=R.|last3=Meerbergen|first3=K.|date=2012-01-01|title=A New Truncation Strategy for the Higher-Order Singular Value Decomposition|journal=SIAM Journal on Scientific Computing|volume=34|issue=2|pages=A1027–A1052|doi=10.1137/110836067|bibcode=2012SJSC...34A1027V |s2cid=15318433 |issn=1064-8275|url=https://lirias.kuleuven.be/handle/123456789/337210}}</ref><ref name=":5">{{Cite book|title=Tensor Spaces and Numerical Tensor Calculus {{!}} SpringerLink|volume = 42|last=Hackbusch|first=Wolfgang|language=en-gb|doi=10.1007/978-3-642-28027-6|series = Springer Series in Computational Mathematics|year = 2012|isbn = 978-3-642-28026-9| s2cid=117253621 }}</ref><ref name=":fist_hosvd">{{Cite conference|last1=Cobb|first1=Benjamin|last2=Kolla|first2=Hemanth|last3=Phipps|first3=Eric|last4=Çatalyürek|first4=Ümit V.|date=2022|title=FIST-HOSVD: Fused in-Place Sequentially Truncated Higher Order Singular Value Decomposition|conference=Platform for Advanced Scientific Computing (PASC) |language=en|isbn=9781450394109|doi=10.1145/3539781.3539798|doi-access=free}}</ref>
* Set <math>\mathcal{A}^0 = \mathcal{A}</math>;
* Compute a rank-<math>\bar R_m </math> truncated SVD <math>\mathcal{A}_{[m]} \approx U_m \Sigma_m V^T_m </math>, and store the top <math>\bar R_m </math> left singular vectors <math>U_m \in F^{I_m \times \bar R_m}</math>;
while a '''sequentially truncated M-mode SVD (HOSVD)''' (or '''successively truncated M-mode SVD (HOSVD)''') is obtained by replacing step 2 in the interlaced computation by
* Compute a rank-<math>\bar R_m </math> truncated SVD <math>\mathcal{A}_{[m]}^{m-1} \approx U_m \Sigma_m V^T_m </math>, and store the top <math>\bar R_m </math> left singular vectors <math>U_m \in F^{I_m \times \bar R_m}</math>. Unfortunately, truncation does not result in an optimal solution for the best low multilinear rank optimization problem.<ref name=":2" />
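The sequentially truncated variant can be sketched as follows: after each mode's truncated SVD, the partially computed core is immediately shrunk, so every subsequent SVD operates on a smaller array. This is an illustrative NumPy sketch with hypothetical shapes and target ranks, not a reference implementation:

```python
import numpy as np

def st_m_mode_svd(A, ranks):
    """Sequentially truncated M-mode SVD sketch: truncate each mode's SVD to
    ranks[m] and shrink the partial core before handling the next mode."""
    S = A
    U = []
    for m, r in enumerate(ranks):
        Sm = np.moveaxis(S, m, 0).reshape(S.shape[m], -1)  # flatten partial core
        Um, _, _ = np.linalg.svd(Sm, full_matrices=False)
        Um = Um[:, :r]                                     # top-r left singular vectors
        # Shrink mode m of the partial core: S <- S x_m Um^H
        S = np.moveaxis(np.tensordot(Um.conj().T, np.moveaxis(S, m, 0), axes=1), 0, m)
        U.append(Um)
    return S, U

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4, 5))
S, U = st_m_mode_svd(A, (2, 2, 2))  # hypothetical multilinear rank (2, 2, 2)
```

As the surrounding text notes, the resulting truncated decomposition is a good but generally not optimal solution to the best low multilinear rank approximation problem.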
== Applications ==
The concept of M-mode SVD (HOSVD) was carried over to functions by Baranyi and Yam via the [[TP model transformation]].<ref name="Baranyi042" /><ref name="compind2" /> This extension led to the definition of the M-mode SVD/HOSVD canonical form of tensor product functions and Linear Parameter Varying system models<ref name="canon12">{{cite conference|title=Definition of the HOSVD-based canonical form of polytopic dynamic models|author1=P. Baranyi|author2=L. Szeidl|author3=P. Várlaki|author4=Y. Yam|date=July 3–5, 2006|___location=Budapest, Hungary|pages=660–665|conference=3rd International Conference on Mechatronics (ICM 2006)}}</ref> and to convex hull manipulation based control optimization theory, see [[TP model transformation in control theories]].
M-mode SVD (HOSVD) was proposed to be applied to multi-
== Robust L1-norm variant ==
<references />
{{DEFAULTSORT:Higher-order singular value decomposition}}
[[Category:Multilinear algebra]]
[[Category:Tensors]]