{{short description|Multilinear extension of principal component analysis}}
'''Multilinear principal component analysis''' ('''MPCA''') is a [[Multilinear algebra|multilinear]] extension of [[principal component analysis]] (PCA). Multi-way observational data, also referred to as data tensors, may be analyzed by either
* '''linear tensor models''', such as [[Tensor rank|CANDECOMP/Parafac]], or by
* '''multilinear tensor models''', such as '''multilinear principal component analysis''' ('''MPCA'''); both model forms are sketched below.
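As an illustration of the distinction, using generic tensor-decomposition notation that is not defined elsewhere in this article, a data tensor <math>\mathcal{A}</math> with <math>M</math> modes may be approximated either by a sum of rank-one terms (the linear, CANDECOMP/Parafac form) or by a core tensor multiplied by a factor matrix along every mode (the multilinear form underlying MPCA):
: <math>\mathcal{A} \approx \sum_{r=1}^{R} \lambda_r \, \mathbf{u}^{(1)}_r \circ \mathbf{u}^{(2)}_r \circ \cdots \circ \mathbf{u}^{(M)}_r,</math>
: <math>\mathcal{A} \approx \mathcal{Z} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_M \mathbf{U}^{(M)},</math>
where <math>\circ</math> denotes the vector outer product, <math>\times_m</math> the mode-<math>m</math> product, <math>\mathcal{Z}</math> a core tensor, and each <math>\mathbf{U}^{(m)}</math> a matrix whose columns span the mode-<math>m</math> subspace.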
Circa 2001, Vasilescu reframed the data analysis, recognition and synthesis problems as multilinear tensor problems, based on the insight that most observed data are the compositional consequence of several causal factors of data formation and are therefore well suited to multi-modal data tensor analysis. The power of the tensor framework was showcased by analyzing human motion joint angles, facial images or textures in terms of their causal factors of data formation in the following works: Human Motion Signatures (CVPR 2001, ICPR 2002), face recognition ([[TensorFaces]], ECCV 2002, CVPR 2003, etc.), and computer graphics ([[TensorTextures]],<ref name="Vasilescu2004"/> SIGGRAPH 2004).
Multilinear PCA may be applied to compute the causal factors of data formation, or as a signal processing tool on data tensors whose individual observations have either been vectorized,<ref name="Vasilescu2002b"/><ref name="Vasilescu2002a">M.A.O. Vasilescu, [[Demetri Terzopoulos|D. Terzopoulos]] (2002) [http://www.media.mit.edu/~maov/tensorfaces/eccv02_corrected.pdf "Multilinear Analysis of Image Ensembles: TensorFaces," Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark, May, 2002, in Computer Vision – ECCV 2002, Lecture Notes in Computer Science, Vol. 2350, A. Heyden et al. (Eds.), Springer-Verlag, Berlin, 2002, 447–460.]</ref><ref name="Vasilescu2003">M.A.O. Vasilescu, D. Terzopoulos (2003) [http://www.media.mit.edu/~maov/tensorfaces/cvpr03.pdf "Multilinear Subspace Analysis for Image Ensembles," Proc. Computer Vision and Pattern Recognition Conf. (CVPR '03), Vol. 2, Madison, WI, June, 2003, 93–99.]</ref><ref name="Vasilescu2004">M.A.O. Vasilescu, D. Terzopoulos (2004) [http://www.media.mit.edu/~maov/tensortextures/Vasilescu_siggraph04.pdf "TensorTextures: Multilinear Image-Based Rendering," Proc. ACM SIGGRAPH 2004 Conference, Los Angeles, CA, August, 2004, in Computer Graphics Proceedings, Annual Conference Series, 2004, 336–342.]</ref> or whose observations are treated as a collection of column/row observations (an "observation as a matrix") and concatenated into a data tensor. The latter approach is suitable for compression and for reducing redundancy in the rows, columns and fibers that are unrelated to the causal factors of data formation.
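As a minimal sketch of the two ways of organizing observations described above (the shapes, variable names and NumPy calls here are illustrative and are not taken from the cited papers):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative example: 100 grayscale images, each a 32 x 24 matrix of pixel intensities.
rng = np.random.default_rng(0)
images = [rng.random((32, 24)) for _ in range(100)]

# "Observation as a matrix": keep rows and columns as separate tensor modes and
# concatenate the observations along a third mode, giving a 100 x 32 x 24 data tensor.
data_tensor = np.stack(images, axis=0)

# Vectorized observations: flatten each image, giving an ordinary 100 x 768 data matrix.
data_matrix = data_tensor.reshape(100, -1)

print(data_tensor.shape, data_matrix.shape)   # (100, 32, 24) (100, 768)
</syntaxhighlight>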
Historically, MPCA has been referred to as "M-mode PCA", a term coined by Peter Kroonenberg in 1980.<ref name="Kroonenberg1980"/> In 2005, [[M. Alex O. Vasilescu|Vasilescu]] and [[Demetri Terzopoulos|Terzopoulos]] introduced the Multilinear PCA<ref name="MPCA-MICA2005">M. A. O. Vasilescu, D. Terzopoulos (2005) [http://www.media.mit.edu/~maov/mica/mica05.pdf "Multilinear Independent Component Analysis"], Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, June 2005, vol. 1, 547–553.</ref> terminology to better differentiate between linear and multilinear tensor decompositions, and to distinguish the work<ref name="Vasilescu2002b"/><ref name="Vasilescu2002a"/><ref name="Vasilescu2003"/><ref name="Vasilescu2004"/> that computed second-order statistics associated with each data tensor mode (axis) from the subsequent work on Multilinear Independent Component Analysis<ref name="MPCA-MICA2005"/> that computed higher-order statistics associated with each tensor mode/axis.
In their paper "[[TensorFaces]]",<ref name=Vasilescu2002a/><ref name="Vasilescu2003"/> Vasilescu and Terzopoulos introduced the [[HOSVD|'''M-mode SVD''']] algorithm, which is misidentified in the literature as the '''HOSVD'''<ref name=DeLathauwer2000b>{{cite journal | last1 = Lathauwer | first1 = L. D. | last2 = Moor | first2 = B. D. | last3 = Vandewalle | first3 = J. | year = 2000 | title = On the best rank-1 and rank-(R1, R2, ..., RN ) approximation of higher-order tensors | url = http://portal.acm.org/citation.cfm?id=354405 | journal = SIAM Journal on Matrix Analysis and Applications | volume = 21 | issue = 4| pages = 1324–1342 | doi = 10.1137/s0895479898346995 | url-access = subscription }}</ref><ref name="DeLathauwer2000a">{{cite journal | last1 = Lathauwer | first1 = L.D. | last2 = Moor | first2 = B.D. | last3 = Vandewalle | first3 = J. | year = 2000 | title = A multilinear singular value decomposition | url = http://portal.acm.org/citation.cfm?id=354398 | journal = SIAM Journal on Matrix Analysis and Applications | volume = 21 | issue = 4| pages = 1253–1278 | doi = 10.1137/s0895479896305696 | url-access = subscription }}</ref> or the '''Tucker''' decomposition, algorithms that employ the power method or gradient descent, respectively.
== The algorithm ==
The MPCA solution follows the alternating least squares (ALS) approach: it iteratively updates the projection matrix of one mode at a time while holding the projections of all other modes fixed.
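The following is a minimal sketch of such an alternating least squares scheme in NumPy. The function names, the random initialization and the fixed iteration count are illustrative choices; refinements such as convergence tests and the initialization strategies discussed in the MPCA literature are omitted.

<syntaxhighlight lang="python">
import numpy as np

def mode_product(T, A, mode):
    """Multiply tensor T along the given axis by the matrix A (shape: new_dim x old_dim)."""
    return np.moveaxis(np.tensordot(A, T, axes=(1, mode)), 0, mode)

def mpca_als(X, ranks, n_iters=5):
    """Sketch of MPCA via alternating least squares.

    X     : data tensor of shape (n_samples, I_1, ..., I_M)
    ranks : reduced dimensions [P_1, ..., P_M], one per non-sample mode
    Returns a list of projection matrices U_m, each of shape (I_m, P_m).
    """
    rng = np.random.default_rng(0)
    X = X - X.mean(axis=0)                     # center across observations, as in PCA
    M = X.ndim - 1
    U = [np.linalg.qr(rng.standard_normal((X.shape[m + 1], ranks[m])))[0] for m in range(M)]

    for _ in range(n_iters):
        for m in range(M):
            # Project every mode except mode m onto its current subspace.
            Y = X
            for k in range(M):
                if k != m:
                    Y = mode_product(Y, U[k].T, k + 1)
            # Mode-m total scatter matrix, accumulated over all observations.
            Ym = np.moveaxis(Y, m + 1, 0).reshape(X.shape[m + 1], -1)
            S = Ym @ Ym.T
            # New U_m: the leading eigenvectors of the scatter matrix.
            _, eigvecs = np.linalg.eigh(S)
            U[m] = eigvecs[:, ::-1][:, :ranks[m]]
    return U

# Example: 100 observations, each a 32 x 24 matrix, reduced to 10 x 8 core matrices.
X = np.random.default_rng(1).standard_normal((100, 32, 24))
U1, U2 = mpca_als(X, ranks=[10, 8])
cores = mode_product(mode_product(X - X.mean(axis=0), U1.T, 1), U2.T, 2)
print(cores.shape)   # (100, 10, 8)
</syntaxhighlight>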
== Feature selection ==
MPCA features: Supervised MPCA feature selection is used in object recognition, while unsupervised MPCA feature selection is employed in visualization tasks.
== Extensions ==
Various extensions of MPCA have been developed:
*Robust MPCA (RMPCA)<ref>K. Inoue, K. Hara, K. Urahama, "Robust multilinear principal component analysis", Proc. IEEE Conference on Computer Vision, 2009, pp. 591–597.</ref>
*Multi-Tensor Factorization (MTF), which also finds the number of components automatically<ref>{{Cite journal|last=Khan|first=Suleiman A.|last2=Leppäaho|first2=Eemeli|last3=Kaski|first3=Samuel|date=2016-06-10|title=Bayesian multi-tensor factorization|url=https://link.springer.com/article/10.1007/s10994-016-5563-y|journal=Machine Learning|language=en|volume=105|issue=2|pages=233–253|doi=10.1007/s10994-016-5563-y|issn=0885-6125}}</ref>
==References==
{{Reflist}}
[[Category:Dimension reduction]]