{{Short description|Method of data analysis}}
[[File:GaussianScatterPCA.svg|thumb|upright=1.3|PCA of a [[multivariate Gaussian distribution]] centered at (1, 3) with a standard deviation of 3 in roughly the (0.866, 0.5) direction and of 1 in the orthogonal direction. The vectors shown are the [[Eigenvalues and eigenvectors|eigenvectors]] of the [[covariance matrix]] scaled by the square root of the corresponding eigenvalue, and shifted so their tails are at the mean.]]
{{Machine learning bar}}
'''Principal component analysis''' ('''PCA''') is a [[Linear map|linear]] [[dimensionality reduction]] technique with applications in [[exploratory data analysis]], visualization and [[Data Preprocessing|data preprocessing]].
 
The data is [[linear map|linearly transformed]] onto a new [[coordinate system]] such that the directions (principal components) capturing the largest variation in the data can be easily identified.
The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The <math>i</math>-th principal component can be taken as a direction orthogonal to the first <math>i-1</math> principal components that maximizes the variance of the projected data.
 
For either objective, it can be shown that the principal components are [[eigenvectors]] of the data's [[covariance matrix]]. Thus, the principal components are often computed by [[Eigendecomposition of a matrix|eigendecomposition]] of the data covariance matrix or [[singular value decomposition]] of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to [[factor analysis]]. Factor analysis typically incorporates more ___domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to [[Canonical correlation|canonical correlation analysis (CCA)]]. CCA defines coordinate systems that optimally describe the [[cross-covariance]] between two datasets while PCA defines a new [[orthogonal coordinate system]] that optimally describes variance in a single dataset.<ref>{{Cite journal|author1=Barnett, T. P. |author2=R. Preisendorfer. |name-list-style=amp |title=Origins and levels of monthly and seasonal forecast skill for United States surface air temperatures determined by canonical correlation analysis |journal=Monthly Weather Review |volume=115 |issue=9 |pages=1825 |year=1987 |doi=10.1175/1520-0493(1987)115<1825:oaloma>2.0.co;2|bibcode=1987MWRv..115.1825B|doi-access=free }}</ref><ref>{{Cite book |last1=Hsu|first1=Daniel |first2=Sham M.|last2=Kakade |first3=Tong|last3=Zhang |title=A spectral algorithm for learning hidden markov models |arxiv=0811.4413 |year=2008 |bibcode=2008arXiv0811.4413H}}</ref><ref name="mark2017">{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Kundu|first2=Sandipan|last3=Chamadia|first3=Shubham |last4=Pados|first4=Dimitris A.|title=Efficient L1-Norm Principal-Component Analysis via Bit Flipping|journal=IEEE Transactions on Signal Processing|date=15 August 2017|volume=65|issue=16|pages=4252–4264|doi=10.1109/TSP.2017.2708023|arxiv=1610.01959|bibcode=2017ITSP...65.4252M|s2cid=7931130}}</ref><ref name="l1tucker">{{cite journal|last1=Chachlakis|first1=Dimitris G.|last2=Prater-Bennette|first2=Ashley|last3=Markopoulos|first3=Panos P.|title=L1-norm Tucker Tensor Decomposition|journal=IEEE Access|date=22 November 2019|volume=7|pages=178454–178465|doi=10.1109/ACCESS.2019.2955134|arxiv=1904.06455|doi-access=free|bibcode=2019IEEEA...7q8454C }}</ref> [[Robust principal component analysis|Robust]] and [[Lp space|L1-norm]]-based variants of standard PCA have also been proposed.<ref name="mark2014">{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Karystinos|first2=George N.|last3=Pados|first3=Dimitris A.|title=Optimal Algorithms for L1-subspace Signal Processing|journal=IEEE Transactions on Signal Processing|date=October 2014|volume=62|issue=19|pages=5046–5058|doi=10.1109/TSP.2014.2338077|arxiv=1405.6785|bibcode=2014ITSP...62.5046M|s2cid=1494171}}</ref><ref>{{cite journal |last1=Zhan |first1=J. |last2=Vaswani |first2=N. 
|date=2015 |title=Robust PCA With Partial Subspace Knowledge |url=https://doi.org/10.1109/tsp.2015.2421485 |journal=IEEE Transactions on Signal Processing |volume=63 |issue=13 |pages=3332–3347 | doi=10.1109/tsp.2015.2421485|arxiv=1403.1591 |bibcode=2015ITSP...63.3332Z |s2cid=1516440 }}</ref><ref>{{cite book|last1=Kanade|first1=T.|last2=Ke|first2=Qifa |title=2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) |chapter=Robust L₁ Norm Factorization in the Presence of Outliers and Missing Data by Alternative Convex Programming |volume=1|pages=739–746|date=June 2005|doi=10.1109/CVPR.2005.309|publisher=IEEE|isbn=978-0-7695-2372-9|citeseerx=10.1.1.63.4605|s2cid=17144854}}</ref><ref name="l1tucker" />
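The equivalence of these two computational routes can be checked numerically. The following is a minimal NumPy sketch; the variable names and the random test data are purely illustrative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # n = 200 observations of p = 5 variables
Xc = X - X.mean(axis=0)                  # mean-center each column

# Route 1: eigendecomposition of the sample covariance matrix
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)                 # returned in ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]     # reorder to descending

# Route 2: singular value decomposition of the centered data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Same eigenvalues (sigma_j^2 / n) and, up to sign, the same principal directions
assert np.allclose(eigvals, s**2 / len(Xc))
assert np.allclose(np.abs(eigvecs), np.abs(Vt.T))
</syntaxhighlight>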
 
== History ==
 
=== Dimensionality reduction ===
The transformation '''T''' = '''X''' '''W''' maps a data vector '''x'''<sub>(''i'')</sub> from an original space of ''p'' variables to a new space of ''p'' variables which are uncorrelated over the dataset. However, not all the principal components need to be kept. Keeping only the first ''L'' principal components, produced by using only the first ''L'' eigenvectors, gives the truncated transformation

:<math>\mathbf{T}_L = \mathbf{X} \mathbf{W}_L</math>

where the matrix '''T'''<sub>L</sub> now has ''n'' rows but only ''L'' columns. In other words, PCA learns a linear transformation <math> t = W_L^\mathsf{T} x, x \in \mathbb{R}^p, t \in \mathbb{R}^L,</math> where the columns of {{math|''p'' × ''L''}} matrix <math>W_L</math> form an orthogonal basis for the ''L'' features (the components of representation ''t'') that are decorrelated.<ref>{{Cite journal |author=Bengio, Y.|year=2013|title=Representation Learning: A Review and New Perspectives |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=35 |issue=8 |pages=1798–1828 |doi=10.1109/TPAMI.2013.50|pmid=23787338|display-authors=etal|arxiv=1206.5538|s2cid=393948}}</ref> By construction, of all the transformed data matrices with only ''L'' columns, this score matrix maximises the variance in the original data that has been preserved, while minimising the total squared reconstruction error <math>\|\mathbf{T}\mathbf{W}^T - \mathbf{T}_L\mathbf{W}^T_L\|_2^2</math> or <math>\|\mathbf{X} - \mathbf{X}_L\|_2^2</math>.

To non-dimensionalize the centered data, let ''X<sub>c</sub>'' represent the characteristic values of data vectors ''X<sub>i</sub>'', given by:
* <math>\|X\|_{\infty}</math> (maximum norm),
* <math>\frac{1}{n} \|X\|_1</math> (mean absolute value), or
* <math>\frac{1}{\sqrt{n}} \|X\|_2</math> (normalized Euclidean norm),
for a dataset of size ''n''. These norms are used to transform the original space of variables ''x, y'' to a new space of uncorrelated variables ''p, q'' (given ''Y<sub>c</sub>'' with the same meaning), such that <math>p_i = \frac{X_i}{X_c}, \quad q_i = \frac{Y_i}{Y_c}</math>;
and the new variables are linearly related as: <math>q = \alpha p</math>.
To find the optimal linear relationship, we minimize the total squared reconstruction error
<math>E(\alpha) = \frac{1}{1 + \alpha^2} \sum_{i=1}^{n} (\alpha p_i - q_i)^2</math>; setting the derivative of the error function to zero <math>(E'(\alpha) = 0)</math> yields <math>\alpha = \frac{1}{2} \left( -\lambda \pm \sqrt{\lambda^2 + 4} \right)</math>, where <math>\lambda = \frac{p \cdot p - q \cdot q}{p \cdot q}</math>.<ref name="Holmes2023" />
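The truncated transformation and its reconstruction error can be illustrated with a short NumPy sketch; names such as <code>W_L</code> and <code>T_L</code> mirror the notation above and are not part of any library API.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))   # correlated variables
Xc = X - X.mean(axis=0)

# Loading vectors W from the SVD of the centered data
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt.T

L = 3
W_L = W[:, :L]          # keep only the first L eigenvectors
T_L = Xc @ W_L          # truncated score matrix: n rows, L columns
X_L = T_L @ W_L.T       # reconstruction of the data from L components

# The total squared reconstruction error equals the sum of the discarded sigma_j^2
err = np.sum((Xc - X_L) ** 2)
assert np.allclose(err, np.sum(s[L:] ** 2))
</syntaxhighlight>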
 
[[File:PCA of Haplogroup J using 37 STRs.png|thumb|right|A principal components analysis scatterplot of [[Y-STR]] [[haplotype]]s calculated from repeat-count values for 37 Y-chromosomal STR markers from 354 individuals.<br /> PCA has successfully found linear combinations of the markers that separate out different clusters corresponding to different lines of individuals' Y-chromosomal genetic descent.]]
:<math>\mathbf{T}_L = \mathbf{U}_L\mathbf{\Sigma}_L = \mathbf{X} \mathbf{W}_L </math>
The truncation of a matrix '''M''' or '''T''' using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of [[Rank (linear algebra)|rank]] ''L'' to the original matrix, in the sense of the difference between the two having the smallest possible [[Frobenius norm]], a result known as the [[Low-rank approximation#Proof of Eckart–Young–Mirsky theorem (for Frobenius norm)|Eckart–Young theorem]] [1936].
 
<blockquote>
'''Theorem (Optimal k‑dimensional fit).'''
Let P be an n×m data matrix whose columns have been mean‑centered and scaled, and let
<math>P = U \,\Sigma\, V^{T}</math>
be its singular value decomposition. Then the best rank‑k approximation to P in the least‑squares (Frobenius‑norm) sense is
<math>P_{k} = U_{k}\,\Sigma_{k}\,V_{k}^{T}</math>,
where V<sub>k</sub> consists of the first k columns of V. Moreover, the relative residual variance is
<math>R(k)=\frac{\sum_{j=k+1}^{m}\sigma_{j}^{2}}{\sum_{j=1}^{m}\sigma_{j}^{2}}</math>.
</blockquote><ref name="Holmes2023" />
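A numerical sketch of the theorem, assuming nothing beyond NumPy's standard SVD routine, is given below; the random test matrix and the choice ''k'' = 2 are illustrative only.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
P = rng.normal(size=(100, 6))
P = (P - P.mean(axis=0)) / P.std(axis=0)      # mean-centered and scaled columns

U, sigma, Vt = np.linalg.svd(P, full_matrices=False)

k = 2
P_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k]  # best rank-k fit P_k = U_k Sigma_k V_k^T

# Frobenius-norm error of the rank-k fit is sqrt(sum of the discarded sigma_j^2)
assert np.allclose(np.linalg.norm(P - P_k, "fro"),
                   np.sqrt(np.sum(sigma[k:] ** 2)))

# Relative residual variance R(k)
R_k = np.sum(sigma[k:] ** 2) / np.sum(sigma ** 2)
print(f"R({k}) = {R_k:.3f}")
</syntaxhighlight>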
 
== Further considerations ==
The singular values (in '''Σ''') are the square roots of the [[eigenvalue]]s of the matrix '''X'''<sup>T</sup>'''X'''. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see [[Principal component analysis#PCA and information theory|below]]). PCA is often used in this manner for [[dimensionality reduction]]. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements than, for example, the [[discrete cosine transform]] (in particular the DCT-II, which is simply known as the "DCT"), when the latter is applicable. [[Nonlinear dimensionality reduction]] techniques tend to be more computationally demanding than PCA.
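These relationships between the singular values, the eigenvalues of '''X'''<sup>T</sup>'''X''' and the total squared distance from the mean can be verified directly; the following is a minimal NumPy sketch on random test data.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
Xc = X - X.mean(axis=0)

_, s, _ = np.linalg.svd(Xc, full_matrices=False)
eigvals = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]   # eigenvalues of X^T X, descending

# Singular values are the square roots of the eigenvalues of X^T X
assert np.allclose(s, np.sqrt(eigvals))

# The eigenvalue sum equals the total squared distance of the points from their mean
assert np.allclose(np.sum(eigvals), np.sum(Xc ** 2))
</syntaxhighlight>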
 
PCA is sensitive to the scaling of the variables. Mathematically this sensitivity comes from the way a rescaling changes the sample‑covariance matrix that PCA diagonalises.<ref name="Holmes2023">
{{cite book
|last=Holmes
|first=Mark H.
|title=Introduction to Scientific Computing and Data Analysis
|series=Texts in Computational Science and Engineering
|edition=2nd
|year=2023
|publisher=Springer
|isbn=978-3-031-22429-4
|pages=475–490
}}
</ref>
 
Let <math>\mathbf X_\text{c}</math> be the ''centered'' data matrix (''n'' rows, ''p'' columns) and define the covariance
<math>\Sigma = \frac{1}{n}\,\mathbf X_\text{c}^{\mathsf T}\mathbf X_\text{c}.</math>
If the <math>j</math>‑th variable is multiplied by a factor <math>\alpha_j</math> we obtain
<math>\mathbf X_\text{c}^{(\alpha)} = \mathbf X_\text{c}D,\qquad
D = \operatorname{diag}(\alpha_1,\ldots,\alpha_p).</math>
Hence the new covariance is
<math>\Sigma^{(\alpha)} = D^{\mathsf T}\,\Sigma\,D.</math>
 
Because <math>\Sigma^{(\alpha)} = D^{\mathsf T}\Sigma D</math> is not a similarity transformation of <math>\Sigma</math> (unless all the <math>\alpha_j</math> are equal), its eigenvectors generally differ from those of <math>\Sigma</math>: the principal axes rotate toward any column whose variance has been inflated, exactly as the two‑variable example below illustrates.
 
PCA is sensitive to the scaling of the variables. If we have just two variables and they have the same [[sample variance]] and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (which are the cosines of the rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space, where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
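The two-variable example can be reproduced numerically. In the following sketch (illustrative data and names only), rescaling the first variable by 100 moves the first loading vector from the 45° direction to nearly the first coordinate axis.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=1000)
data = np.column_stack([x, x])        # two perfectly correlated, equal-variance variables

def first_loading(M):
    """Return the first principal direction (loading vector) of M."""
    Mc = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(Mc, full_matrices=False)
    return Vt[0]

print(first_loading(data))                 # ~ (0.707, 0.707) up to sign: a 45 degree axis

rescaled = data * np.array([100.0, 1.0])   # multiply the first variable by 100
print(first_loading(rescaled))             # ~ (1.000, 0.010): almost aligned with variable 1
</syntaxhighlight>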
 
Classical PCA assumes the cloud of points has already been translated so its centroid is at the origin.<ref name="Holmes2023" />
 
Write each observation as
<math>\mathbf q_i = \boldsymbol\mu + \mathbf z_i,\qquad
\boldsymbol\mu = \tfrac{1}{n}\sum_{i=1}^{n}\mathbf q_i.</math>
 
Without subtracting <math>\boldsymbol\mu</math> we are in effect diagonalising
 
<math>\Sigma_{\text{unc}} \;=\; \boldsymbol\mu\boldsymbol\mu^{\mathsf T}
\;+\;\tfrac{1}{n}\,\mathbf Z^{\mathsf T}\mathbf Z,</math>

where <math>\mathbf Z</math> is the centered matrix with rows <math>\mathbf z_i^{\mathsf T}</math>.
The rank‑one term <math>\boldsymbol\mu\boldsymbol\mu^{\mathsf T}</math> often dominates when the mean lies far from the origin relative to the spread of the data, forcing the leading eigenvector to point almost exactly toward the mean and obliterating any structure in the centered part <math>\mathbf Z</math>.
After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance.
 
Mean subtraction (a.k.a. "mean centering") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the [[Minimum mean square error|mean square error]] of the approximation of the data.<ref>A. A. Miranda, Y. A. Le Borgne, and G. Bontempi. [http://www.ulb.ac.be/di/map/yleborgn/pub/NPL_PCA_07.pdf New Routes from Minimal Approximation Error to Principal Components], Volume 27, Number 3 / June, 2008, Neural Processing Letters, Springer</ref>
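The effect can be demonstrated with a small sketch, assuming a synthetic cloud whose mean lies far from the origin; without centering, the leading axis points toward the mean rather than along the direction of maximal variance.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
Z = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])   # centered cloud, wider along x
mu = np.array([50.0, 50.0])
Q = Z + mu                                             # raw, uncentered observations

# Without mean subtraction the leading axis points toward the mean ...
_, _, Vt_raw = np.linalg.svd(Q, full_matrices=False)
print(Vt_raw[0])        # ~ (0.707, 0.707): the direction of mu, not of maximal variance

# ... after mean subtraction it recovers the direction of maximal variance
_, _, Vt_centered = np.linalg.svd(Q - Q.mean(axis=0), full_matrices=False)
print(Vt_centered[0])   # ~ (1, 0) up to sign
</syntaxhighlight>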
 
Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: ''Pearson Product-Moment Correlation''). Also see the article by Kromrey & Foster-Johnson (1998) on ''"Mean-centering in Moderated Regression: Much Ado About Nothing"''. Since [[Covariance matrix#Relation to the correlation matrix|covariances are correlations of normalized variables]] ([[Standard score#Calculation|Z- or standard-scores]]) a PCA based on the correlation matrix of '''X''' is [[Equality (mathematics)|equal]] to a PCA based on the covariance matrix of '''Z''', the standardized version of '''X'''.
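The equality of the two approaches can be checked numerically. This is a minimal sketch; the population-variance convention used for the Z-scores is an assumption made here so that the two matrices match exactly.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 3)) * np.array([1.0, 10.0, 100.0])   # very different scales

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardized variables (Z-scores)

corr_X = np.corrcoef(X, rowvar=False)           # correlation matrix of X
cov_Z = np.cov(Z, rowvar=False, bias=True)      # covariance matrix of Z

# The two matrices coincide, so their eigenvectors (the principal directions) do too
assert np.allclose(corr_X, cov_Z)
</syntaxhighlight>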
 
PCA is a popular primary technique in [[pattern recognition]]. It is not, however, optimized for class separability.<ref>{{Cite book| last=Fukunaga|first=Keinosuke|author-link=Keinosuke Fukunaga | title = Introduction to Statistical Pattern Recognition |publisher=Elsevier | year = 1990 | url=https://dl.acm.org/doi/book/10.5555/92131| isbn=978-0-12-269851-4}}</ref> Nevertheless, it has been used to quantify the distance between two or more classes by calculating the center of mass for each class in principal component space and reporting the Euclidean distance between those centers of mass.<ref>{{cite journal|last1=Alizadeh|first1=Elaheh|last2=Lyons|first2=Samanthe M|last3=Castle|first3=Jordan M|last4=Prasad|first4=Ashok|title=Measuring systematic changes in invasive cancer cell shape using Zernike moments|journal=Integrative Biology|date=2016|volume=8|issue=11|pages=1183–1193|doi=10.1039/C6IB00100A|pmid=27735002|url=https://pubs.rsc.org/en/Content/ArticleLanding/2016/IB/C6IB00100A|url-access=subscription}}</ref> [[Linear discriminant analysis]] is an alternative which is optimized for class separability.
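A sketch of this use on hypothetical two-class data computes the class centers of mass in principal-component space and their Euclidean distance; the data and variable names are illustrative only.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)
class_a = rng.normal(loc=[0.0, 0.0, 0.0], size=(100, 3))
class_b = rng.normal(loc=[4.0, 0.0, 0.0], size=(100, 3))
X = np.vstack([class_a, class_b])

# Project every observation onto the first two principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Euclidean distance between the class centers of mass in principal-component space
centroid_a = scores[:100].mean(axis=0)
centroid_b = scores[100:].mean(axis=0)
print(np.linalg.norm(centroid_a - centroid_b))   # ~ 4, the true class separation
</syntaxhighlight>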
 
== Table of symbols and abbreviations ==
The applicability of PCA as described above is limited by certain (tacit) assumptions<ref>Jonathon Shlens, [https://arxiv.org/abs/1404.1100 A Tutorial on Principal Component Analysis.]</ref> made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see [[Kernel principal component analysis|kernel PCA]]).
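A brief sketch of this situation uses scikit-learn's <code>KernelPCA</code> on the classic two-concentric-circles example; the parameter <code>gamma=10</code> is an illustrative choice, not a recommendation.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric rings: the structure is nonlinear, so linear PCA cannot separate them
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear_scores = PCA(n_components=2).fit_transform(X)
kernel_scores = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# The RBF-kernel scores separate the two rings far better than the linear scores;
# compare the group means along the first component of each projection.
for name, scores in [("linear", linear_scores), ("kernel", kernel_scores)]:
    print(name, scores[y == 0, 0].mean(), scores[y == 1, 0].mean())
</syntaxhighlight>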
 
Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes,<ref name="soummer12"/> and forward modeling has to be performed to recover the true magnitude of the signals.<ref name="pueyo16">{{Cite journal|arxiv= 1604.06097 |last1= Pueyo|first1= Laurent |title= Detection and Characterization of Exoplanets using Projections on Karhunen Loeve Eigenimages: Forward Modeling |journal= The Astrophysical Journal |volume= 824|issue= 2|pages= 117|year= 2016|doi= 10.3847/0004-637X/824/2/117|bibcode = 2016ApJ...824..117P|s2cid= 118349503|doi-access= free}}</ref> As an alternative method, [[non-negative matrix factorization]], which restricts the factor matrices to non-negative elements, is well-suited for astrophysical observations.<ref name="blantonRoweis07"/><ref name="zhu16"/><ref name="ren18"/> See more at [[#Non-negative matrix factorization|the relation between PCA and non-negative matrix factorization]].
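A minimal sketch of the contrast, assuming synthetic non-negative data and scikit-learn's <code>NMF</code> implementation (all parameter choices are illustrative):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(8)
# Synthetic non-negative data (e.g. fluxes): a mixture of two non-negative "spectra"
sources = rng.random((2, 50))
weights = rng.random((300, 2))
X = weights @ sources                    # every entry of X is non-negative

# PCA components generally contain negative entries even though the data do not
pca_components = PCA(n_components=2).fit(X).components_
print((pca_components < 0).any())        # typically True

# NMF constrains both factor matrices to be non-negative
model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)               # non-negative weights
H = model.components_                    # non-negative components
print((W < 0).any(), (H < 0).any())      # False False
</syntaxhighlight>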
 
PCA is also at a disadvantage if the data have not been standardized before the algorithm is applied. Because PCA re-expresses the data in terms of its principal components, the new variables are linear combinations of the original variables and cannot be interpreted in the same way the originals were. In addition, if too few components are retained, or PCA is otherwise applied carelessly, substantial information may be lost.<ref>{{cite web | title=What are the Pros and cons of the PCA? | website=i2tutorials | date=September 1, 2019 | url=https://www.i2tutorials.com/what-are-the-pros-and-cons-of-the-pca/ | access-date=June 4, 2021}}</ref>
<li>
'''Compute the cumulative energy content for each eigenvector'''
* The eigenvalues represent the distribution of the source data's energy{{Clarify|date=March 2011}} among each of the eigenvectors, where the eigenvectors form a [[basis (linear algebra)|basis]] for the data. The cumulative energy content ''g'' for the ''j''th eigenvector is the sum of the energy content across all of the eigenvalues from 1 through ''j'':{{Citation needed|date=March 2011}} <math display="block">g_j = \sum_{k=1}^j D_{kk} \qquad \text{for } j = 1,\dots,p </math>
</li>
<li>