{{Short description|Method of data analysis}}
[[File:GaussianScatterPCA.svg|thumb|upright=1.3|PCA of a [[multivariate Gaussian distribution]] centered at (1, 3) with a standard deviation of 3 in roughly the (0.866, 0.5) direction and of 1 in the orthogonal direction. The vectors shown are the [[Eigenvalues and eigenvectors|eigenvectors]] of the [[covariance matrix]] scaled by the square root of the corresponding eigenvalue, and shifted so their tails are at the mean.]]
{{Machine learning bar}}
'''Principal component analysis''' ('''PCA''') is a [[Linear map|linear]] [[dimensionality reduction]] technique with applications in [[exploratory data analysis]], visualization and [[data preprocessing]].
The data is [[linear map|linearly transformed]] onto a new [[coordinate system]] such that the directions (principal components) capturing the largest variation in the data can be easily identified.
=== Dimensionality reduction ===
The transformation '''T''' = '''X''' '''W''' maps a data vector '''x'''<sub>(''i'')</sub> from an original space of ''p'' variables to a new space of ''p'' variables which are uncorrelated over the dataset.
To non-dimensionalize the centered data, let ''X<sub>c</sub>'' represent a characteristic value of the data ''X<sub>i</sub>'', given for example by:
* <math>\frac{1}{n} \|X\|_1</math> (mean absolute value), or
* <math>\frac{1}{\sqrt{n}} \|X\|_2</math> (normalized Euclidean norm),
for a dataset of size ''n''. These characteristic values are used to transform the original variables ''x'', ''y'' into non-dimensional variables ''p'', ''q'' (with ''Y<sub>c</sub>'' defined analogously for ''y''), such that <math>p_i = \frac{X_i}{X_c}, \quad q_i = \frac{Y_i}{Y_c},</math>
and the new variables are assumed to be linearly related: <math>q = \alpha p</math>.
To find the line <math>q = \alpha p</math> that best fits the data, we minimize the total squared perpendicular distance from the points to the line:
<math>E(\alpha) = \frac{1}{1 + \alpha^2} \sum_{i=1}^{n} (\alpha p_i - q_i)^2.</math> Setting the derivative of the error function to zero, <math>E'(\alpha) = 0</math>, yields <math>\alpha = \frac{1}{2} \left( -\lambda \pm \sqrt{\lambda^2 + 4} \right),</math> where <math>\lambda = \frac{p \cdot p - q \cdot q}{p \cdot q}</math>.<ref name="Holmes2023" />
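For illustration, the following is a minimal numerical sketch of this fit, assuming NumPy and synthetic two-variable data (the variable names are illustrative, not from the source): it non-dimensionalizes the variables, evaluates the closed-form slope <math>\alpha</math>, and checks that it matches the slope of the leading principal axis of the scaled data.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 3.0, size=200)            # first (roughly centered) variable
Y = 0.5 * X + rng.normal(0.0, 1.0, size=200)  # second variable: linear trend plus noise

# Characteristic values (here the mean absolute value) and non-dimensional variables
Xc, Yc = np.abs(X).mean(), np.abs(Y).mean()
p, q = X / Xc, Y / Yc

# Closed-form minimizer of E(alpha) = sum((alpha*p - q)**2) / (1 + alpha**2)
lam = (p @ p - q @ q) / (p @ q)
alpha = 0.5 * (-lam + np.sqrt(lam**2 + 4))    # the "+" root gives the minimum when p.q > 0

# The same line is given by the leading eigenvector of the second-moment matrix of (p, q)
M = np.array([[p @ p, p @ q], [p @ q, q @ q]])
w = np.linalg.eigh(M)[1][:, -1]               # eigenvector of the largest eigenvalue
print(alpha, w[1] / w[0])                     # the two slopes agree
</syntaxhighlight>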
[[File:PCA of Haplogroup J using 37 STRs.png|thumb|right|A principal components analysis scatterplot of [[Y-STR]] [[haplotype]]s calculated from repeat-count values for 37 Y-chromosomal STR markers from 354 individuals.<br /> PCA has successfully found linear combinations of the markers that separate out different clusters corresponding to different lines of individuals' Y-chromosomal genetic descent.]]
:<math>\mathbf{T}_L = \mathbf{U}_L\mathbf{\Sigma}_L = \mathbf{X} \mathbf{W}_L </math>
The truncation of a matrix '''M''' or '''T''' using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of [[Rank (linear algebra)|rank]] ''L'' to the original matrix, in the sense of the difference between the two having the smallest possible [[Frobenius norm]], a result known as the [[Low-rank approximation#Proof of Eckart–Young–Mirsky theorem (for Frobenius norm)|Eckart–Young theorem]] [1936].
<blockquote>
where V<sub>k</sub> consists of the first k columns of V. Moreover, the relative residual variance is
<math>R(k)=\frac{\sum_{j=k+1}^{m}\sigma_{j}^{2}}{\sum_{j=1}^{m}\sigma_{j}^{2}}</math>.
</blockquote><ref name="Holmes2023" />
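As a concrete illustration, here is a minimal sketch, assuming NumPy and synthetic data, of the truncated scores <math>\mathbf{T}_L = \mathbf{U}_L\mathbf{\Sigma}_L = \mathbf{X} \mathbf{W}_L</math>, the Eckart–Young rank-''L'' reconstruction, and the relative residual variance ''R''(''k'') computed from the singular values.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X -= X.mean(axis=0)                                # column-center the data

U, s, Wt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) W^T

L = 2
T_L = U[:, :L] * s[:L]                             # truncated scores U_L Sigma_L ...
assert np.allclose(T_L, X @ Wt[:L].T)              # ... equal to X W_L

# Rank-L reconstruction; optimal in the Frobenius norm (Eckart-Young)
X_L = T_L @ Wt[:L]
assert np.isclose(np.linalg.norm(X - X_L)**2, np.sum(s[L:]**2))

# Relative residual variance R(k) = sum_{j>k} sigma_j^2 / sum_j sigma_j^2
total = np.sum(s**2)
R = np.array([np.sum(s[k:]**2) / total for k in range(1, len(s) + 1)])
print(R)                                           # decreases from R(1) down to R(m) = 0
</syntaxhighlight>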
== Further considerations ==
The singular values (in '''Σ''') are the square roots of the [[eigenvalue]]s of the matrix '''X'''<sup>T</sup>'''X'''. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see [[#PCA and information theory|below]]). PCA is often used in this manner for [[dimensionality reduction]]. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements compared, for example, and when applicable, to the [[discrete cosine transform]], and in particular to the DCT-II, which is simply known as the "DCT". [[Nonlinear dimensionality reduction]] techniques tend to be more computationally demanding than PCA.
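These two identities (the squared singular values equal the eigenvalues of '''X'''<sup>T</sup>'''X''', and the eigenvalues sum to the total squared distance from the mean) can be checked numerically; the following is a minimal sketch assuming NumPy and synthetic centered data.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X -= X.mean(axis=0)                          # center the points on their multidimensional mean

s = np.linalg.svd(X, compute_uv=False)       # singular values of the centered data
eig = np.linalg.eigvalsh(X.T @ X)[::-1]      # eigenvalues of X^T X, in descending order

print(np.allclose(s**2, eig))                # True: singular values are square roots of the eigenvalues
print(np.isclose(eig.sum(), np.sum(X**2)))   # True: eigenvalue sum = total squared distance from the mean
</syntaxhighlight>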
PCA is sensitive to the scaling of the variables. Mathematically this sensitivity comes from the way a rescaling changes the sample covariance matrix that PCA diagonalises.<ref name="Holmes2023">{{cite book |last=Holmes |first=M. |title=Introduction to Scientific Computing and Data Analysis |edition=2nd |series=Texts in Computational Science and Engineering |pages=475–490 |year=2023 |publisher=Springer |isbn=978-3-031-22429-4}}</ref>
Let <math>\mathbf X_\text{c}</math> be the ''centered'' data matrix (''n'' rows, ''p'' columns) and define the sample covariance matrix
<math>\Sigma = \frac{1}{n}\,\mathbf X_\text{c}^{\mathsf T}\mathbf X_\text{c}.</math>
If the <math>j</math>‑th variable is multiplied by a factor <math>\alpha_j</math> we obtain
<math>\mathbf X_\text{c}^{(\alpha)} = \mathbf X_\text{c}D,\qquad
D = \operatorname{diag}(\alpha_1,\ldots,\alpha_p).</math>
Hence the new covariance is
<math>\Sigma^{(\alpha)} = D^{\mathsf T}\,\Sigma\,D.</math>
Because <math>\Sigma^{(\alpha)} = D^{\mathsf T}\Sigma D</math> is a congruence rather than a similarity transformation of <math>\Sigma</math>, its eigenvalues and eigenvectors are not simply rescaled copies of those of <math>\Sigma</math>: inflating the variance of one variable pulls the leading principal axes toward that variable, exactly as the following two-variable example illustrates.

If we have just two variables and they have the same [[sample variance]] and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space, where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
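The following is a minimal sketch of this effect, assuming NumPy and synthetic data (the variable names are illustrative): with equal variances the loadings of the first component are equal, and after multiplying the first variable by 100 the first component aligns almost entirely with it.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x + 0.01 * rng.normal(size=1000)              # same variance, almost completely correlated

def first_pc(data):
    """Unit-norm loadings of the first principal component of column-wise data."""
    vals, vecs = np.linalg.eigh(np.cov(data, rowvar=False))
    return vecs[:, -1]                            # eigenvector of the largest eigenvalue

print(first_pc(np.column_stack([x, y])))          # ~ (0.707, 0.707) up to sign: the 45-degree rotation
print(first_pc(np.column_stack([100 * x, y])))    # ~ (1.00, 0.01) up to sign: dominated by the rescaled variable
</syntaxhighlight>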
Classical PCA assumes the cloud of points has already been translated so its centroid is at the origin.<ref name="Holmes2023" />
Write each observation as
<math>\mathbf q_i = \boldsymbol\mu + \mathbf z_i,\qquad
\boldsymbol\mu = \tfrac{1}{n}\sum_{i=1}^{n}\mathbf q_i.</math>
Without subtracting <math>\boldsymbol\mu</math> we are in effect diagonalising
<math>\Sigma_{\text{unc}} \;=\; \boldsymbol\mu\boldsymbol\mu^{\mathsf T}
\;+\;\tfrac{1}{n}\,\mathbf Z^{\mathsf T}\mathbf Z,</math>
where <math>\mathbf Z</math> is the centered matrix.
The rank‑one term <math>\boldsymbol\mu\boldsymbol\mu^{\mathsf T}</math> often dominates, forcing the leading eigenvector to point almost exactly toward the mean and masking any structure in the centered part <math>\mathbf Z</math>.
After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance.
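A minimal sketch of this effect, assuming NumPy and synthetic two-dimensional data with a large offset mean (all names illustrative): without centering, the leading eigenvector of the second-moment matrix points almost exactly at the mean, while after centering it recovers the true direction of maximal variance.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
# Spread mostly along the diagonal (1, 1)/sqrt(2), plus a large offset mean
Z = rng.normal(size=(500, 2)) * np.array([2.0, 0.2])
Z = Z @ (np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2))
Q = Z + np.array([50.0, -30.0])

def leading_axis(M):
    """Unit eigenvector of the largest eigenvalue of a symmetric matrix."""
    return np.linalg.eigh(M)[1][:, -1]

n = len(Q)
uncentered = Q.T @ Q / n                           # ~ mu mu^T + Z^T Z / n
centered = np.cov(Q, rowvar=False, bias=True)      # covariance of the centered data

mu = Q.mean(axis=0)
print(leading_axis(uncentered), mu / np.linalg.norm(mu))  # nearly parallel (up to sign)
print(leading_axis(centered))                             # ~ (0.707, 0.707) up to sign: the true principal axis
</syntaxhighlight>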
Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: ''Pearson Product-Moment Correlation''). Also see the article by Kromrey & Foster-Johnson (1998) on ''"Mean-centering in Moderated Regression: Much Ado About Nothing"''. Since [[Covariance matrix#Relation to the correlation matrix|covariances are correlations of normalized variables]] ([[Standard score#Calculation|Z- or standard-scores]]) a PCA based on the correlation matrix of '''X''' is [[Equality (mathematics)|equal]] to a PCA based on the covariance matrix of '''Z''', the standardized version of '''X'''.
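A minimal sketch of this equivalence, assuming NumPy and synthetic data with mixed scales: the correlation matrix of '''X''' coincides with the covariance matrix of the standardized data '''Z''', so both yield the same eigenvectors and hence the same PCA.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5)) * np.array([1.0, 10.0, 0.1, 5.0, 2.0])  # columns on very different scales

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)     # standardized (Z-scored) columns

corr_X = np.corrcoef(X, rowvar=False)                # correlation matrix of X
cov_Z = np.cov(Z, rowvar=False)                      # covariance matrix of Z
print(np.allclose(corr_X, cov_Z))                    # True: the two PCAs are identical
</syntaxhighlight>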
PCA is a popular primary technique in [[pattern recognition]]. It is not, however, optimized for class separability.<ref>{{Cite book| last=Fukunaga|first=Keinosuke|author-link=Keinosuke Fukunaga | title = Introduction to Statistical Pattern Recognition |publisher=Elsevier | year = 1990 | url=https://dl.acm.org/doi/book/10.5555/92131| isbn=978-0-12-269851-4}}</ref> However, it has been used to quantify the distance between two or more classes by calculating center of mass for each class in principal component space and reporting Euclidean distance between center of mass of two or more classes.<ref>{{cite journal|last1=Alizadeh|first1=Elaheh|last2=Lyons|first2=Samanthe M|last3=Castle|first3=Jordan M|last4=Prasad|first4=Ashok|title=Measuring systematic changes in invasive cancer cell shape using Zernike moments|journal=Integrative Biology|date=2016|volume=8|issue=11|pages=1183–1193|doi=10.1039/C6IB00100A|pmid=27735002|url=https://pubs.rsc.org/en/Content/ArticleLanding/2016/IB/C6IB00100A|url-access=subscription}}</ref> The [[linear discriminant analysis]] is an alternative which is optimized for class separability.
== Table of symbols and abbreviations ==
<li>
'''Compute the cumulative energy content for each eigenvector'''
* The eigenvalues represent the distribution of the source data's energy{{Clarify|date=March 2011}} among each of the eigenvectors, where the eigenvectors form a [[basis (linear algebra)|basis]] for the data. The cumulative energy content ''g'' for the ''j''th eigenvector is the sum of the energy content across all of the eigenvalues from 1 through ''j'':{{Citation needed|date=March 2011}} <math display="block">g_j = \sum_{k=1}^j D_{kk} \qquad \text{for } j = 1,\dots,p </math>
</li>
<li>