Principal component analysis
The singular values (in '''Σ''') are the square roots of the [[eigenvalue]]s of the matrix '''X'''<sup>T</sup>'''X'''. Each eigenvalue is proportional to the portion of the "variance" (more correctly, of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions therefore tend to be small and may be dropped with minimal loss of information (see [[Principal component analysis#PCA and information theory|below]]). PCA is often used in this manner for [[dimensionality reduction]]. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements when compared, where applicable, to the [[discrete cosine transform]], and in particular to the DCT-II, which is simply known as the "DCT". [[Nonlinear dimensionality reduction]] techniques tend to be more computationally demanding than PCA.
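As a minimal numerical sketch (arbitrary synthetic data, not drawn from any source), the relation between the singular values of the centered data matrix and the eigenvalues of '''X'''<sup>T</sup>'''X''', together with the resulting "explained variance" shares, can be checked as follows:

<syntaxhighlight lang="python">
import numpy as np

# Arbitrary synthetic data: 200 observations of 5 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Xc = X - X.mean(axis=0)                        # centre each column on its mean

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]  # eigenvalues of X^T X, largest first

print(np.allclose(s**2, eigvals))              # True: singular values are the square roots of the eigenvalues

# Each eigenvalue's share of the total squared distance from the mean;
# dropping the dimensions with the smallest shares loses little "variance".
print(eigvals / eigvals.sum())
</syntaxhighlight>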
 
PCA is sensitive to the scaling of the variables. Mathematically, this sensitivity comes from the way a rescaling changes the sample covariance matrix that PCA diagonalises.<ref name="Holmes2023">
{{cite book
|last=Holmes
|first=Mark H.
|title=Introduction to Scientific Computing and Data Analysis
|series=Texts in Computational Science and Engineering
|edition=2nd
|year=2023
|publisher=Springer
|isbn=978-3-031-22429-4
|pages=475–490
}}
</ref>
 
Let <math>\mathbf X_\text{c}</math> be the ''centered'' data matrix (''n'' rows, ''p'' columns) and define the covariance matrix <math>\mathbf{C} = \tfrac{1}{n-1}\mathbf X_\text{c}^\mathsf{T}\mathbf X_\text{c}</math>. Multiplying the ''j''-th variable by a factor ''s'' multiplies the ''j''-th row and the ''j''-th column of <math>\mathbf{C}</math> by ''s'' (and its variance by ''s''<sup>2</sup>), so the eigenvectors, and hence the principal components, change.
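A brief sketch of this effect (hypothetical data; the scaling factor 100 is arbitrary), showing how a rescaling enters the covariance matrix defined above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (Xc.shape[0] - 1)        # sample covariance of the centred data

# Multiplying the first variable by 100 multiplies the first row and column of C
# by 100 (its variance by 100**2), so the eigenvectors PCA computes change too.
S = np.diag([100.0, 1.0, 1.0])
C_scaled = S @ C @ S
print(np.allclose(np.cov(Xc @ S, rowvar=False), C_scaled))   # True
</syntaxhighlight>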
 
If we have just two variables and they have the same [[sample variance]] and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space, where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
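The two-variable example above can be reproduced with a short sketch (synthetic data; the helper <code>first_pc</code> is introduced here only for the demonstration):

<syntaxhighlight lang="python">
import numpy as np

def first_pc(data):
    """Return the loadings of the first principal component (sign is arbitrary)."""
    data = data - data.mean(axis=0)
    _, _, Vt = np.linalg.svd(data, full_matrices=False)
    return Vt[0]

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
X = np.column_stack([x, x])              # equal variance, perfectly correlated

print(first_pc(X))                       # ~[0.707, 0.707] (up to sign): a 45° rotation

X_scaled = X * np.array([100.0, 1.0])    # change the "units" of the first variable
print(first_pc(X_scaled))                # ~[1.0, 0.01]: dominated by the rescaled variable

# Standardising each column to unit variance (i.e. working with the
# correlation matrix) restores the symmetric weights.
X_std = X_scaled / X_scaled.std(axis=0, ddof=1)
print(first_pc(X_std))                   # ~[0.707, 0.707] again
</syntaxhighlight>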