Principal component analysis

Mean-centering is unnecessary when performing a principal component analysis on a correlation matrix, as the data are already centered by the calculation of the correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: ''Pearson Product-Moment Correlation''). See also the article by Kromrey & Foster-Johnson (1998) on ''"Mean-centering in Moderated Regression: Much Ado About Nothing"''. Since [[Covariance matrix#Relation to the correlation matrix|correlations are the covariances of normalized variables]] ([[Standard score#Calculation|Z- or standard scores]]), a PCA based on the correlation matrix of '''X''' is [[Equality (mathematics)|equal]] to a PCA based on the covariance matrix of '''Z''', the standardized version of '''X'''.
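This equivalence can be checked numerically. The following is a minimal sketch using NumPy; the random data and variable names are assumptions made for illustration only:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # 100 observations of 4 variables

# Z: the standardized version of X (column means 0, standard deviations 1).
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

corr_X = np.corrcoef(X, rowvar=False)    # correlation matrix of X
cov_Z = np.cov(Z, rowvar=False)          # covariance matrix of Z

# The two matrices coincide, so their eigendecompositions (and hence the
# principal components) coincide as well.
print(np.allclose(corr_X, cov_Z))        # True

eigenvalues, eigenvectors = np.linalg.eigh(corr_X)
</syntaxhighlight>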
 
PCA is a popular primary technique in [[pattern recognition]]. It is not, however, optimized for class separability.<ref>{{Cite book| last=Fukunaga|first=Keinosuke|author-link=Keinosuke Fukunaga | title = Introduction to Statistical Pattern Recognition |publisher=Elsevier | year = 1990 | url=https://dl.acm.org/doi/book/10.5555/92131| isbn=978-0-12-269851-4}}</ref> It has nevertheless been used to quantify the separation between classes, by computing the center of mass of each class in principal component space and reporting the Euclidean distance between those centers.<ref>{{cite journal|last1=Alizadeh|first1=Elaheh|last2=Lyons|first2=Samanthe M|last3=Castle|first3=Jordan M|last4=Prasad|first4=Ashok|title=Measuring systematic changes in invasive cancer cell shape using Zernike moments|journal=Integrative Biology|date=2016|volume=8|issue=11|pages=1183–1193|doi=10.1039/C6IB00100A|pmid=27735002|url=https://pubs.rsc.org/en/Content/ArticleLanding/2016/IB/C6IB00100A|url-access=subscription}}</ref> [[Linear discriminant analysis]] is an alternative that is optimized for class separability.
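The centroid-distance approach can be sketched briefly. The following example uses scikit-learn's PCA on synthetic two-class data; both the library choice and the data are assumptions made for illustration, not details from the cited study:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Two synthetic classes of 50 samples in 10 dimensions (illustrative only).
class_a = rng.normal(loc=0.0, size=(50, 10))
class_b = rng.normal(loc=1.5, size=(50, 10))

X = np.vstack([class_a, class_b])
scores = PCA(n_components=2).fit_transform(X)    # coordinates in PC space

centroid_a = scores[:50].mean(axis=0)            # center of mass of class A
centroid_b = scores[50:].mean(axis=0)            # center of mass of class B
print(np.linalg.norm(centroid_a - centroid_b))   # inter-class distance
</syntaxhighlight>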
 
== Table of symbols and abbreviations ==
<li>
'''Compute the cumulative energy content for each eigenvector'''
* The eigenvalues represent the distribution of the source data's energy{{Clarify|date=March 2011}} among each of the eigenvectors, where the eigenvectors form a [[basis (linear algebra)|basis]] for the data. The cumulative energy content ''g'' for the ''j''th eigenvector is the sum of the energy content across all of the eigenvalues from 1 through ''j'':{{Citation needed|date=March 2011}} <math display="block">g_j = \sum_{k=1}^j D_{kk} \qquad \text{for } j = 1,\dots,p </math>
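In code, this step reduces to a cumulative sum over the sorted eigenvalues. A minimal NumPy sketch, with illustrative eigenvalues and a hypothetical 90% threshold for the later component-selection step:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative eigenvalues (the diagonal of D), sorted in decreasing order
# as in the preceding steps; these values are not from any real dataset.
eigenvalues = np.array([4.0, 2.0, 1.0, 0.5])

g = np.cumsum(eigenvalues)        # g[j-1] = sum of D_kk for k = 1..j
print(g)                          # [4.  6.  7.  7.5]

# A common follow-up: choose the smallest L whose cumulative energy
# reaches a chosen fraction (here, 90%) of the total.
fraction = g / g[-1]
L = int(np.searchsorted(fraction, 0.90)) + 1   # L = 3 for these values
</syntaxhighlight>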
</li>
<li>