Exploratory factor analysis

In [[multivariate statistics]], '''exploratory factor analysis''' ('''EFA''') is a statistical method used to uncover the underlying structure of a relatively large set of [[Variable (research)|variables]]. EFA is a technique within [[factor analysis]] whose overarching goal is to identify the underlying relationships between measured variables.<ref name=Norris>{{cite journal|last=Norris|first=Megan|author2=Lecavalier, Luc|title=Evaluating the Use of Exploratory Factor Analysis in Developmental Disability Psychological Research|journal=Journal of Autism and Developmental Disorders|date=17 July 2009|volume=40|issue=1|pages=8–20|doi=10.1007/s10803-009-0816-2|pmid=19609833}}</ref> It is commonly used by researchers when developing a scale (a ''scale'' is a collection of questions used to measure a particular research topic) and serves to identify a set of [[Latent variable|latent constructs]] underlying a battery of measured variables.<ref name=Fabrigar>{{cite journal|last=Fabrigar|first=Leandre R.|author2=Wegener, Duane T. |author3=MacCallum, Robert C. |author4=Strahan, Erin J. |title=Evaluating the use of exploratory factor analysis in psychological research.|journal=Psychological Methods|date=1 January 1999|volume=4|issue=3|pages=272–299|doi=10.1037/1082-989X.4.3.272|url=http://www.statpower.net/Content/312/Handout/Fabrigar1999.pdf}}</ref> It should be used when the researcher has no ''a priori'' hypothesis about factors or patterns of measured variables.<ref name=Finch>{{cite journal | last1 = Finch | first1 = J. F. | last2 = West | first2 = S. G. | year = 1997 | title = The investigation of personality structure: Statistical models | url = | journal = Journal of Research in Personality | volume = 31 | issue = 4| pages = 439–485 | doi=10.1006/jrpe.1997.2194}}</ref> ''Measured variables'' are any one of several attributes of people that may be observed and measured. Examples of measured variables could be the physical height, weight, and pulse rate of a human being. Usually, researchers would have a large number of measured variables, which are assumed to be related to a smaller number of "unobserved" factors. Researchers must carefully consider the number of measured variables to include in the analysis.<ref name =Fabrigar/> EFA procedures are more accurate when each factor is represented by multiple measured variables in the analysis.
 
EFA is based on the common factor model.<ref name =Norris/> In this model, manifest variables are expressed as a function of common factors, unique factors, and errors of measurement. Each unique factor influences only one manifest variable and does not explain correlations between manifest variables. Common factors influence more than one manifest variable, and "factor loadings" are measures of the influence of a common factor on a manifest variable.<ref name =Norris/> The EFA procedure is primarily concerned with identifying the common factors and the manifest variables related to them.
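
Formally, the common factor model is often written as follows (the notation here is a standard presentation and is not taken from a specific source cited in this article):

<math display="block">x_j = \lambda_{j1} f_1 + \lambda_{j2} f_2 + \cdots + \lambda_{jk} f_k + u_j, \qquad j = 1, \ldots, p,</math>

where <math>x_j</math> is a standardized manifest variable, <math>f_1, \ldots, f_k</math> are the common factors, <math>\lambda_{jm}</math> is the factor loading of variable <math>j</math> on factor <math>m</math>, and <math>u_j</math> combines the unique factor and measurement error for variable <math>j</math>, assumed uncorrelated with the common factors and with the other unique terms.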
 
===Maximum likelihood (ML)===
The maximum likelihood method has many advantages: it allows researchers to compute a wide range of indexes of the [[goodness of fit]] of the model, to test the [[statistical significance]] of factor loadings, to calculate correlations among factors, and to compute [[confidence interval]]s for these parameters.<ref>{{cite journal | last1 = Cudeck | first1 = R. | last2 = O'Dell | first2 = L. L. | year = 1994 | title = Applications of standard error estimates in unrestricted factor analysis: Significance tests for factor loadings and correlations | journal = Psychological Bulletin | volume = 115 | issue = 3| pages = 475–487 | doi = 10.1037/0033-2909.115.3.475 }}</ref> ML is the best choice when data are normally distributed because “it allows for the computation of a wide range of indexes of the goodness of fit of the model [and] permits statistical significance testing of factor loadings and correlations among factors and the computation of confidence intervals”.<ref name =Fabrigar/>
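
As a minimal illustration of the mechanics (a sketch only, not a recommended workflow), maximum likelihood EFA can be run in base R with <code>factanal()</code>, which also reports the likelihood ratio test of the hypothesis that the chosen number of factors is sufficient. The data below are simulated from a known two-factor structure purely for demonstration; a real analysis would use the researcher's measured variables.

<syntaxhighlight lang="r">
# Simulate 6 measured variables driven by 2 common factors (demonstration data only).
set.seed(1)
n <- 300
f <- matrix(rnorm(n * 2), n, 2)                            # latent common factors
L <- matrix(c(.8, .7, .6, 0, 0, 0,
              0, 0, 0, .8, .7, .6), nrow = 6, ncol = 2)    # true factor loadings
X <- f %*% t(L) + matrix(rnorm(n * 6, sd = 0.5), n, 6)     # manifest variables

# Maximum likelihood exploratory factor analysis.
fit <- factanal(X, factors = 2, rotation = "varimax")
print(fit$loadings, cutoff = 0.3)   # estimated loadings (small values suppressed)
fit$PVAL                            # likelihood ratio test: H0 "2 factors are sufficient"
</syntaxhighlight>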
 
===Principal axis factoring (PAF)===
''Underfactoring'' occurs when too few factors are included in a model. If not enough factors are included in a model, there is likely to be substantial error. Measured variables that load onto a factor not included in the model can falsely load on factors that are included, altering true factor loadings. This can result in rotated solutions in which two factors are combined into a single factor, obscuring the true factor structure.
 
There are a number of procedures designed to determine the optimal number of factors to retain in EFA. These include Kaiser's (1960) eigenvalue-greater-than-one rule (or K1 rule),<ref>{{cite journal|last=Kaiser|first=H.F.|title=The application of electronic computers to factor analysis|journal=Educational and Psychological Measurement|year=1960|volume=20|pages=141–151|doi=10.1177/001316446002000116}}</ref> Cattell's (1966) [[scree plot]],<ref name="Cattell, R. B. 1966">Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245–276.</ref> Revelle and Rocklin's (1979) very simple structure criterion,<ref>{{cite journal | last1 = Revelle | first1 = W. | last2 = Rocklin | first2 = T. | year = 1979 | title = Very simple structure-alternative procedure for estimating the optimal number of interpretable factors | journal = Multivariate Behavioral Research | volume = 14 | issue = 4| pages = 403–414 | doi = 10.1207/s15327906mbr1404_2 | pmid = 26804437 }}</ref> model comparison techniques,<ref>{{cite journal | last1 = Fabrigar | first1 = Leandre R. | last2 = Wegener | first2 = Duane T. | last3 = MacCallum | first3 = Robert C. | last4 = Strahan | first4 = Erin J. | year = 1999 | title = Evaluating the use of exploratory factor analysis in psychological research. | journal = Psychological Methods | volume = 4 | issue = 3| pages = 272–299 | doi = 10.1037/1082-989X.4.3.272 }}</ref> Raiche, Roipel, and Blais's (2006) acceleration factor and optimal coordinates,<ref>Raiche, G., Roipel, M., & Blais, J. G. (2006). Non-graphical solutions for Cattell's scree test. Paper presented at the International Annual Meeting of the Psychometric Society, Montreal. Retrieved December 10, 2012 from {{cite web |url=https://ppw.kuleuven.be/okp/_pdf/Raiche2013NGSFC.pdf |title=Archived copy |access-date=2013-05-03 |url-status=live |archive-url=https://web.archive.org/web/20131021052759/https://ppw.kuleuven.be/okp/_pdf/Raiche2013NGSFC.pdf |archive-date=2013-10-21 }}</ref> Velicer's (1976) minimum average partial,<ref name=Velicer>{{cite journal|last=Velicer|first=W.F.|title=Determining the number of components from the matrix of partial correlations|journal=Psychometrika|year=1976|volume=41|issue=3|pages=321–327|doi=10.1007/bf02293557}}</ref> Horn's (1965) [[parallel analysis]], and Ruscio and Roche's (2012) comparison data.<ref name =Ruscio>{{cite journal|last=Ruscio|first=J.|author2=Roche, B.|title=Determining the number of factors to retain in an exploratory factor analysis using comparison data of a known factorial structure|journal=Psychological Assessment|year=2012|volume=24|issue=2|pages=282–292|doi=10.1037/a0025697|pmid=21966933}}</ref> Recent simulation studies assessing the robustness of such techniques suggest that the latter five can better assist practitioners in modelling data judiciously.<ref name =Ruscio/> These five modern techniques are now easily accessible through integrated use of IBM SPSS Statistics software (SPSS) and R (R Development Core Team, 2011). See Courtney (2013)<ref name="pareonline.net">Courtney, M. G. R. (2013). Determining the number of factors to retain in EFA: Using the SPSS R-Menu v2.0 to make more judicious estimations. ''Practical Assessment, Research and Evaluation'', 18(8). Available online:
{{cite web |url=http://pareonline.net/getvn.asp?v=18&n=8 |title=Archived copy |access-date=2014-06-08 |url-status=live |archive-url=https://web.archive.org/web/20150317145450/http://pareonline.net/getvn.asp?v=18&n=8 |archive-date=2015-03-17 }}</ref> for guidance on how to carry out these procedures for continuous, ordinal, and heterogeneous (continuous and ordinal) data.
 
With the exception of Revelle and Rocklin's (1979) very simple structure criterion, model comparison techniques, and Velicer's (1976) minimum average partial, all other procedures rely on the analysis of eigenvalues. The ''eigenvalue'' of a factor represents the amount of variance of the variables accounted for by that factor. The lower the eigenvalue, the less that factor contributes to explaining the variance of the variables.<ref name =Norris/>
 
===Kaiser's (1960) eigenvalue-greater-than-one rule (K1 or Kaiser criterion)===
Compute the eigenvalues for the correlation matrix and determine how many of these eigenvalues are greater than 1. This number is the number of factors to include in the model. A disadvantage of this procedure is that it is quite arbitrary (e.g., an eigenvalue of 1.01 is included whereas an eigenvalue of .99 is not). This procedure often leads to overfactoring and sometimes underfactoring. Therefore, this procedure should not be used.<ref name =Fabrigar /> A variation of the K1 criterion has been proposed to lessen the severity of these problems: the researcher calculates [[confidence interval]]s for each eigenvalue and retains only those factors whose entire confidence interval is greater than 1.0.<ref>{{cite journal | last1 = Larsen | first1 = R. | last2 = Warne | first2 = R. T. | year = 2010 | title = Estimating confidence intervals for eigenvalues in exploratory factor analysis | journal = Behavior Research Methods | volume = 42 | issue = 3| pages = 871–876 | doi = 10.3758/BRM.42.3.871 | pmid = 20805609 | doi-access = free }}</ref><ref>{{cite journal | last1 = Warne | first1 = R. T. | last2 = Larsen | first2 = R. | year = 2014 | title = Evaluating a proposed modification of the Guttman rule for determining the number of factors in an exploratory factor analysis | journal = Psychological Test and Assessment Modeling | volume = 56 | pages = 104–123 }}</ref>
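
The counting itself is straightforward; the following base-R sketch shows it on simulated demonstration data (the data-generating code is illustrative only):

<syntaxhighlight lang="r">
# Demonstration data: 6 measured variables driven by 2 common factors.
set.seed(1)
f <- matrix(rnorm(300 * 2), 300, 2)
L <- matrix(c(.8, .7, .6, 0, 0, 0,
              0, 0, 0, .8, .7, .6), nrow = 6, ncol = 2)
X <- f %*% t(L) + matrix(rnorm(300 * 6, sd = 0.5), 300, 6)

# K1 / Kaiser criterion: count eigenvalues of the correlation matrix greater than 1.
ev <- eigen(cor(X), symmetric = TRUE, only.values = TRUE)$values
sum(ev > 1)
</syntaxhighlight>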
 
===Cattell's (1966) scree plot===
{{Main|Scree plot}}
Compute the eigenvalues for the correlation matrix and plot the values from largest to smallest. Examine the graph to determine the last substantial drop in the magnitude of eigenvalues. The number of plotted points before the last drop is the number of factors to include in the model.<ref name="Cattell, R. B. 1966"/> This method has been criticized because of its subjective nature (i.e., there is no clear objective definition of what constitutes a substantial drop).<ref>{{cite journal | last1 = Kaiser | first1 = H. F. | year = 1970 | title = A second generation little jiffy | url = | journal = Psychometrika | volume = 35 | issue = | pages = 401–415 | doi = 10.1007/bf02291817 }}</ref> As this procedure is subjective, Courtney (2013) does not recommend it.<ref name="pareonline.net"/>
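
A scree plot can be produced with a few lines of base R; as above, the simulated data are for demonstration only, and the "last substantial drop" must still be judged by eye:

<syntaxhighlight lang="r">
# Demonstration data: 6 measured variables driven by 2 common factors.
set.seed(1)
f <- matrix(rnorm(300 * 2), 300, 2)
L <- matrix(c(.8, .7, .6, 0, 0, 0,
              0, 0, 0, .8, .7, .6), nrow = 6, ncol = 2)
X <- f %*% t(L) + matrix(rnorm(300 * 6, sd = 0.5), 300, 6)

# Scree plot: eigenvalues of the correlation matrix from largest to smallest.
ev <- eigen(cor(X), symmetric = TRUE, only.values = TRUE)$values
plot(ev, type = "b", xlab = "Factor number", ylab = "Eigenvalue", main = "Scree plot")
</syntaxhighlight>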
 
===Revelle and Rocklin (1979) very simple structure===
There are different methods that can be used to assess model fit:<ref name =Fabrigar/>
 
*'''Likelihood ratio statistic:'''<ref>Lawley, D. N. (1940). The estimation of factor loadings by the method of maximum likelihood. Proceedings of the Royal Society of Edinburgh, 60A, 64–82.</ref> Used to test the null hypothesis that a model has perfect fit. It should be applied to models with an increasing number of factors until the result is nonsignificant, indicating that the model is no longer rejected as a good fit to the population. This statistic should be used with a large sample size and normally distributed data. There are some drawbacks to the likelihood ratio test. First, when there is a large sample size, even small discrepancies between the model and the data result in model rejection.<ref name =Humphreys/><ref>{{cite journal | last1 = Hakstian | first1 = A. R. | last2 = Rogers | first2 = W. T. | last3 = Cattell | first3 = R. B. | year = 1982 | title = The behavior of number-of-factors rules with simulated data | journal = Multivariate Behavioral Research | volume = 17 | issue = 2| pages = 193–219 | doi = 10.1207/s15327906mbr1702_3 }}</ref><ref>{{cite journal|last=Harris|first=M. L.|author2=Harris, C. W.|title=A Factor Analytic Interpretation Strategy|journal=Educational and Psychological Measurement|date=1 October 1971|volume=31|issue=3|pages=589–606|doi=10.1177/001316447103100301}}</ref> When there is a small sample size, even large discrepancies between the model and data may not be significant, which leads to underfactoring.<ref name =Humphreys/> Another disadvantage of the likelihood ratio test is that the null hypothesis of perfect fit is an unrealistic standard.<ref name=Maccallum>{{cite journal | last1 = Maccallum | first1 = R. C. | year = 1990 | title = The need for alternative measures of fit in covariance structure modeling | journal = Multivariate Behavioral Research | volume = 25 | issue = 2| pages = 157–162 | doi=10.1207/s15327906mbr2502_2| pmid = 26794477 }}</ref><ref name=Browne>{{cite journal | last1 = Browne | first1 = M. W. | last2 = Cudeck | first2 = R. | year = 1992 | title = Alternative ways of assessing model fit | journal = Sociological Methods and Research | volume = 21 | pages = 230–258 | doi = 10.1177/0049124192021002005 }}</ref>
*'''Root mean square error of approximation (RMSEA) fit index:''' RMSEA is an estimate of the discrepancy between the model and the data per degree of freedom for the model. Values less than 0.05 constitute good fit, values between 0.05 and 0.08 constitute acceptable fit, values between 0.08 and 0.10 constitute marginal fit, and values greater than 0.10 indicate poor fit.<ref name =Browne/><ref>Steiger, J. H. (1989). EzPATH: A supplementary module for SYSTAT and SYGRAPH. Evanston, IL: SYSTAT.</ref> An advantage of the RMSEA fit index is that it provides confidence intervals which allow researchers to compare a series of models with varying numbers of factors (a worked computation is sketched after this list).
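
As an illustration only (cutoff values and the exact estimator should be taken from the cited sources), the point estimate of RMSEA can be computed from the likelihood ratio chi-square statistic reported by base R's <code>factanal()</code>, using the commonly quoted formula RMSEA = √(max(χ² − df, 0) / (df(N − 1))):

<syntaxhighlight lang="r">
# Demonstration data: 6 measured variables driven by 2 common factors.
set.seed(1)
f <- matrix(rnorm(300 * 2), 300, 2)
L <- matrix(c(.8, .7, .6, 0, 0, 0,
              0, 0, 0, .8, .7, .6), nrow = 6, ncol = 2)
X <- f %*% t(L) + matrix(rnorm(300 * 6, sd = 0.5), 300, 6)

# RMSEA point estimate from the maximum likelihood chi-square of factanal().
fit   <- factanal(X, factors = 2)
chisq <- as.numeric(fit$STATISTIC)   # likelihood ratio chi-square
df    <- fit$dof                     # degrees of freedom of the factor model
N     <- nrow(X)                     # sample size
rmsea <- sqrt(max(chisq - df, 0) / (df * (N - 1)))
rmsea
</syntaxhighlight>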
 
 
===Velicer's Minimum Average Partial test (MAP)===
Velicer's (1976) MAP test<ref name=Velicer/> “involves a complete principal components analysis followed by the examination of a series of matrices of partial correlations” (p.&nbsp;397). The squared correlation for Step “0” (see Figure 4) is the average squared off-diagonal correlation for the unpartialed correlation matrix. On Step 1, the first principal component and its associated items are partialed out. Thereafter, the average squared off-diagonal correlation for the subsequent correlation matrix is computed for Step 1. On Step 2, the first two principal components are partialed out and the resultant average squared off-diagonal correlation is again computed. The computations are carried out for k minus one steps (k representing the total number of variables in the matrix). Finally, the average squared correlations for all steps are lined up and the step number that resulted in the lowest average squared partial correlation determines the number of components or factors to retain (Velicer, 1976). By this method, components are maintained as long as the variance in the correlation matrix represents systematic variance, as opposed to residual or error variance. Although methodologically akin to principal components analysis, the MAP technique has been shown to perform quite well in determining the number of factors to retain in multiple simulation studies.<ref name =Ruscio/><ref name=Garrido>Garrido, L. E., Abad, F. J., & Ponsoda, V. (2012). A new look at Horn's parallel analysis with ordinal variables. Psychological Methods. Advance online publication. doi:10.1037/a0030005</ref> However, in a very small minority of cases MAP may grossly overestimate the number of factors in a dataset for unknown reasons.<ref>{{cite journal | last1 = Warne | first1 = R. T. | last2 = Larsen | first2 = R. | year = 2014 | title = Evaluating a proposed modification of the Guttman rule for determining the number of factors in an exploratory factor analysis | journal = Psychological Test and Assessment Modeling | volume = 56 | pages = 104–123 }}</ref> This procedure is made available through SPSS's user interface. See Courtney (2013)<ref name="pareonline.net"/> for guidance; it is one of his five recommended modern procedures.
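
The following base-R sketch implements the procedure as described above (it is written from that verbal description, not taken from Velicer's or Courtney's published code, so details may differ from specific implementations):

<syntaxhighlight lang="r">
# Minimal sketch of Velicer's (1976) minimum average partial (MAP) test.
map_test <- function(X) {
  R <- cor(X)
  p <- ncol(R)
  e <- eigen(R, symmetric = TRUE)
  loadings <- e$vectors %*% diag(sqrt(e$values))     # principal component loadings
  avg_sq <- numeric(p)
  avg_sq[1] <- (sum(R^2) - p) / (p * (p - 1))        # Step 0: unpartialed correlation matrix
  for (m in 1:(p - 1)) {
    A <- loadings[, 1:m, drop = FALSE]
    C <- R - A %*% t(A)                              # remove the first m components
    D <- diag(1 / sqrt(diag(C)))
    P <- D %*% C %*% D                               # matrix of partial correlations
    avg_sq[m + 1] <- (sum(P^2) - p) / (p * (p - 1))  # average squared off-diagonal value
  }
  which.min(avg_sq) - 1                              # step with the lowest average squared partial
}

# Demonstration on data simulated from a two-factor model.
set.seed(1)
f <- matrix(rnorm(300 * 2), 300, 2)
L <- matrix(c(.8, .7, .6, 0, 0, 0,
              0, 0, 0, .8, .7, .6), nrow = 6, ncol = 2)
X <- f %*% t(L) + matrix(rnorm(300 * 6, sd = 0.5), 300, 6)
map_test(X)   # expected to suggest 2 components for this data-generating model
</syntaxhighlight>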
 
===Parallel analysis===
{{Main|Parallel analysis}}
To carry out the PA test, users compute the eigenvalues for the correlation matrix, plot them from largest to smallest, and then plot a set of eigenvalues obtained from random data of the same size. The number of eigenvalues before the intersection point indicates how many factors to include in the model.<ref name=Humphreys>{{cite journal | last1 = Humphreys | first1 = L. G. | last2 = Montanelli | first2 = R. G. Jr | year = 1975 | title = An investigation of the parallel analysis criterion for determining the number of common factors | journal = Multivariate Behavioral Research | volume = 10 | issue = 2| pages = 193–205 | doi = 10.1207/s15327906mbr1002_5 }}</ref><ref>{{cite journal|last=Horn|first=John L.|title=A rationale and test for the number of factors in factor analysis|journal=Psychometrika|date=1 June 1965|volume=30|issue=2|pages=179–185|doi=10.1007/BF02289447|pmid=14306381}}</ref><ref>{{cite journal|last=Humphreys|first=L. G.|author2=Ilgen, D. R.|title=Note On a Criterion for the Number of Common Factors|journal=Educational and Psychological Measurement|date=1 October 1969|volume=29|issue=3|pages=571–578|doi=10.1177/001316446902900303}}</ref> This procedure can be somewhat arbitrary (i.e., a factor just meeting the cutoff will be included and one just below will not).<ref name =Fabrigar/> Moreover, the method is very sensitive to sample size, with PA suggesting more factors in datasets with larger sample sizes.<ref>{{cite journal | last1 = Warne | first1 = R. G. | last2 = Larsen | first2 = R. | year = 2014 | title = Evaluating a proposed modification of the Guttman rule for determining the number of factors in an exploratory factor analysis | journal = Psychological Test and Assessment Modeling | volume = 56 | pages = 104–123 }}</ref> Despite its shortcomings, this procedure performs very well in simulation studies and is one of Courtney's recommended procedures.<ref name="pareonline.net"/> PA has been [[Parallel_analysis#Implementation|implemented]] in a number of commonly used statistics programs such as R and SPSS.
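
The comparison itself can be sketched in a few lines of base R (this sketch uses the mean of the random eigenvalues as the reference; published implementations often use a percentile, such as the 95th, instead):

<syntaxhighlight lang="r">
# Minimal sketch of Horn's parallel analysis using mean eigenvalues from random data.
parallel_analysis <- function(X, n_sims = 100) {
  n <- nrow(X)
  p <- ncol(X)
  obs <- eigen(cor(X), symmetric = TRUE, only.values = TRUE)$values
  rand <- replicate(n_sims,
    eigen(cor(matrix(rnorm(n * p), n, p)), symmetric = TRUE, only.values = TRUE)$values)
  ref <- rowMeans(rand)                        # mean random eigenvalue at each position
  keep <- obs > ref
  if (all(keep)) p else which(!keep)[1] - 1    # leading eigenvalues above the random reference
}

# Demonstration on data simulated from a two-factor model.
set.seed(1)
f <- matrix(rnorm(300 * 2), 300, 2)
L <- matrix(c(.8, .7, .6, 0, 0, 0,
              0, 0, 0, .8, .7, .6), nrow = 6, ncol = 2)
X <- f %*% t(L) + matrix(rnorm(300 * 6, sd = 0.5), 300, 6)
parallel_analysis(X)   # expected to suggest 2 factors for this data-generating model
</syntaxhighlight>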
 
===Ruscio and Roche's comparison data===