{{context|date=April 2012}}
In [[multivariate statistics]], '''exploratory factor analysis''' (EFA) is used to uncover the underlying structure of a relatively large set of variables. It is commonly used by researchers when developing a scale{{clarify|reason=undefined technical term|date=April 2012}} and serves to identify a set of [[Latent variable|latent constructs]] underlying a battery of measured variables.<ref>Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). "Evaluating the use of exploratory factor analysis in psychological research". ''Psychological Methods'', 4(3), 272-299.</ref> It should be used when the researcher has no a priori hypothesis about factors or patterns of measured variables.<ref>Finch, J. F., & West, S. G. (1997). "The investigation of personality structure: Statistical models". ''Journal of Research in Personality'', 31 (4), 439-485.</ref> ''Measured variables'' are any one of several attributes of people that may be observed and measured. An example of a measured variable would be one item on a scale. Researchers must{{cn|date=April 2012}} carefully consider the number of measured variables to include in the analysis. EFA procedures are more accurate when each factor is represented by multiple measured variables in the analysis. There should be at least 3 to 5 measured variables per factor.<ref>Maccallum, R. C. (1990). "The need for alternative measures of fit in covariance structure modeling". ''Multivariate Behavioral Research'', 25(2), 157-162.</ref>
The researcher's assumption when conducting EFA is that any indicator/measured variable may be associated with any factor. When developing a scale, researchers should use EFA first before moving on to [[confirmatory factor analysis]] (CFA). EFA requires the researcher to make a number of important decisions about how to conduct the analysis because there is no one set method.
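As a concrete illustration, the following Python sketch runs a basic EFA using the third-party <code>factor_analyzer</code> package; the file name, the choice of five factors, and the extraction and rotation settings are illustrative assumptions, not prescriptions.

<syntaxhighlight lang="python">
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed third-party package

# Hypothetical data file: one column per measured item, one row per respondent.
df = pd.read_csv("survey_items.csv")

# Five-factor EFA with principal axis extraction and an oblique rotation;
# both settings are only one reasonable starting point.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="oblimin")
fa.fit(df)
print(fa.loadings_)  # item-by-factor loading matrix
</syntaxhighlight>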
==Fitting procedures==
Fitting procedures are used to estimate the factor loadings and unique variances of the model. There are several factor analysis fitting methods to choose from; however, there is little information on their relative strengths and weaknesses, and many do not even have an exact name that is used consistently. Principal axis factoring (PAF) and [[maximum likelihood]] (ML) are two extraction methods that are generally recommended.{{cn|date=April 2012}} In general, ML gives the best results when data are normally distributed, while PAF is preferable when the assumption of normality has been violated.
===Maximum likelihood (ML)===
The maximum likelihood method has many advantages: it allows researchers to compute a wide range of indexes of the [[goodness of fit]] of the model, to test the [[statistical significance]] of factor loadings, to calculate correlations among factors, and to compute [[confidence interval]]s for these parameters.{{cn|date=April 2012}} For these reasons, ML is the best choice when data are normally distributed.{{cn|date=April 2012}} ML should not be used if the data are not normally distributed.
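A minimal sketch of ML extraction, assuming Python with scikit-learn, whose <code>FactorAnalysis</code> estimator fits the factor model by maximum likelihood (the random data below are placeholders; scikit-learn does not itself report the fit indexes and significance tests described above, which require dedicated factor-analysis or SEM software):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 6))      # placeholder for an (n_samples, n_items) matrix

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(X)
loadings = fa.components_.T            # (n_items, n_factors) loading matrix
unique_variances = fa.noise_variance_  # unique (error) variance of each item
avg_loglik = fa.score(X)               # average log-likelihood under the ML fit
</syntaxhighlight>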
===Principal axis factoring (PAF)===
This method is called “principal” axis factoring because the first factor accounts for as much common variance as possible, the second factor for the next most variance, and so on. PAF is a descriptive procedure, so it is best used when the focus is on the sample at hand and the researcher does not plan to generalize the results beyond that sample.{{cn|date=April 2012}} An advantage of PAF is that it can be used when the assumption of normality has been violated. Another advantage is that it is less likely than ML to produce improper solutions. A downside of PAF is that it provides a limited range of goodness-of-fit indexes compared to ML and does not allow for the computation of confidence intervals and significance tests.
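The iterated version of PAF can be written in a few lines of NumPy; the following is a sketch of the textbook algorithm (squared multiple correlations as starting communalities, eigendecomposition of the reduced correlation matrix, iteration until the communalities stabilize), not a reference implementation:

<syntaxhighlight lang="python">
import numpy as np

def principal_axis_factoring(R, n_factors, max_iter=100, tol=1e-6):
    """Iterated PAF on a (p x p) correlation matrix R."""
    # Starting communalities: squared multiple correlations,
    # h2_i = 1 - 1 / (i-th diagonal element of R^-1).
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(max_iter):
        reduced = R.copy()
        np.fill_diagonal(reduced, h2)           # common variance on the diagonal
        eigvals, eigvecs = np.linalg.eigh(reduced)
        top = np.argsort(eigvals)[::-1][:n_factors]
        lam = np.clip(eigvals[top], 0.0, None)  # guard against negative eigenvalues
        loadings = eigvecs[:, top] * np.sqrt(lam)
        h2_new = (loadings ** 2).sum(axis=1)    # updated communalities
        if np.max(np.abs(h2_new - h2)) < tol:
            return loadings, h2_new
        h2 = h2_new
    return loadings, h2
</syntaxhighlight>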
==Selecting the appropriate number of factors==
When selecting how many factors to include in a model, researchers must try to balance [[parsimony]] (a model with relatively few factors) and plausibility (that there are enough factors to adequately account for correlations among measured variables).{{cn|date=April 2012}} It is better to include too many factors (overfactoring) than too few factors (underfactoring).
''Overfactoring'' occurs when too many factors are included in a model. It is not as serious an error as underfactoring because the major factors will usually be represented accurately and no measured variables will load substantially on the extra factors. Still, it should be avoided because overfactoring may lead researchers to put forward constructs with little theoretical value.
''Underfactoring'' occurs when too few factors are included in a model. It is considered a more serious error than overfactoring because the measured variables that belong to the omitted factors are forced to load on the factors that are retained, which distorts the solution.

Several procedures are commonly used to decide how many factors to retain.
===Scree plot===
Compute the eigenvalues for the correlation matrix and plot the values from largest to smallest. Examine the graph to determine the last substantial drop in the magnitude of eigenvalues. The number of plotted points before the last drop is the number of factors to include in the model. This method has been criticized because of its subjective nature (i.e., there is no clear objective definition of what constitutes a substantial drop).{{cn|date=April 2012}}
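For concreteness, a scree plot can be produced directly from the data with NumPy and Matplotlib; the function below is a minimal sketch:

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

def scree_plot(X):
    """Plot the eigenvalues of the correlation matrix, largest first."""
    eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    plt.plot(np.arange(1, len(eigvals) + 1), eigvals, "o-")
    plt.xlabel("Factor number")
    plt.ylabel("Eigenvalue")
    plt.title("Scree plot")
    plt.show()
</syntaxhighlight>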
===Parallel analysis===
Compute the eigenvalues for the correlation matrix of the observed data and compare them with the eigenvalues obtained from random data with the same number of observations and variables. The number of observed eigenvalues that exceed their random-data counterparts is the number of factors to include in the model. This procedure is less subjective than examination of the scree plot.
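A minimal NumPy sketch of parallel analysis, assuming normally distributed random data and a 95th-percentile comparison (both common but not universal choices):

<syntaxhighlight lang="python">
import numpy as np

def parallel_analysis(X, n_iter=100, quantile=95, seed=0):
    """Count observed eigenvalues that exceed those of random data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    random_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        Z = rng.standard_normal((n, p))  # random data with the same n and p
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    threshold = np.percentile(random_eigs, quantile, axis=0)
    return int(np.sum(observed > threshold))  # suggested number of factors
</syntaxhighlight>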
===Kaiser criterion===
Compute the eigenvalues for the correlation matrix and determine how many of these eigenvalues are greater than 1. This number is the number of factors to include in the model. A disadvantage of this procedure is that it is quite arbitrary (e.g. an eigenvalue of 1.01 is included whereas an eigenvalue of .99 is not). This procedure often leads to overfactoring and sometimes underfactoring.{{cn|date=April 2012}} Therefore, this procedure should not be used.{{cn|date=April 2012}}
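The rule itself is trivial to compute; a NumPy sketch:

<syntaxhighlight lang="python">
import numpy as np

def kaiser_criterion(R):
    """Number of eigenvalues of the correlation matrix R greater than 1."""
    return int(np.sum(np.linalg.eigvalsh(R) > 1.0))
</syntaxhighlight>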
===Model comparison===
Choose the best model from a series of models that differ in complexity. Researchers fit a sequence of models, beginning with a model with zero factors and gradually increasing the number of factors, and compare them using goodness-of-fit measures. The goal is ultimately to choose a model that explains the data significantly better than simpler models (with fewer factors) and explains the data as well as more complex models (with more factors).
There are different methods to assess model fit:{{cn|date=April 2012}}
*'''Likelihood ratio statistic:''' Used to test the null hypothesis that a model has perfect fit. It should be applied to models with an increasing number of factors until the result is nonsignificant, indicating that the model cannot be rejected as fitting the population well. This statistic should be used with a large sample size and normally distributed data. There are some drawbacks to the likelihood ratio test. First, when there is a large sample size, even small discrepancies between the model and the data result in model rejection; when there is a small sample size, even large discrepancies between the model and the data may not be significant, which leads to underfactoring. Another disadvantage of the likelihood ratio test is that the null hypothesis of perfect fit is an unrealistic standard.
*'''Root mean square error of approximation (RMSEA) fit index:''' RMSEA is an estimate of the discrepancy between the model and the data per degree of freedom for the model.{{cn|date=April 2012}} Values less than 0.05 constitute good fit, values between 0.05 and 0.08 acceptable fit, values between 0.08 and 0.10 marginal fit, and values greater than 0.10 poor fit. An advantage of the RMSEA fit index is that it provides confidence intervals, which allow researchers to compare a series of models with varying numbers of factors; a computational sketch follows this list.
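As referenced above, RMSEA can be computed from the ML chi-square statistic, its degrees of freedom, and the sample size; the Python sketch below uses one common formulation (with ''N''&nbsp;−&nbsp;1 in the denominator; variants exist) and the cutoffs given in the list:

<syntaxhighlight lang="python">
import math

def rmsea(chi2, df, n):
    """RMSEA from the ML chi-square, its degrees of freedom, and sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def interpret_rmsea(value):
    """Apply the cutoffs given in the text above."""
    if value < 0.05:
        return "good fit"
    if value <= 0.08:
        return "acceptable fit"
    if value <= 0.10:
        return "marginal fit"
    return "poor fit"
</syntaxhighlight>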
==Factor rotation==
After extraction, the factor solution is usually rotated to make it easier to interpret. Rotation does not change the overall fit of the model; it redistributes the variance across factors so that, ideally, each measured variable loads highly on one factor and near zero on the others (simple structure). Rotations fall into two broad classes: orthogonal and oblique.
===Orthogonal rotation===
Orthogonal rotations constrain factors to be uncorrelated. Varimax is considered the best orthogonal rotation and consequently is used the most often in psychology research.{{cn|date=April 2012}} An advantage of orthogonal rotation is its simplicity and conceptual clarity, although there are several disadvantages. In the social sciences, there is often a theoretical basis for expecting constructs to be correlated, so orthogonal rotations may not be very realistic because they ignore this possibility. Also, because orthogonal rotations require factors to be oriented at 90° angles from one another (because the factors are uncorrelated), they are more likely to produce solutions with poor simple structure.
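Varimax maximizes the variance of the squared loadings within each factor; the NumPy function below sketches the standard SVD-based algorithm:

<syntaxhighlight lang="python">
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a (p x k) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion with respect to the rotation.
        grad = loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt                       # nearest orthogonal rotation
        crit_new = s.sum()
        if crit_new < crit * (1 + tol):  # converged
            break
        crit = crit_new
    return loadings @ R
</syntaxhighlight>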
===Oblique rotation===
Oblique rotations permit correlations among factors; however, they do not require that the factors be correlated. If the factors are not correlated, an oblique rotation will produce correlation estimates close to zero and a solution similar to that of an orthogonal rotation. There are several oblique rotation procedures that are commonly used, such as the direct quartimin rotation, the promax rotation, and the Harris–Kaiser orthoblique rotation.{{cn|date=April 2012}} An advantage of oblique rotation is that it produces solutions with better simple structure because it allows factors to be oriented at different angles. Another advantage is that it produces estimates of the correlations among factors.
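A short sketch of an oblique rotation, assuming the third-party Python package <code>factor_analyzer</code> (the loading matrix and the attribute names are illustrative assumptions about that package):

<syntaxhighlight lang="python">
import numpy as np
from factor_analyzer import Rotator  # assumed third-party package

# Hypothetical unrotated loadings for six items on two factors.
loadings = np.array([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2],
                     [0.3, 0.7], [0.2, 0.8], [0.4, 0.6]])

rotator = Rotator(method="promax")     # oblique; "oblimin" is another option
pattern = rotator.fit_transform(loadings)
print(pattern)                         # pattern matrix after oblique rotation
print(rotator.phi_)                    # factor correlation matrix (assumed attribute)
</syntaxhighlight>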
==Factor interpretation==
Factor loadings are numerical values that indicate the strength and direction of a factor's influence on a measured variable. To label the factors in the model, researchers should examine the factor pattern to see which items load highly on which factors and then determine what those items have in common.{{cn|date=April 2012}} Whatever the items have in common will indicate the meaning of the factor.
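In practice, interpretation often starts from a loading table with small loadings suppressed; in the sketch below, the item names, loading values, and the 0.40 cutoff are all illustrative (the cutoff is a common rule of thumb, not a fixed standard):

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical rotated loadings for six items on two factors.
loadings = pd.DataFrame(
    [[0.78, 0.05], [0.71, 0.10], [0.64, -0.02],
     [0.08, 0.82], [0.12, 0.75], [-0.04, 0.69]],
    index=[f"item{i}" for i in range(1, 7)],
    columns=["Factor1", "Factor2"],
)
# Suppress loadings below |0.40| to expose the simple structure.
print(loadings.where(loadings.abs() >= 0.40, ""))
</syntaxhighlight>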
==Notes==