Line 63:
a) the coefficients' locations in the model (e.g. which variables are connected/disconnected),
b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear),
c) the nature of the error or residual variables (often assumed to be independent of, or causally disconnected from, the modeled variables),
and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).
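The four specification choices above can be made concrete with a small hypothetical model (the coefficient and variance values here are illustrative, not taken from any cited study): a single linear effect of x on y with an independent error term, whose model-implied covariance matrix follows directly from the chosen coefficient placements.

```python
import numpy as np

# Hypothetical two-variable model y = b*x + e, illustrating the choices:
#  a) ___location: only the x -> y coefficient b is free; no y -> x path exists
#  b) connection: the effect of x on y is assumed to be linear
#  c) error: e is assumed independent of (causally disconnected from) x
#  d) scale: x and y are treated as interval-level measurements
b = 0.6                   # hypothetical free effect coefficient
var_x, var_e = 1.0, 0.64  # exogenous variance and error variance

# Model-implied covariance matrix for (x, y):
#   cov(x, y) = b * var(x),  var(y) = b^2 * var(x) + var(e)
Sigma = np.array([[var_x,     b * var_x],
                  [b * var_x, b**2 * var_x + var_e]])
```

With these values the implied variance of y is 0.36 + 0.64 = 1.0; changing which coefficients are free (choice a) changes which entries of the implied matrix can move to match the data.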
Line 90:
Replication is unlikely to detect misspecified models which inappropriately fit the data. If the replicate data are within random variations of the original data, the same incorrect coefficient placements that inappropriately fit the original data will likely also inappropriately fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis is applied to a random second half of the data following exploratory factor analysis (EFA) of the first half of the data.
A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently fixed model coefficient were freed for estimation.
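As a hedged numeric illustration (all values below are hypothetical): a modification index approximates the drop in the model {{math|χ<sup>2</sup>}} that would result from freeing one fixed coefficient, and the exact drop is the difference between the nested models' {{math|χ<sup>2</sup>}} statistics, testable on one degree of freedom.

```python
from scipy import stats

# Hypothetical fit statistics for two nested models:
chi2_restricted = 37.4  # fit with the coefficient fixed at zero
df_restricted = 12
chi2_freed = 24.1       # fit after freeing that one coefficient
df_freed = 11

# The exact chi-square drop that a modification index approximates:
exact_drop = chi2_restricted - chi2_freed

# One-degree-of-freedom chi-square difference test for the freed coefficient:
p_value = stats.chi2.sf(exact_drop, df_restricted - df_freed)
```

A large drop (small p) says freeing the coefficient improves statistical fit; as the text notes, it says nothing about whether the freed coefficient belongs in the causal structure.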
"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Andersen, and Glaser, who addressed the mathematics behind why the {{math|χ<sup>2</sup>}} test can have (though it does not always have) considerable power to detect model misspecification.<ref name="BMKAG02">{{cite journal |last1=Browne |first1=Michael W. |last2=MacCallum |first2=Robert C. |last3=Kim |first3=Cheong-Tag |last4=Andersen |first4=Barbara L. |last5=Glaser |first5=Ronald |title=When fit indices and residuals are incompatible. |journal=Psychological Methods |date=2002 |volume=7 |issue=4 |pages=403–421 |doi=10.1037/1082-989x.7.4.403 |pmid=12530701 |pmc=2435310 }}</ref> The probability accompanying a {{math|χ<sup>2</sup>}} test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small {{math|χ<sup>2</sup>}} probability reports that it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, MacCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to {{math|χ<sup>2</sup>}}. The fallaciousness of their claim that close fit should be treated as good enough was demonstrated by Hayduk, Pazderka-Robinson, Cummings, Levers, and Beres<ref name="HP-RCLB05">{{cite journal |doi=10.1186/1471-2288-5-1|doi-access=free |title=Structural equation model testing and the quality of natural killer cell activity measurements |date=2005 |last1=Hayduk |first1=Leslie A. |last2=Pazderka-Robinson |first2=Hannah |last3=Cummings |first3=Greta G. |last4=Levers |first4=Merry-Jo D. |last5=Beres |first5=Melanie A. 
|journal=BMC Medical Research Methodology |volume=5 |page=1 |pmid=15636638 |pmc=546216 }} Note the correction of .922 to .992, and the correction of .944 to .994 in the Hayduk, et al. Table 1.</ref> who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. had overlooked. The fault was not in the mathematics of the indices or in the over-sensitivity of {{math|χ<sup>2</sup>}} testing. The fault lay in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking that the amount of ill fit cannot be trusted to correspond to the nature, ___location, or seriousness of problems in a model's specification.<ref name="Hayduk14a">{{cite journal | doi=10.1177/0013164414527449 | title=Seeing Perfectly Fitting Factor Models That Are Causally Misspecified | date=2014 | last1=Hayduk | first1=Leslie | journal=Educational and Psychological Measurement | volume=74 | issue=6 | pages=905–926 }}</ref>
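The {{math|χ<sup>2</sup>}} probability discussed above can be sketched numerically. Assuming the standard maximum-likelihood discrepancy function for covariance structure models (the sample and model-implied matrices and the degrees of freedom below are hypothetical, not taken from Browne et al.'s data):

```python
import numpy as np
from scipy import stats

def chi_square_fit(S, Sigma, n):
    """Likelihood-ratio chi-square statistic for a covariance structure model.

    S     : sample covariance matrix (p x p)
    Sigma : model-implied covariance matrix at the optimal (ML) estimates
    n     : sample size
    """
    p = S.shape[0]
    # ML discrepancy function: F = ln|Sigma| - ln|S| + tr(S Sigma^-1) - p
    F = (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
         + np.trace(S @ np.linalg.inv(Sigma)) - p)
    return (n - 1) * F  # approximately chi-square distributed under H0

# Hypothetical 3-variable example (matrices are illustrative only):
S = np.array([[1.00, 0.50, 0.40],
              [0.50, 1.00, 0.35],
              [0.40, 0.35, 1.00]])
Sigma = np.array([[1.00, 0.45, 0.45],
                  [0.45, 1.00, 0.45],
                  [0.45, 0.45, 1.00]])
T = chi_square_fit(S, Sigma, n=500)
df = 2  # hypothetical degrees of freedom for this illustration
p_value = stats.chi2.sf(T, df)  # probability of a discrepancy this large under H0
```

When Sigma reproduces S exactly the statistic is zero; the p-value shrinks as the remaining discrepancy grows relative to what sampling variation could produce, which is the probability the text describes.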