Careful interpretation of both failing and fitting models can advance research. To be dependable, a model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients.<ref name="Millsap07">Millsap, R.E. (2007) “Structural equation modeling made difficult.” Personality and Individual Differences. 42: 875-881.</ref> Dependably fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately fitting models are possible.<ref name="HP-RCLB05"/><ref name="EHR82">Entwisle, D.R.; Hayduk, L.A.; Reilly, T.W. (1982) Early Schooling: Cognitive and Affective Outcomes. Baltimore: Johns Hopkins University Press.</ref><ref name="Hayduk94">Hayduk, L.A. (1994). “Personal space: Understanding the simplex model.” Journal of Nonverbal Behavior. 18 (3): 245-260.</ref><ref name="HSR97">Hayduk, L.A.; Stratkotter, R.; Rovers, M.W. (1997) “Sexual Orientation and the Willingness of Catholic Seminary Students to Conform to Church Teachings.” Journal for the Scientific Study of Religion. 36 (3): 455-467.</ref>
 
The multiple ways of conceptualizing PLS models<ref name="RSR17">{{cite journal | doi=10.15358/0344-1369-2017-3-4 | title=On Comparing Results from CB-SEM and PLS-SEM: Five Perspectives and Five Recommendations | date=2017 | last1=Rigdon | first1=Edward E. | last2=Sarstedt | first2=Marko | last3=Ringle | first3=Christian M. | journal=Marketing ZFP | volume=39 | issue=3 | pages=4–16 | doi-access=free }}</ref> complicate the interpretation of those models. Many of the above comments apply if a PLS modeler adopts a realist perspective, striving to ensure that the modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on ''R''<sup>2</sup> or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle<ref name="RSR17"/> point to differences in PLS modelers’ objectives, and corresponding differences in which model features warrant interpretation.
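 
As an illustration of the prediction-focused perspective (not drawn from the cited sources), the following sketch contrasts in-sample ''R''<sup>2</sup> with cross-validated, out-of-sample ''R''<sup>2</sup>. It uses scikit-learn’s <code>PLSRegression</code>, which is PLS regression rather than PLS path modeling, and simulated data, so it stands in for, rather than reproduces, a PLS-SEM analysis:

<syntaxhighlight lang="python">
# Contrast in-sample and out-of-sample R^2 for a PLS-style predictive model.
# PLSRegression is PLS regression, not PLS path modeling, and the data are
# simulated, so this only illustrates the prediction-focused criterion.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                    # hypothetical indicator block
beta = np.array([0.5, 0.4, 0.3, 0.0, 0.0, 0.0])  # only three useful indicators
y = X @ beta + rng.normal(scale=1.0, size=200)

pls = PLSRegression(n_components=2)

# In-sample R^2 rewards fit to the estimation sample; cross-validated R^2
# estimates predictive power on data the model has not seen.
r2_in = pls.fit(X, y).score(X, y)
r2_cv = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
print(f"in-sample R^2 = {r2_in:.3f}, 5-fold CV R^2 = {r2_cv:.3f}")
</syntaxhighlight>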
 
Caution should be taken when making claims of causality, even when experiments or time-ordered investigations have been undertaken. The term ''causal model'' must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions; it may or may not do so. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even a randomized experiment cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.<ref name="Pearl09" />
The controversy over model testing has declined as clear reporting of significant model-data inconsistency has become mandatory. Scientists do not get to ignore, or fail to report, evidence merely because they dislike what the evidence reports.<ref name="Hayduk14b"/> The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing “endogeneity”, a style of model mis-specification that interferes with estimation because the error/residual variables are not independent. In general, the controversy over the causal nature of structural equation models, including factor models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models.<ref name="Mulaik09">Mulaik, S.A. (2009) Foundations of Factor Analysis (second edition). Chapman and Hall/CRC. Boca Raton, pages 130-131.</ref> The comments by Bollen and Pearl regarding myths about causality in the context of SEM<ref name="BP13" /> reinforced the centrality of causal thinking in the context of SEM.
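 
A minimal sketch of such reporting, assuming the Python <code>semopy</code> package with its lavaan-style model syntax, and with hypothetical variable names and data file:

<syntaxhighlight lang="python">
# Fit a hypothetical two-factor model and report the chi-square test of
# exact fit; a significant chi-square is evidence of model-data
# inconsistency that should be reported, not suppressed.
import pandas as pd
import semopy

desc = """
eta1 =~ x1 + x2 + x3
eta2 =~ y1 + y2 + y3
eta2 ~ eta1
"""

data = pd.read_csv("survey.csv")   # hypothetical data file
model = semopy.Model(desc)
model.fit(data)

stats = semopy.calc_stats(model)   # DataFrame of fit statistics
print(stats[["DoF", "chi2", "chi2 p-value"]])
</syntaxhighlight>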
 
A briefer controversy focused on competing models. Comparing competing models can be very helpful, but there are fundamental issues that cannot be resolved by creating two models and retaining the better-fitting one. The statistical sophistication of presentations like Levy and Hancock (2007),<ref name="LH07">Levy, R.; Hancock, G.R. (2007) “A framework of statistical tests for comparing mean and covariance structure models.” Multivariate Behavioral Research. 42(1): 33-66.</ref> for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as fitting better than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016)<ref name="Kline16"/> remain disturbingly weak in their presentation of model testing.<ref name="Hayduk18">{{cite journal | doi=10.25336/csp29397 | title=Review essay on Rex B. Kline's Principles and Practice of Structural Equation Modeling: Encouraging a fifth edition | date=2018 | last1=Hayduk | first1=Leslie | journal=Canadian Studies in Population | volume=45 | issue=3–4 | page=154 | doi-access=free }}</ref> Overall, the contributions that structural equation modeling can make depend on careful and detailed model assessment, even if a failing model happens to be the best available.
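 
The point can be sketched in code, again assuming <code>semopy</code> and hypothetical model specifications: a comparative index such as AIC only ranks the candidates, so each model’s own test of model-data consistency must still be examined:

<syntaxhighlight lang="python">
# Compare two hypothetical competing models. AIC only ranks the candidates;
# each model's own chi-square test must still be checked, because the
# "better" model may itself be inconsistent with the data.
import pandas as pd
import semopy

data = pd.read_csv("survey.csv")   # hypothetical data file

model_a = semopy.Model("eta =~ x1 + x2 + x3 + x4\neta ~ z")
model_b = semopy.Model("eta1 =~ x1 + x2\neta2 =~ x3 + x4\neta1 ~ z\neta2 ~ eta1")

for name, m in (("A", model_a), ("B", model_b)):
    m.fit(data)
    stats = semopy.calc_stats(m)
    print(name,
          "AIC =", stats["AIC"].iloc[0],
          "chi2 p =", stats["chi2 p-value"].iloc[0])
</syntaxhighlight>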
 
An additional controversy that touched the fringes of the previous controversies awaits ignition.{{cn|date=March 2024}} Factor models and theory-embedded factor structures with multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to the factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012)<ref name="HL12"/> discussed how to think about, defend, and adjust for measurement error when using only a single indicator for each modeled latent variable. Single indicators have long been used effectively in SE models,<ref name="EHR82"/> but controversy remains only as far away as a reviewer who has considered measurement from only the factor-analytic perspective.
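 
A minimal sketch of the single-indicator strategy, once more assuming <code>semopy</code>: the indicator’s loading is fixed to 1 and its error variance is fixed from an assumed reliability, so measurement error is modeled rather than ignored. The reliability value and all names are hypothetical, and the sketch illustrates the general fixed-error-variance idea rather than Hayduk and Littvay’s exact procedure:

<syntaxhighlight lang="python">
# Single-indicator strategy: fix the indicator's loading to 1 and its error
# variance to (1 - reliability) * Var(x1), so measurement error is modeled
# rather than ignored. The reliability value and all names are hypothetical.
import pandas as pd
import semopy

data = pd.read_csv("survey.csv")   # hypothetical data file
reliability = 0.80                 # assumed reliability of x1
err_var = (1 - reliability) * data["x1"].var()

desc = f"""
eta =~ 1*x1
x1 ~~ {err_var:.4f}*x1
eta ~ z
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())             # parameter estimates and standard errors
</syntaxhighlight>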