[[File:Example SEM of Human Intelligence.png|alt=An example structural equation model pre-estimation|thumb|336x336px|Figure 2. An example structural equation model before estimation. Similar to Figure 1 but without standardized values and fewer items. Because intelligence and academic performance are merely imagined or theory-postulated variables, their precise scale values are unknown, though the model specifies that each latent variable's values must fall somewhere along the observable scale possessed by one of the indicators. The 1.0 effect connecting a latent to an indicator specifies that each real unit increase or decrease in the latent variable's value results in a corresponding unit increase or decrease in the indicator's value. It is hoped a good indicator has been chosen for each latent, but the 1.0 values do not signal perfect measurement because this model also postulates that there are other unspecified entities causally impacting the observed indicator measurements, thereby introducing measurement error. This model postulates that separate measurement errors influence each of the two indicators of latent intelligence, and each indicator of latent academic performance. The unlabeled arrow pointing to academic performance acknowledges that things other than intelligence can also influence academic performance.]]
'''Structural equation modeling''' ('''SEM''') is a diverse set of methods used by scientists for both observational and experimental research. SEM is used mostly in the social and behavioral sciences, but it is also used in epidemiology,<ref name="BM08">{{cite book | doi=10.4135/9781412953948.n443 | chapter=Structural Equation Modeling | title=Encyclopedia of Epidemiology | date=2008 | isbn=978-1-4129-2816-8 }}</ref> business,<ref name="Shelley06">{{cite book | doi=10.4135/9781412939584.n544 | chapter=Structural Equation Modeling | title=Encyclopedia of Educational Leadership and Administration | date=2006 | isbn=978-0-7619-3087-7 }}</ref> and other fields. A common definition of SEM is "...a class of methodologies that seeks to represent hypotheses about the means, variances, and covariances of observed data in terms of a smaller number of 'structural' parameters defined by a hypothesized underlying conceptual or theoretical model".<ref>{{Cite web |title=Structural Equation Modeling - an overview {{!}} ScienceDirect Topics |url=https://www.sciencedirect.com/topics/neuroscience/structural-equation-modeling#:~:text=Structural%20equation%20modeling%20can%20be,underlying%20conceptual%20or%20theoretical%20model. |access-date=2024-11-15 |website=www.sciencedirect.com}}</ref>
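To illustrate this definition (the two-indicator setup and symbols below are chosen here for exposition and are not taken from the cited source), suppose a single latent variable <math>\xi</math> is measured by two observed indicators <math>x_1</math> and <math>x_2</math>, with loadings <math>\lambda_1</math> and <math>\lambda_2</math> and error terms uncorrelated with <math>\xi</math> and with each other:
<math display="block">x_1 = \lambda_1 \xi + \varepsilon_1, \qquad x_2 = \lambda_2 \xi + \varepsilon_2, \qquad \operatorname{cov}(x_1, x_2) = \lambda_1 \lambda_2 \operatorname{var}(\xi).</math>
The observed covariance is thereby expressed in terms of a smaller number of structural parameters (the loadings and the latent variance), which is the sense in which the quoted definition speaks of representing the data's covariances through a hypothesized model.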
SEM involves a model representing how various aspects of some [[phenomenon]] are thought to [[Causality|causally]] connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using ''[[equation]]s'' but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.<ref name="Pearl09">Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Second edition. New York: Cambridge University Press.</ref>
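In practice, a model like the one in Figure 2 is specified in SEM software, which estimates the postulated effects and tests the model's consistency with the data. The following is a minimal sketch assuming the Python package semopy; the indicator names, data file, and model syntax are illustrative assumptions, not part of the figure's actual example.
<syntaxhighlight lang="python">
import pandas as pd
import semopy

# Hypothetical data set: one row per student, columns are the observed indicators.
data = pd.read_csv("students.csv")  # illustrative file name

# lavaan-style model description (names are illustrative):
#   "=~" attaches observed indicators to a latent variable (measurement part),
#   "~"  states the postulated causal effect (structural part).
description = """
intelligence =~ verbal_score + spatial_score
performance =~ gpa + exam_score
performance ~ intelligence
"""

model = semopy.Model(description)
model.fit(data)                   # estimate loadings, the structural effect, and error variances
print(model.inspect())            # parameter estimates with standard errors
print(semopy.calc_stats(model))   # chi-square test of fit and indices such as RMSEA and CFI
</syntaxhighlight>
As in Figure 2, such software typically identifies each latent variable's scale by fixing one loading to 1.0, so the latent variable inherits the measurement scale of that indicator.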
Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models, and as continuing disagreements over model testing and over whether measurement should precede or accompany structural estimates.<ref name="HG00a">Hayduk, L.; Glaser, D.N. (2000) "Jiving the Four-Step, Waltzing Around Factor Analysis, and Other Serious Fun". Structural Equation Modeling. 7 (1): 1-35.</ref><ref name="HG00b">Hayduk, L.; Glaser, D.N. (2000) "Doing the Four-Step, Right-2-3, Wrong-2-3: A Brief Reply to Mulaik and Millsap; Bollen; Bentler; and Herting and Costner". Structural Equation Modeling. 7 (1): 111-123.</ref> Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.
Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain.<ref name="Westland15">Westland, J.C. (2015). Structural Equation Modeling: From Paths to Networks. New York, Springer.</ref><ref>{{Cite journal|last=Christ|first=Carl F.|date=1994|title=The Cowles Commission's Contributions to Econometrics at Chicago, 1939-1955|url=https://www.jstor.org/stable/2728422|journal=Journal of Economic Literature|volume=32|issue=1|pages=30–59|jstor=2728422|issn=0022-0515}}</ref> Disciplinary differences in approaches can be seen in SEMNET discussions of endogeneity, and in discussions on causality via directed acyclic graphs (DAGs).<ref name="Pearl09"/> Discussions comparing and contrasting various SEM approaches are available,<ref name="Imbens20">Imbens, G.W. (2020). "Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics". Journal of Economic Literature. 58 (4): 1129–1179.</ref><ref name="BP13">{{cite book | doi=10.1007/978-94-007-6094-3_15 | chapter=Eight Myths About Causality and Structural Equation Models | title=Handbook of Causal Analysis for Social Research | series=Handbooks of Sociology and Social Research | date=2013 | last1=Bollen | first1=Kenneth A. | last2=Pearl | first2=Judea | pages=301–328 | isbn=978-94-007-6093-6 }}</ref> highlighting disciplinary differences in data structures and the concerns motivating economic models.
[[Judea Pearl]]<ref name="Pearl09" /> extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.<ref name="BP13" />
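As an illustration (the two-variable system below is chosen for exposition; the notation follows Pearl's general framework), a nonparametric structural equation model replaces linear equations with arbitrary functions of each variable's causes and background (error) variables:
<math display="block">x = f_X(u_X), \qquad y = f_Y(x, u_Y),</math>
where the joint distribution of <math>u_X</math> and <math>u_Y</math> is left unrestricted. The counterfactual value that <math>Y</math> would take had <math>X</math> been set to <math>x</math> is defined as <math>Y_x(u) = f_Y(x, u_Y)</math>, and total, direct, and indirect effects are then defined from such counterfactuals without assuming linear effects or particular error distributions.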
A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007),<ref name="LH07">Levy, R.; Hancock, G.R. (2007) “A framework of statistical tests for comparing mean and covariance structure models.” Multivariate Behavioral Research. 42(1): 33-66.</ref> for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016)<ref name="Kline16"/> remain disturbingly weak in their presentation of model testing.<ref name="Hayduk18">{{cite journal | doi=10.25336/csp29397 | title=Review essay on Rex B. Kline's Principles and Practice of Structural Equation Modeling: Encouraging a fifth edition | date=2018 | last1=Hayduk | first1=Leslie | journal=Canadian Studies in Population | volume=45 | issue=3–4 | page=154 | doi-access=free }}</ref> Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.
An additional controversy that touched the fringes of the previous controversies awaits ignition.
Though declining, traces of these controversies are scattered throughout the SEM literature, and disagreement can easily be incited by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.