Structural equation modeling
[[File:Example SEM of Human Intelligence.png|alt=An example structural equation model pre-estimation|thumb|336x336px|Figure 2. An example structural equation model before estimation. Similar to Figure 1 but without standardized values and fewer items. Because intelligence and academic performance are merely imagined or theory-postulated variables, their precise scale values are unknown, though the model specifies that each latent variable's values must fall somewhere along the observable scale possessed by one of the indicators. The 1.0 effect connecting a latent to an indicator specifies that each real unit increase or decrease in the latent variable's value results in a corresponding unit increase or decrease in the indicator's value. It is hoped a good indicator has been chosen for each latent, but the 1.0 values do not signal perfect measurement because this model also postulates that there are other unspecified entities causally impacting the observed indicator measurements, thereby introducing measurement error. This model postulates that separate measurement errors influence each of the two indicators of latent intelligence, and each indicator of latent achievement. The unlabeled arrow pointing to academic performance acknowledges that things other than intelligence can also influence academic performance.]]
 
'''Structural equation modeling''' ('''SEM''') is a diverse set of methods used by scientists doing both observational and experimental research. SEM is used mostly in the social and behavioral sciences, but it is also used in epidemiology,<ref name="BM08">Boslaugh, S.; McNutt, L-A. (2008). "Structural Equation Modeling". Encyclopedia of Epidemiology. {{doi|10.4135/9781412953948.n443}}. ISBN 978-1-4129-2816-8.</ref> business,<ref name="Shelley06">Shelley, M. C. (2006). "Structural Equation Modeling". Encyclopedia of Educational Leadership and Administration. {{doi|10.4135/9781412939584.n544}}. ISBN 978-0-7619-3087-7.</ref> and other fields. A definition of SEM is difficult without reference to technical language, but a good starting place is the name itself.
 
SEM involves a model representing how various aspects of some [[phenomenon]] are thought to [[Causality|causally]] connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using ''[[equation]]s'' but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.<ref name="Pearl09">Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Second edition. New York: Cambridge University Press.</ref>
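The way a postulated causal structure implies a specific pattern among the observed variables can be illustrated with a small numeric sketch. The Python fragment below (all parameter values are hypothetical, chosen purely for illustration, not taken from any real analysis) builds the model-implied covariance matrix for a two-latent, four-indicator model resembling Figure 2, using the standard decomposition of the observed covariances into loadings, latent covariances, and measurement-error variances.

```python
import numpy as np

# Hypothetical parameter values (illustrative only; not estimates from any study).
var_intel = 1.0      # variance of the exogenous latent (intelligence)
beta      = 0.5      # postulated effect of intelligence on academic performance
var_zeta  = 0.75     # residual variance of academic performance
lam = np.array([[1.0, 0.0],   # loadings: rows = 4 indicators,
                [0.8, 0.0],   # cols = 2 latents (intelligence, performance);
                [0.0, 1.0],   # the fixed 1.0 values set each latent's scale
                [0.0, 0.9]])
theta = np.diag([0.3, 0.4, 0.35, 0.45])  # measurement-error variances

# Covariance matrix of the two latents implied by the structural equation
# performance = beta * intelligence + zeta
var_perf = beta**2 * var_intel + var_zeta
cov_lat = np.array([[var_intel,        beta * var_intel],
                    [beta * var_intel, var_perf]])

# Model-implied covariance matrix of the four observed indicators:
# Sigma = Lambda * Phi * Lambda' + Theta
sigma = lam @ cov_lat @ lam.T + theta
print(np.round(sigma, 3))
```

Estimation reverses this computation: software searches for parameter values whose implied matrix best reproduces the covariances actually observed in the data.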
== History ==
 
Structural equation modeling (SEM) began differentiating itself from correlation and regression when [[Sewall Wright]] provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables.<ref name="Wright21">Wright, Sewall. (1921) "Correlation and causation". Journal of Agricultural Research. 20: 557-585.</ref><ref name="Wright34">Wright, Sewall. (1934) "The method of path coefficients". The Annals of Mathematical Statistics. 5 (3): 161-215. {{doi|10.1214/aoms/1177732676}}</ref><ref name="Wolfle99">Wolfle, L.M. (1999) "Sewall Wright on the method of path coefficients: An annotated bibliography". Structural Equation Modeling. 6 (3): 280-291.</ref> The equations were estimated like ordinary regression equations, but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book<ref name="Duncan75">Duncan, Otis Dudley. (1975). Introduction to Structural Equation Models. New York: Academic Press. ISBN 0-12-224150-9.</ref> and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk<ref name="Hayduk87"/> provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).<ref name="Bollen89">Bollen, K. (1989). Structural Equations with Latent Variables. New York: Wiley. ISBN 0-471-01171-1.</ref>
 
Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early [[Cowles Foundation|Cowles Commission]] work on [[Simultaneous equations model|simultaneous equations]] estimation centered on Koopmans and Hood's (1953) algorithms from [[transport economics]] and optimal routing, with [[maximum likelihood estimation]] and closed-form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Service, LISREL<ref name="JGvT70">Jöreskog, Karl; Gruvaeus, Gunnar T.; van Thillo, Marielle. (1970) ACOVS: A General Computer Program for Analysis of Covariance Structures. Princeton, N.J.: Educational Testing Service.</ref><ref name=":0">{{Cite journal|last1=Jöreskog|first1=Karl Gustav|last2=van Thillo|first2=Mariella|date=1972|title=LISREL: A General Computer Program for Estimating a Linear Structural Equation System Involving Multiple Indicators of Unmeasured Variables|url=https://files.eric.ed.gov/fulltext/ED073122.pdf|journal=Research Bulletin: Office of Education|volume=ETS-RB-72-56|via=US Government}}</ref><ref name="JS76">Jöreskog, Karl; Sorbom, Dag. (1976) LISREL III: Estimation of Linear Structural Equation Systems by Maximum Likelihood Methods. Chicago: National Educational Resources, Inc.</ref> embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors, which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.
Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models; and as continuing disagreements over model testing, and whether measurement should precede or accompany structural estimates.<ref name="HG00a">Hayduk, L.; Glaser, D.N. (2000) "Jiving the Four-Step, Waltzing Around Factor Analysis, and Other Serious Fun". Structural Equation Modeling. 7 (1): 1-35.</ref><ref name="HG00b">Hayduk, L.; Glaser, D.N. (2000) "Doing the Four-Step, Right-2-3, Wrong-2-3: A Brief Reply to Mulaik and Millsap; Bollen; Bentler; and Herting and Costner". Structural Equation Modeling. 7 (1): 111-123.</ref> Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.
 
Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain.<ref name="Westland15">Westland, J.C. (2015). Structural Equation Modeling: From Paths to Networks. New York, Springer.</ref><ref>{{Cite journal|last=Christ|first=Carl F.|date=1994|title=The Cowles Commission's Contributions to Econometrics at Chicago, 1939-1955|url=https://www.jstor.org/stable/2728422|journal=Journal of Economic Literature|volume=32|issue=1|pages=30–59|jstor=2728422|issn=0022-0515}}</ref> The economic version of SEM can be seen in SEMNET discussions of endogeneity, and in the heat produced as Judea Pearl's approach to causality via directed acyclic graphs (DAGs) rubs against economic approaches to modeling.<ref name="Pearl09"/> Discussions comparing and contrasting various SEM approaches are available<ref name="Imbens20">Imbens, G.W. (2020). "Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics". Journal of Economic Literature. 58 (4): 1129-1179.</ref><ref name="BP13">Bollen, K.A.; Pearl, J. (2013) "Eight myths about causality and structural equation models." In S.L. Morgan (ed.) Handbook of Causal Analysis for Social Research, Chapter 15, 301–328, Springer. {{doi|10.1007/978-94-007-6094-3_15}}</ref> but disciplinary differences in data structures and the concerns motivating economic models make reunion unlikely. Pearl<ref name=Pearl09 /> extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations.
Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.<ref name=BP13 />
 
SEM analyses are popular in the social sciences because computer programs make it possible to estimate complicated causal structures, but the complexity of the models introduces substantial variability in the quality of the results. Some, but not all, results are obtained without the "inconvenience" of understanding experimental design, statistical control, the consequences of sample size, and other features contributing to good research design.{{Citation needed|date=July 2023}}
* and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).
 
The latent level of a model is composed of [[Exogenous and endogenous variables|''endogenous'' and ''exogenous'' variables]]. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether [[Exogenous and endogenous variables|exogenous or endogenous]], is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.<ref name="BMvH03">Borsboom, D.; Mellenbergh, G. J.; van Heerden, J. (2003). "The theoretical status of latent variables." Psychological Review, 110 (2): 203–219. {{doi|10.1037/0033-295X.110.2.203}}</ref>
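These roles can be sketched by simulating from such a model. In the hypothetical Python fragment below (variable names and numeric values are invented for illustration), an exogenous latent causes an endogenous latent through a regression-style equation with a residual, and an observed indicator adds measurement error to the endogenous true scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of cases

# Exogenous latent variable: true scores drawn from a background distribution.
ability = rng.normal(0.0, 1.0, n)

# Endogenous latent variable: the dependent variable in a regression-style
# equation, with a residual encapsulating unknown causes (values illustrative).
achievement = 0.6 * ability + rng.normal(0.0, 0.8, n)

# Observed indicator: the endogenous true scores plus measurement error.
test_score = achievement + rng.normal(0.0, 0.5, n)

# Regressing the observed indicator on the latent cause recovers roughly the
# structural effect (close to 0.6), while the indicator's variance exceeds the
# latent variance because of the added measurement error.
slope = np.cov(test_score, ability)[0, 1] / np.var(ability)
print(round(slope, 2))
```

In real applications the latent true scores are of course unobserved; the simulation merely makes the postulated causal roles of exogenous, endogenous, residual, and error variables concrete.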
 
The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations.<ref name="JS76"/><ref name="Hayduk87"/><ref name="Bollen89"/><ref name="Kline16" >Kline, Rex. (2016) Principles and Practice of Structural Equation Modeling (4th ed). New York, Guilford Press. ISBN 978-1-4625-2334-4</ref> Texts and programs "simplifying" model specification via diagrams or via equations permitting user-selected variable names re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components, leaving unrecognized issues lurking within the model's structure and underlying matrices.
Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis (CFA) is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.
 
A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution."<ref name="MacCallum1986" /> Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.<ref name="HL12">Hayduk, L. A.; Littvay, L. (2012) "Should researchers use single indicators, best indicators, or multiple indicators in structural equation models?" BMC Medical Research Methodology, 12 (159): 1-17. {{doi|10.1186/1471-2288-12-159}}</ref>
 
"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser who addressed the mathematics behind why the {{math|χ<sup>2</sup>}} test can have (though it does not always have) considerable power to detect model misspecification.<ref name="BMKAG02">Browne, M.W.; MacCallum, R.C.; Kim, C.T.; Andersen, B.L.; Glaser, R. (2002) "When fit indices and residuals are incompatible." Psychological Methods. 7: 403-421.</ref> The probability accompanying a {{math|χ<sup>2</sup>}} test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small {{math|χ<sup>2</sup>}} probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, McCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to {{math|χ<sup>2</sup>}}. The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazkerka-Robinson, Cummings, Levers and Beres<ref name="HP-RCLB05">Hayduk, L. A.; Pazderka-Robinson, H.; Cummings, G.G.; Levers, M-J. D.; Beres, M. A. (2005) "Structural equation model testing and the quality of natural killer cell activity measurements." BMC Medical Research Methodology. 5 (1): 1-9.{{cite journal |doi: =10.1186/1471-2288-5-1.}} Note the correction of .922 to .992, and the correction of .944 to .994 in the Hayduk, et al. Table 1.</ref> who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. 
The fault was not in the math of the indices or in the over-sensitivity of {{math|χ<sup>2</sup>}} testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, ___location, or seriousness of problems in a model's specification.<ref name="Hayduk14a">Hayduk, L.A. (2014a) "Seeing perfectly-fitting factor models that are causally misspecified: Understanding that close-fitting models can be worse." Educational and Psychological Measurement. 74 (6): 905-926. {{doi|10.1177/0013164414527449}}</ref>
 
Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that {{math|χ<sup>2</sup>}} increases (and hence {{math|χ<sup>2</sup>}} probability decreases) with increasing sample size (N). There are two mistakes in discounting {{math|χ<sup>2</sup>}} on this basis. First, for proper models, {{math|χ<sup>2</sup>}} does not increase with increasing N,<ref name="Hayduk14b"/> so if {{math|χ<sup>2</sup>}} increases with N that itself is a sign that something is detectably problematic. And second, for models that are detectably misspecified, {{math|χ<sup>2</sup>}} increase with N provides the good news of increasing statistical power to detect model misspecification (namely, a reduced risk of Type II error). Some kinds of important misspecifications cannot be detected by {{math|χ<sup>2</sup>}},<ref name="Hayduk14a"/> so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration.<ref name="Barrett07"/><ref name="Hayduk14b"/> The {{math|χ<sup>2</sup>}} model test, possibly adjusted,<ref name="SB94">Satorra, A.; Bentler, P. M. (1994) "Corrections to test statistics and standard errors in covariance structure analysis". In A. von Eye and C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Thousand Oaks, CA: Sage.</ref> is the strongest available structural equation model test.
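The first point, that {{math|χ<sup>2</sup>}} does not grow with N when the model is correct, can be checked by simulation. In the sketch below (the population covariance matrix is arbitrary and chosen only for illustration), a sample covariance matrix is compared to the true population matrix via the maximum-likelihood discrepancy function; the resulting statistic follows a chi-square distribution whose expected value depends on the degrees of freedom, not on the sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p = 4
sigma = np.eye(p)  # true population covariance matrix (illustrative)

def chi_square_stat(n):
    """T = (n-1) * F_ML comparing a sample covariance S to the true Sigma."""
    data = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    s = np.cov(data, rowvar=False)
    # Maximum-likelihood discrepancy: ln|Sigma| - ln|S| + tr(S Sigma^-1) - p
    f_ml = (np.log(np.linalg.det(sigma)) - np.log(np.linalg.det(s))
            + np.trace(s @ np.linalg.inv(sigma)) - p)
    return (n - 1) * f_ml

# For a correctly specified model, T is approximately chi-square distributed
# with df independent of n, so T hovers around df at every sample size.
df = p * (p + 1) // 2
for n in (200, 2000, 20000):
    t = chi_square_stat(n)
    print(n, round(t, 1), round(1 - stats.chi2.cdf(t, df), 3))
```

Only when the hypothesized covariance structure differs from the population structure does the discrepancy fail to shrink with n, making T, and hence the power to detect the misspecification, grow with sample size.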
Some of the more commonly used fit statistics include
* [[Chi-square test|Chi-square]]
** A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.<ref name="Hayduk14b">Hayduk, L.A. (2014b) "Shame for disrespecting evidence: The personal consequences of insufficient respect for structural equation model testing." BMC Medical Research Methodology, 14 (124): 1-10. {{doi|10.1186/1471-2288-14-124}}</ref>
* [[Akaike information criterion]] (AIC)
** An index of relative model fit: The preferred model is the one with the lowest AIC value.
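The AIC ranking can be sketched numerically. In the fragment below, the chi-square values and free-parameter counts are invented for illustration, and one common SEM form of the criterion is assumed: AIC computed as the model chi-square plus twice the number of freely estimated parameters, so that added parameters must buy enough fit improvement to be worthwhile.

```python
# Hypothetical fit results for two competing models (values are illustrative,
# not taken from any real analysis): each entry records the model's
# chi-square statistic and its number of freely estimated parameters.
models = {
    "model_A": {"chisq": 24.1, "free_params": 12},
    "model_B": {"chisq": 18.7, "free_params": 16},
}

# One common SEM form of Akaike's criterion: AIC = chi-square + 2q,
# penalizing each additional free parameter q.
aic = {name: m["chisq"] + 2 * m["free_params"] for name, m in models.items()}
preferred = min(aic, key=aic.get)  # lowest AIC is preferred
print(aic, preferred)
```

Here model_B fits better in absolute terms, but its four extra parameters cost more than the fit they purchase, so the AIC comparison prefers model_A. Because AIC is only a relative index, the preferred model may still fail the chi-square test of absolute fit.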
Careful interpretation of both failing and fitting models can advance research. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients.<ref name="Millsap07">Millsap, R.E. (2007) "Structural equation modeling made difficult." Personality and Individual Differences. 42: 875-881.</ref> Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.<ref name="HP-RCLB05"/><ref name="EHR82">Entwisle, D.R.; Hayduk, L.A.; Reilly, T.W. (1982) Early Schooling: Cognitive and Affective Outcomes. Baltimore: Johns Hopkins University Press.</ref><ref name="Hayduk94">Hayduk, L.A. (1994). "Personal space: Understanding the simplex model." Journal of Nonverbal Behavior. 18 (3): 245-260.</ref><ref name="HSR97">Hayduk, L.A.; Stratkotter, R.; Rovers, M.W. (1997) "Sexual Orientation and the Willingness of Catholic Seminary Students to Conform to Church Teachings." Journal for the Scientific Study of Religion. 36 (3): 455-467.</ref>
 
The multiple ways of conceptualizing PLS models<ref name="RSR17">Rigdon, E.E.; Sarstedt, M.; Ringle, M. (2017) "On Comparing Results from CB-SEM and PLS-SEM: Five Perspectives and Five Recommendations". Marketing ZFP. 39 (3): 4–16. {{doi|10.15358/0344-1369-2017-3-4}}</ref> complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on ''R''<sup>2</sup> or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model's coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle<ref name="RSR17"/> point to differences in PLS modelers' objectives, and corresponding differences in model features warranting interpretation.
 
Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term ''causal model'' must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions; it may or may not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even randomized experiments cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.<ref name="Pearl09" />
The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports.<ref name="Hayduk14b"/> The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing "endogeneity" – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models.<ref name="Mulaik09">Mulaik, S.A. (2009) Foundations of Factor Analysis (second edition). Chapman and Hall/CRC. Boca Raton, pages 130-131.</ref> The comments by Bollen and Pearl regarding myths about causality in the context of SEM<ref name="BP13" /> reinforced the centrality of causal thinking in the context of SEM.
 
A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007),<ref name="LH07">Levy, R.; Hancock, G.R. (2007) “A framework of statistical tests for comparing mean and covariance structure models.” Multivariate Behavioral Research. 42(1): 33-66.</ref> for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016)<ref name="Kline16"/> remain disturbingly weak in their presentation of model testing.<ref name="Hayduk18">Hayduk, L.A. (2018) “Review essay on Rex B. Kline’s Principles and Practice of Structural Equation Modeling: Encouraging a fifth edition.” Canadian Studies in Population. 45 (3-4): 154-178. DOI {{doi|10.25336/csp29397}}</ref> Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.
 
An additional controversy that touched the fringes of the previous controversies awaits ignition.{{cn|date=March 2024}} Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012)<ref name="HL12"/> discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time,<ref name="EHR82"/> but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective.
* Deep Path Modelling <ref name="Ing2024"/>
* Exploratory Structural Equation Modeling <ref>{{Cite journal |last1=Marsh |first1=Herbert W. |last2=Morin |first2=Alexandre J.S. |last3=Parker |first3=Philip D. |last4=Kaur |first4=Gurvinder |date=2014-03-28 |title=Exploratory Structural Equation Modeling: An Integration of the Best Features of Exploratory and Confirmatory Factor Analysis |url=https://www.annualreviews.org/doi/10.1146/annurev-clinpsy-032813-153700 |journal=Annual Review of Clinical Psychology |language=en |volume=10 |issue=1 |pages=85–110 |doi=10.1146/annurev-clinpsy-032813-153700 |pmid=24313568 |issn=1548-5943}}</ref>
* Fusion validity models<ref name="HEH19">Hayduk, L.A.; Estabrooks, C.A.; Hoben, M. (2019). "Fusion validity: Theory-based scale assessment via causal structural equation modeling." Frontiers in Psychology, 10: 1139. {{doi|10.3389/fpsyg.2019.01139}}</ref>
* [[Item response theory]] models {{citation needed|date=July 2023}}
* [[Latent class models]] {{citation needed|date=July 2023}}
* Random intercepts models {{citation needed|date=July 2023}}
* Structural Equation Model Trees {{citation needed|date=July 2023}}
* Structural Equation [[Multidimensional scaling]]<ref>{{Cite journal |last=Vera |first=José Fernando |last2=Mair |first2=Patrick |date=2019-09-03 |title=SEMDS: An R Package for Structural Equation Multidimensional Scaling |url=https://www.tandfonline.com/doi/full/10.1080/10705511.2018.1561292 |journal=Structural Equation Modeling: A Multidisciplinary Journal |language=en |volume=26 |issue=5 |pages=803–818 |doi=10.1080/10705511.2018.1561292 |issn=1070-5511}}</ref>
 
== Software ==