Approximate Bayesian computation

A number of [[Heuristic (computer science)|heuristic approaches]] to the quality control of ABC have been proposed, such as the quantification of the fraction of parameter variance explained by the summary statistics.<ref name="Bertorelle" /> A common class of methods aims at assessing whether the inference yields valid results, regardless of the actually observed data. For instance, given a set of parameter values, which are typically drawn from the prior or the posterior distributions for a model, one can generate a large number of artificial datasets. In this way, the quality and robustness of ABC inference can be assessed in a controlled setting, by gauging how well the chosen ABC inference method recovers the true parameter values, and also the true model if multiple structurally different models are considered simultaneously.
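
The controlled recovery check described above can be illustrated with a minimal sketch, shown here for a toy Gaussian model with a basic ABC rejection sampler; the model, prior, summary statistic and tolerance <code>eps</code> are illustrative assumptions and not part of the cited methods.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Toy model: i.i.d. normal observations with unknown mean theta.
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    # Summary statistic: the sample mean.
    return data.mean()

def abc_rejection(s_obs, n_draws=5000, eps=0.05):
    # Basic ABC rejection: keep prior draws whose simulated summary
    # falls within eps of the observed summary.
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, 2.0)  # draw from the prior
        if abs(summary(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Draw "true" parameters from the prior, run ABC on the resulting
# synthetic data, and measure how well the posterior mean recovers them.
errors = []
for _ in range(50):
    theta_true = rng.normal(0.0, 2.0)
    post = abc_rejection(summary(simulate(theta_true)))
    if len(post) > 0:
        errors.append(post.mean() - theta_true)

print("bias:", np.mean(errors), "RMSE:", np.sqrt(np.mean(np.square(errors))))
</syntaxhighlight>

A systematic bias or a large root-mean-square error in such a check signals that the chosen summary statistics or tolerance do not allow reliable recovery of the parameters.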
 
Another class of methods assesses whether the inference was successful in light of the given observed data, for example, by comparing the [[posterior predictive distribution]] of summary statistics to the summary statistics observed.<ref name="Bertorelle" /> Beyond that, [[Cross-validation (statistics)|cross-validation]] techniques<ref name="Arlot" /> and [[Predictive analytics|predictive checks]]<ref name="Dawid" /><ref name="Vehtari" /> represent promising future strategies to evaluate the stability and out-of-sample predictive validity of ABC inferences. This is particularly important when modeling large data sets, because the posterior support of a particular model can then appear overwhelmingly conclusive, even if all proposed models are in fact poor representations of the stochastic system underlying the observed data. Out-of-sample predictive checks can reveal potential systematic biases within a model and provide clues about how to improve its structure or parametrization.
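
As a concrete illustration of a posterior predictive check on summary statistics, the following minimal sketch re-simulates data at parameter values drawn from an ABC posterior (mocked here for brevity) and compares the replicated summaries to the observed one; the model and all numerical values are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=100):
    # Same toy Gaussian model as above.
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    return data.mean()

s_obs = 0.3                              # observed summary statistic
post = rng.normal(0.3, 0.1, size=500)    # stand-in for ABC posterior draws

# Posterior predictive distribution of the summary statistic:
# re-simulate one dataset per posterior draw.
s_rep = np.array([summary(simulate(theta)) for theta in post])

# Two-sided posterior predictive p-value; values near 0 indicate that
# the model cannot reproduce the observed summary.
p = np.mean(s_rep >= s_obs)
print("posterior predictive p-value:", 2 * min(p, 1 - p))
</syntaxhighlight>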
 
Fundamentally novel approaches for model choice that incorporate quality control as an integral step in the process have recently been proposed. ABC allows, by construction, estimation of the discrepancies between the observed data and the model predictions with respect to a comprehensive set of statistics. These statistics are not necessarily the same as those used in the acceptance criterion. The resulting discrepancy distributions have been used for selecting models that are in agreement with many aspects of the data simultaneously,<ref name="Ratmann" /> and model inconsistency is detected from conflicting and co-dependent summaries. Another quality-control-based method for model selection employs ABC to approximate the effective number of model parameters and the deviance of the posterior predictive distributions of summaries and parameters.<ref name="Francois" /> The deviance information criterion is then used as a measure of model fit. It has also been shown that the models preferred on the basis of this criterion can conflict with those supported by [[Bayes factor]]s. For this reason, it is useful to combine different methods for model selection to obtain correct conclusions.
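
The deviance-based criterion can be sketched as follows: since the likelihood is intractable, it is replaced here by a kernel estimate obtained from simulated summaries, from which the posterior mean deviance, the effective number of parameters and the deviance information criterion (DIC) are computed. This is a minimal illustration under an assumed toy model and kernel bandwidth, not the exact procedure of the cited reference.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=100):
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    return data.mean()

def approx_loglik(theta, s_obs, m=200, h=0.05):
    # Kernel estimate of the summary-based likelihood: simulate m
    # datasets at theta and smooth their summaries with a Gaussian
    # kernel of bandwidth h centred on the observed summary.
    s = np.array([summary(simulate(theta)) for _ in range(m)])
    dens = np.mean(np.exp(-0.5 * ((s - s_obs) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-300)

s_obs = 0.3
post = rng.normal(0.3, 0.1, size=200)    # stand-in for ABC posterior draws

dev = np.array([-2 * approx_loglik(t, s_obs) for t in post])
p_d = dev.mean() - (-2 * approx_loglik(post.mean(), s_obs))  # effective number of parameters
dic = dev.mean() + p_d                                       # deviance information criterion
print("p_D:", p_d, "DIC:", dic)
</syntaxhighlight>

Comparing DIC values across candidate models then favours the model with the smallest criterion, although, as noted above, this choice can disagree with the one implied by Bayes factors.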