{{Bayesian statistics}}
'''Approximate Bayesian computation''' ('''ABC''') constitutes a class of [[Computational science|computational methods]] rooted in [[Bayesian statistics]] that can be used to estimate the posterior distributions of model parameters.
In all model-based [[statistical inference]], the [[likelihood|likelihood function]] is of central importance, since it expresses the probability of the observed data under a particular [[statistical model]], and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate.
Although Diggle and Gratton's approach had opened a new frontier, their method was not yet exactly identical to what is now known as ABC, as it aimed at approximating the likelihood rather than the posterior distribution. An article of [[Simon Tavaré]] and co-authors was the first to propose an ABC algorithm for posterior inference.<ref name="Tavare" /> In their seminal work, inference about the genealogy of DNA sequence data was considered, and in particular the problem of estimating the posterior distribution of the time to the [[most recent common ancestor]] of the sampled individuals. Such inference is analytically intractable for many demographic models, but the authors presented ways of simulating coalescent trees under the putative models. A sample from the posterior of model parameters was obtained by accepting or rejecting proposals based on comparing the number of segregating sites in the synthetic and real data. This work was followed by an applied study modeling the variation in the human Y chromosome by [[Jonathan K. Pritchard]] and co-authors using the ABC method.<ref name="Pritchard1999" /> Finally, the term approximate Bayesian computation was established by Mark Beaumont and co-authors,<ref name="Beaumont2002" /> who extended the ABC methodology further and discussed the suitability of the ABC approach more specifically for problems in population genetics. Since then, ABC has spread to applications outside population genetics, such as systems biology, epidemiology, and [[phylogeography]].
Approximate Bayesian computation can be understood as a kind of Bayesian version of [[indirect inference]].
Several efficient Monte Carlo-based approaches have been developed to sample from the ABC posterior distribution for estimation and prediction. A popular choice is the SMC samplers algorithm,<ref>{{Cite journal |last1=Del Moral |first1=Pierre |last2=Doucet |first2=Arnaud |last3=Jasra |first3=Ajay |date=2006 |title=Sequential Monte Carlo Samplers |url=https://www.jstor.org/stable/3879283 |journal=Journal of the Royal Statistical Society. Series B (Statistical Methodology) |volume=68 |issue=3 |pages=411–436 |doi=10.1111/j.1467-9868.2006.00553.x |jstor=3879283 |issn=1369-7412|arxiv=cond-mat/0212648 }}</ref><ref>{{Cite journal |last1=Del Moral |first1=Pierre |last2=Doucet |first2=Arnaud |last3=Peters |first3=Gareth |date=2004 |title=Sequential Monte Carlo Samplers CUED Technical Report |url=https://www.ssrn.com/abstract=3841065 |journal=SSRN Electronic Journal |language=en |doi=10.2139/ssrn.3841065 |issn=1556-5068|url-access=subscription }}</ref><ref>{{Cite journal |last=Peters |first=Gareth |date=2005 |title=Topics in Sequential Monte Carlo Samplers |url=https://www.ssrn.com/abstract=3785582 |journal=SSRN Electronic Journal |language=en |doi=10.2139/ssrn.3785582 |issn=1556-5068|url-access=subscription }}</ref> adapted to the ABC context as the SMC-ABC method.<ref>{{Cite journal |last1=Sisson |first1=S. A. |last2=Fan |first2=Y. |last3=Tanaka |first3=Mark M. |date=2007-02-06 |title=Sequential Monte Carlo without likelihoods |journal=Proceedings of the National Academy of Sciences |language=en |volume=104 |issue=6 |pages=1760–1765 |doi=10.1073/pnas.0607208104 |doi-access=free |issn=0027-8424 |pmc=1794282 |pmid=17264216|bibcode=2007PNAS..104.1760S }}</ref><ref name="Peters 2009"/><ref>{{Cite journal |last1=Peters |first1=G. W. |last2=Sisson |first2=S. A. |last3=Fan |first3=Y. 
|date=2012-11-01 |title=Likelihood-free Bayesian inference for α-stable models |url=https://www.sciencedirect.com/science/article/pii/S0167947310003786 |journal=Computational Statistics & Data Analysis |series=1st issue of the Annals of Computational and Financial Econometrics |volume=56 |issue=11 |pages=3743–3756 |doi=10.1016/j.csda.2010.10.004 |issn=0167-9473|url-access=subscription }}</ref><ref>{{Cite journal |last1=Peters |first1=Gareth W. |last2=Wüthrich |first2=Mario V. |last3=Shevchenko |first3=Pavel V. |date=2010-08-01 |title=Chain ladder method: Bayesian bootstrap versus classical bootstrap |url=https://www.sciencedirect.com/science/article/pii/S0167668710000351 |journal=Insurance: Mathematics and Economics |volume=47 |issue=1 |pages=36–51 |doi=10.1016/j.insmatheco.2010.03.007 |arxiv=1004.2548 |issn=0167-6687}}</ref>
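The SMC-ABC idea can be illustrated with a short population Monte Carlo sketch: particles are carried through a sequence of decreasing tolerances, being resampled, perturbed, and reweighted at each stage. The Bernoulli model, tolerance schedule, and Gaussian kernel settings below are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(5)
observed = rng.binomial(1, 0.25, size=100)   # synthetic "observed" data, true theta = 0.25
obs_mean = observed.mean()

def dist(theta):
    # distance between summary statistics (sample means) of simulated and observed data
    sim = rng.binomial(1, theta, size=100)
    return abs(sim.mean() - obs_mean)

n_part = 500
eps_schedule = [0.20, 0.10, 0.05, 0.02]      # decreasing tolerances

# Generation 0: plain rejection ABC from the prior U(0, 1) at the loosest tolerance
theta = np.empty(n_part)
for i in range(n_part):
    while True:
        t = rng.uniform(0, 1)
        if dist(t) <= eps_schedule[0]:
            theta[i] = t
            break
w = np.full(n_part, 1.0 / n_part)

for eps in eps_schedule[1:]:
    tau = np.sqrt(2 * np.cov(theta, aweights=w))     # perturbation scale: twice the weighted variance
    new_theta = np.empty(n_part)
    new_w = np.empty(n_part)
    for i in range(n_part):
        while True:
            # resample a particle from the previous generation and perturb it
            t = rng.choice(theta, p=w) + rng.normal(0, tau)
            if 0 < t < 1 and dist(t) <= eps:
                break
        new_theta[i] = t
        # importance weight: uniform prior density over the mixture proposal density
        kernel = np.exp(-0.5 * ((t - theta) / tau) ** 2) / (tau * np.sqrt(2 * np.pi))
        new_w[i] = 1.0 / np.sum(w * kernel)
    theta, w = new_theta, new_w / new_w.sum()

print(np.sum(w * theta))                             # weighted posterior mean
```

Each generation reuses the previous population as a proposal distribution, so far fewer simulations are wasted than in plain rejection sampling run directly at the final tolerance.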
==Method==
where <math>p(\theta|D)</math> denotes the posterior, <math>p(D|\theta)</math> the likelihood, <math>p(\theta)</math> the prior, and <math>p(D)</math> the evidence (also referred to as the [[marginal likelihood]] or the prior predictive probability of the data). Note that the denominator <math>p(D)</math> normalizes the total probability of the posterior density <math>p(\theta|D)</math> to one; it can be computed by integrating the likelihood over the prior, <math>p(D)=\int p(D|\theta)p(\theta)\,d\theta</math>.
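For simple conjugate models the evidence integral can be checked numerically. The sketch below is a hypothetical Beta-Bernoulli example (not from the article): it estimates <math>p(D)</math> by averaging the likelihood over draws from a uniform prior, for which the exact answer is <math>1/(n+1)</math>.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# Toy model: 7 heads in 10 coin flips, uniform (Beta(1,1)) prior on theta
def likelihood(theta, k=7, n=10):
    return comb(n, k) * theta**k * (1 - theta) ** (n - k)

theta_prior = rng.uniform(0, 1, 200_000)   # samples from the prior
evidence = likelihood(theta_prior).mean()  # p(D) ≈ mean of p(D|theta) over prior draws

# For a Beta(1,1) prior the evidence is exactly 1/(n+1) = 1/11
print(evidence)
```

The same Monte Carlo average is rarely available in ABC settings, where the likelihood itself cannot be evaluated; this is precisely the gap that simulation-based acceptance fills.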
The prior represents beliefs or knowledge (such as physical constraints) about <math>\theta</math> before <math>D</math> is available.
===The ABC rejection algorithm===
All ABC-based methods approximate the likelihood function by simulations, the outcomes of which are compared with the observed data.<ref>{{Cite journal |last=Hunter |first=Dawn |date=2006-12-08 |title=Bayesian inference, Monte Carlo sampling and operational risk |url=https://www.risk.net/journal-of-operational-risk/2160915/bayesian-inference-monte-carlo-sampling-and-operational-risk |journal=Journal of Operational Risk |volume=1 |issue=3 |pages=27–50 |language=en |doi=10.21314/jop.2006.014|url-access=subscription }}</ref><ref name="Peters 2009"/><ref name="Beaumont2010" /><ref name="Bertorelle" /><ref name="Csillery" /> More specifically, with the ABC rejection algorithm — the most basic form of ABC — a set of parameter points is first sampled from the prior distribution. Given a sampled parameter point <math>\hat{\theta}</math>, a data set <math>\hat{D}</math> is then simulated under the statistical model <math>M</math> specified by <math>\hat{\theta}</math>. If the generated <math>\hat{D}</math> is too different from the observed data <math>D</math>, the sampled parameter value is discarded. In precise terms, <math>\hat{D}</math> is accepted with tolerance <math>\epsilon \ge 0</math> if:
:<math>\rho (\hat{D},D)\le\epsilon</math>,
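The rejection scheme above can be sketched in a few lines of code. The Bernoulli model, the sample mean as summary statistic, and the tolerance value below are illustrative assumptions, not part of the article:

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.binomial(1, 0.25, size=100)   # synthetic "observed" data, true theta = 0.25

def distance(sim, obs):
    # rho: absolute difference of the summary statistic (the sample mean)
    return abs(sim.mean() - obs.mean())

accepted = []
for _ in range(20_000):
    theta = rng.uniform(0, 1)                # 1. draw a parameter point from the prior
    sim = rng.binomial(1, theta, size=100)   # 2. simulate a data set under the model
    if distance(sim, observed) <= 0.02:      # 3. accept if rho(D-hat, D) <= epsilon
        accepted.append(theta)

posterior = np.array(accepted)
print(posterior.mean())                      # concentrates near the value that generated the data
```

The accepted parameter values form an approximate sample from the ABC posterior; shrinking the tolerance trades acceptance rate for accuracy.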
The posterior probabilities are obtained via ABC with large <math>n</math> by utilizing the summary statistic (with <math>\epsilon = 0 </math> and <math>\epsilon = 2 </math>) and the full data sequence (with <math>\epsilon = 0 </math>). These are compared with the true posterior, which can be computed exactly and efficiently using the [[Viterbi algorithm]]. The summary statistic utilized in this example is not sufficient, as the deviation from the theoretical posterior is significant even under the stringent requirement of <math>\epsilon = 0 </math>. A much longer observed data sequence would be needed to obtain a posterior concentrated around <math>\theta = 0.25</math>, the true value of <math>\theta</math>.
This example application of ABC uses simplifications for illustrative purposes. More realistic applications of ABC are available in a growing number of peer-reviewed articles.<ref name="Beaumont2010" /><ref name="Bertorelle" /><ref name="Csillery" /><ref name="Marin11" /><ref>{{cite book |first=Christian P. |last=Robert |chapter=Approximate Bayesian Computation: A Survey on Recent Results |year=2016 |editor-last=Cools |editor-first=R. |editor2-last=Nuyens |editor2-first=D. |title=Monte Carlo and Quasi-Monte Carlo Methods |pages=185–205 |series=Springer Proceedings in Mathematics & Statistics |volume=163 |isbn=978-3-319-33505-6 |doi=10.1007/978-3-319-33507-0_7 }}</ref>
==Model comparison with ABC==
===Approximation of the posterior===
A non-negligible <math>\epsilon</math> comes with the price that one samples from <math>p(\theta|\rho(\hat{D},D)\le\epsilon)</math> instead of the true posterior <math>p(\theta|D)</math>. With a sufficiently small tolerance, and a sensible distance measure, the resulting distribution <math>p(\theta|\rho(\hat{D},D)\le\epsilon)</math> should often approximate the actual target distribution <math>p(\theta|D)</math> reasonably well. On the other hand, a tolerance that is large enough that every point in the parameter space becomes accepted will yield a replica of the prior distribution. There are empirical studies of the difference between <math>p(\theta|\rho(\hat{D},D)\le\epsilon)</math> and <math>p(\theta|D)</math> as a function of <math>\epsilon</math>.<ref name="Sisson" />
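The two regimes described here can be checked directly with a toy rejection sampler; the Bernoulli model and the two tolerance values are hypothetical. With a tolerance wide enough to accept everything the "posterior" is just the prior, while a small tolerance concentrates near the data:

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.binomial(1, 0.25, size=100)   # synthetic "observed" data, true theta = 0.25

def abc_posterior(eps, n_draws=20_000):
    # vectorized rejection ABC with the sample mean as summary statistic
    theta = rng.uniform(0, 1, n_draws)
    sims = rng.binomial(1, theta[:, None], size=(n_draws, 100))
    keep = np.abs(sims.mean(axis=1) - observed.mean()) <= eps
    return theta[keep]

wide = abc_posterior(eps=1.0)    # every draw accepted: a replica of the U(0, 1) prior
tight = abc_posterior(eps=0.02)  # small tolerance: concentrates near the observed summary
print(wide.mean(), tight.mean())
```

The mean of `wide` sits near 0.5 (the prior mean), while `tight` tracks the observed sample mean.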
As an attempt to correct some of the error due to a non-zero <math>\epsilon</math>, the use of local linear weighted regression with ABC to reduce the variance of the posterior estimates has been suggested.<ref name="Beaumont2002" /> The method assigns weights to the parameters according to how well the simulated summaries adhere to the observed ones, and performs linear regression between the summaries and the weighted parameters in the vicinity of the observed summaries. The obtained regression coefficients are used to correct the sampled parameters in the direction of the observed summaries. An improvement was suggested in the form of nonlinear regression using a feed-forward neural network model.<ref name="Blum2010" /> However, it has been shown that the posterior distributions obtained with these approaches are not always consistent with the prior distribution, which led to a reformulation of the regression adjustment that respects the prior distribution.<ref name="Leuenberger2009" />
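A minimal sketch of this local linear regression adjustment follows, using Epanechnikov weights and a toy linear model standing in for real simulations; the observed summary value, noise scale, and bandwidth are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
s_obs = 0.25                              # observed summary statistic (assumed)

# Simulated (parameter, summary) pairs, e.g. from a rejection-ABC run
theta = rng.uniform(0, 1, 5000)
s = theta + rng.normal(0, 0.05, 5000)     # toy model: summary = theta + noise

# Epanechnikov weights within bandwidth delta around the observed summary
delta = 0.1
u = (s - s_obs) / delta
w = np.where(np.abs(u) <= 1, 1 - u**2, 0.0)
keep = w > 0

# Weighted least squares of theta on (s - s_obs)
X = np.column_stack([np.ones(keep.sum()), s[keep] - s_obs])
W = np.diag(w[keep])
alpha, beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ theta[keep])

# Correct accepted parameters in the direction of the observed summary
theta_adj = theta[keep] - beta * (s[keep] - s_obs)
print(theta_adj.mean())
```

The adjustment shifts each accepted parameter by the fitted local slope times its summary discrepancy, tightening the posterior sample around the observed summary.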
===Choice and sufficiency of summary statistics===
Summary statistics may be used to increase the acceptance rate of ABC for high-dimensional data. Low-dimensional sufficient statistics are optimal for this purpose, as they capture all relevant information present in the data in the simplest possible form.<ref name="Csillery" />
One approach to capturing most of the information present in data would be to use many statistics, but the accuracy and stability of ABC appears to decrease rapidly with an increasing number of summary statistics.<ref name="Beaumont2010" /><ref name="Csillery" /> Instead, a better strategy is to focus on the relevant statistics only, with relevance depending on the whole inference problem, on the model used, and on the data at hand.<ref name="Nunes" />
An algorithm has been proposed for identifying a representative subset of summary statistics, by iteratively assessing whether an additional statistic introduces a meaningful modification of the posterior.<ref name="Joyce" /> One of the challenges here is that a large ABC approximation error may heavily influence the conclusions about the usefulness of a statistic at any stage of the procedure. Another method<ref name="Nunes" /> proceeds in two main steps. First, a reference approximation of the posterior is constructed by minimizing the [[Entropy (statistical thermodynamics)|entropy]]. Sets of candidate summaries are then evaluated by comparing the ABC-approximated posteriors with the reference posterior.
With both of these strategies, a subset of statistics is selected from a large set of candidate statistics. Instead, the [[partial least squares regression]] approach uses information from all the candidate statistics, each being weighted appropriately.<ref name="Wegmann" /> Recently, a method for constructing summaries in a semi-automatic manner has attracted considerable interest.<ref name="Fearnhead" /> This method is based on the observation that the optimal choice of summary statistics, when minimizing the quadratic loss of the parameter point estimates, can be obtained through the posterior mean of the parameters, which is approximated by performing a linear regression based on the simulated data. Summary statistics for model selection have been obtained using [[multinomial logistic regression]] on simulated data, treating competing models as the label to predict.<ref name="Prangle" />
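The semi-automatic construction can be sketched as follows: regress the parameter on candidate statistics computed from pilot simulations, and use the fitted value (an estimate of the posterior mean) as a single one-dimensional summary. The Bernoulli model and the particular candidate statistics below are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Pilot simulations: draw theta from the prior, simulate data, compute candidate statistics
n_pilot = 5000
theta = rng.uniform(0, 1, n_pilot)
data = rng.binomial(1, theta[:, None], size=(n_pilot, 50))
candidates = np.column_stack([
    data.mean(axis=1),   # informative statistic
    data.std(axis=1),    # partially informative
    data[:, 0],          # nearly uninformative on its own
])

# Regress theta on the candidates; the fitted value approximates E[theta | data]
X = np.column_stack([np.ones(n_pilot), candidates])
coef, *_ = np.linalg.lstsq(X, theta, rcond=None)

def summary(d):
    # the learned scalar summary, used in place of the raw candidate statistics
    stats = np.array([1.0, d.mean(), d.std(), d[0]])
    return stats @ coef

obs = rng.binomial(1, 0.3, size=50)
print(summary(obs))
```

Collapsing many candidates into one learned statistic keeps the comparison in ABC low-dimensional while retaining the information the regression found useful.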
Methods that identify summary statistics while simultaneously assessing their influence on the approximation of the posterior would be of substantial value.<ref name="Marjoram" /> This is because the choice of summary statistics and the choice of tolerance constitute two sources of error in the resulting posterior distribution. These errors may corrupt the ranking of models and may also lead to incorrect model predictions.
===Bayes factor with ABC and summary statistics===
| <ref name="Wegmann2010" />
|-
| [
| Open source software package consisting of several C and R programs that are run with a Perl "front-end". Hierarchical coalescent models. Population genetic data from multiple co-distributed species.
| <ref name="Hickerson07" />
| [https://abcpy.readthedocs.io/en/latest/ ABCpy]
| Python package for ABC and other likelihood-free inference schemes. Several state-of-the-art algorithms available. Provides a quick way to integrate existing generative models (from C++, R, etc.), user-friendly parallelization using MPI or Spark, and summary statistics learning (with neural networks or linear regression).
| <ref>{{cite journal |title=ABCpy: A High-Performance Computing Perspective to Approximate Bayesian Computation |last1=Dutta |first1=R |last2=Schoengens |first2=M |last3=Pacchiardi |first3=L |last4=Ummadisingu |first4=A |last5=Widmer |first5=N |last6=Onnela |first6=J. P. |last7=Mira |first7=A|journal=Journal of Statistical Software |author7-link=Antonietta Mira |year=2021|volume=100 |issue=7 |doi=10.18637/jss.v100.i07 |doi-access=free|arxiv=1711.04694 |s2cid=88516340 }}</ref>
|}
The suitability of individual software packages depends on the specific application at hand, the computer system environment, and the algorithms required.
==References==
{{Academic peer reviewed|Q4781761|doi-access=free}}
{{reflist|35em|refs=
<ref name="Bharti">{{cite journal | last1 = Bharti | first1 = A | last2 = Briol | first2 = F.-X. | last3 = Pedersen | first3 = T | year = 2021 | title = A General Method for Calibrating Stochastic Radio Channel Models with Kernels | journal = IEEE Transactions on Antennas and Propagation | volume = 70 | issue = 6 | pages = 3986–4001 | doi=10.1109/TAP.2021.3083761| arxiv = 2012.09612 | s2cid = 233880538 }}</ref>
<ref name="Bertorelle">{{cite journal | last1 = Bertorelle | first1 = G | last2 = Benazzo | first2 = A | last3 = Mona | first3 = S | year = 2010 | title = ABC as a flexible framework to estimate demography over space and time: some cons, many pros | journal = Molecular Ecology | volume = 19 | issue = 13| pages = 2609–2625 | doi=10.1111/j.1365-294x.2010.04690.x| pmid = 20561199 | bibcode = 2010MolEc..19.2609B | s2cid = 12129604 | doi-access = free }}</ref>
<ref name="Csillery">{{cite journal | last1 = Csilléry | first1 = K | last2 = Blum | first2 = MGB | last3 = Gaggiotti | first3 = OE | last4 = François | first4 = O | year = 2010 | title = Approximate Bayesian Computation (ABC) in practice | journal = Trends in Ecology & Evolution | volume = 25 | issue = 7| pages = 410–418 | doi=10.1016/j.tree.2010.04.001| pmid = 20488578 | bibcode = 2010TEcoE..25..410C | s2cid = 13957079 }}</ref>
<ref name="Rubin">{{cite journal | last1 = Rubin | first1 = DB | year = 1984 | title = Bayesianly Justifiable and Relevant Frequency Calculations for the Applied Statistician | journal = The Annals of Statistics | volume = 12 | issue = 4| pages = 1151–1172 | doi=10.1214/aos/1176346785| doi-access = free }}</ref>
<ref name="Marjoram">{{cite journal | last1 = Marjoram | first1 = P | last2 = Molitor | first2 = J | last3 = Plagnol | first3 = V | last4 = Tavare | first4 = S | year = 2003 | title = Markov chain Monte Carlo without likelihoods | journal = Proc Natl Acad Sci U S A | volume = 100 | issue = 26| pages = 15324–15328 | doi=10.1073/pnas.0306899100| pmid = 14663152 | pmc = 307566 | bibcode = 2003PNAS..10015324M | doi-access = free }}</ref>
<ref name="Templeton2010">{{cite journal | last1 = Templeton | first1 = AR | year = 2010 | title = Coherent and incoherent inference in phylogeography and human evolution | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 107 | issue = 14| pages = 6376–6381 | doi=10.1073/pnas.0910647107| pmid = 20308555 | pmc = 2851988 | bibcode = 2010PNAS..107.6376T| doi-access = free }}</ref>
<!--<ref name="Fagundes">{{cite journal | last1 = Fagundes | first1 = NJR | last2 = Ray | first2 = N | last3 = Beaumont | first3 = M | last4 = Neuenschwander | first4 = S | last5 = Salzano | first5 = FM |display-authors=et al | year = 2007 | title = Statistical evaluation of alternative models of human evolution | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 104 | pages = 17614–17619 | doi=10.1073/pnas.0708280104 | pmid=17978179 | pmc=2077041}}</ref>-->
<!-- <ref name="Gelfand">{{cite journal | last1 = Gelfand | first1 = AE | last2 = Dey | first2 = DK | year = 1994 | title = Bayesian model choice: Asymptotics and exact calculations | journal = J R Stat Soc Ser B }}</ref> -->
<!-- <ref name="Bernardo">Bernardo JM, Smith AFM (1994) Bayesian Theory: John Wiley.</ref> -->
<!-- <ref name="Box">Box G, Draper NR (1987) Empirical Model-Building and Response Surfaces: John Wiley and Sons, Oxford.</ref> -->
<ref name="Gerstner">{{cite journal | last1 = Gerstner | first1 = T | last2 = Griebel | first2 = M | year = 2003 | title = Dimension-Adaptive Tensor-Product Quadrature | journal = Computing | volume = 71 | pages = 65–87 | doi=10.1007/s00607-003-0015-5| citeseerx = 10.1.1.16.2434 | s2cid = 16184111 }}</ref>
<ref name="Singer">{{cite journal | last1 = Singer | first1 = AB | last2 = Taylor | first2 = JW | last3 = Barton | first3 = PI | last4 = Green | first4 = WH | year = 2006 | title = Global dynamic optimization for parameter estimation in chemical kinetics | journal = J Phys Chem A | volume = 110 | issue = 3| pages = 971–976 | doi=10.1021/jp0548873| pmid = 16419997 | bibcode = 2006JPCA..110..971S }}</ref>
<ref name="Dean">{{cite arXiv | eprint=1103.5399 | last1=Dean | first1=Thomas A. | last2=Singh | first2=Sumeetpal S. | last3=Jasra | first3=Ajay | last4=Peters | first4=Gareth W. | year=2011 | title=Parameter Estimation for Hidden Markov Models with Intractable Likelihoods | class=math.ST }}</ref>
<ref name="Fearnhead">{{cite arXiv | eprint=1004.1112 | last1=Fearnhead | first1=Paul | last2=Prangle | first2=Dennis | year=2010 | title=Constructing Summary Statistics for Approximate Bayesian Computation: Semi-automatic ABC | class=stat.ME }}</ref>
<ref name="Wilkinson">{{cite journal | last1 = Wilkinson | first1 = RD | year = 2013 | title = Approximate Bayesian computation (ABC) gives exact results under the assumption of model error | journal = Statistical Applications in Genetics and Molecular Biology | volume = 12 | issue = 2 | pages = 129–141 }}</ref>
<ref name="Nunes">{{cite journal | last1 = Nunes | first1 = MA | last2 = Balding | first2 = DJ | year = 2010 | title = On optimal selection of summary statistics for approximate Bayesian computation | journal = Stat Appl Genet Mol Biol | volume = 9 | page = Article 34 | doi=10.2202/1544-6115.1576| pmid = 20887273 | s2cid = 207319754 }}</ref>
<ref name="Joyce">{{cite journal | last1 = Joyce | first1 = P | last2 = Marjoram | first2 = P | year = 2008 | title = Approximately sufficient statistics and Bayesian computation | journal = Stat Appl Genet Mol Biol | volume = 7 | issue = 1| page = Article 26 | doi=10.2202/1544-6115.1389| pmid = 18764775 | s2cid = 38232110 }}</ref>
<ref name="Grelaud">{{cite journal | last1 = Grelaud | first1 = A | last2 = Marin | first2 = J-M | last3 = Robert | first3 = C | last4 = Rodolphe | first4 = F | last5 = Tally | first5 = F | year = 2009 | title = Likelihood-free methods for model choice in Gibbs random fields | journal = Bayesian Analysis | volume = 3 | pages = 427–442 }}</ref>
<ref name="Marin">{{cite arXiv | eprint=1110.4700 | last1=Marin | first1=J.-M. | last2=Pudlo | first2=P. | last3=Robert | first3=C. P. | last4=Ryder | first4=R. | year=2011 | title=Approximate Bayesian Computational methods | class=stat.CO }}</ref>
<ref name="Toni">{{cite journal | last1 = Toni | first1 = T | last2 = Welch | first2 = D | last3 = Strelkowa | first3 = N | last4 = Ipsen | first4 = A | last5 = Stumpf | first5 = M | year = 2007 | title = Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems | journal = J R Soc Interface | volume = 6 | issue = 31| pages = 187–202 | pmid = 19205079 | pmc = 2658655 | doi = 10.1098/rsif.2008.0172 }}</ref>
<ref name="Tavare">{{cite journal | last1 = Tavaré | first1 = S | last2 = Balding | first2 = DJ | last3 = Griffiths | first3 = RC | last4 = Donnelly | first4 = P | year = 1997 | title = Inferring Coalescence Times from DNA Sequence Data | journal = Genetics | volume = 145 | issue = 2 | pages = 505–518 | doi = 10.1093/genetics/145.2.505 | pmc = 1207814 | pmid=9071603}}</ref>
<ref name="Toni2010">{{cite journal | last1 = Toni | first1 = T | last2 = Stumpf | first2 = MPH | year = 2010 | title = Simulation-based model selection for dynamical systems in systems and population biology | journal = Bioinformatics | volume = 26 | issue = 1 | pages = 104–110 }}</ref>
<ref name="Pritchard1999">{{cite journal | last1 = Pritchard | first1 = JK | last2 = Seielstad | first2 = MT | last3 = Perez-Lezaun | first3 = A |display-authors=et al | year = 1999 | title = Population Growth of Human Y Chromosomes: A Study of Y Chromosome Microsatellites | journal = Molecular Biology and Evolution | volume = 16 | issue = 12| pages = 1791–1798 | doi=10.1093/oxfordjournals.molbev.a026091| pmid = 10605120 | doi-access = free }}</ref>
<ref name="Diggle">{{cite journal | last1 = Diggle | first1 = PJ | year = 1984 | title = Monte Carlo Methods of Inference for Implicit Statistical Models | journal = Journal of the Royal Statistical Society, Series B | volume = 46 | issue = 2 | pages = 193–227 | doi = 10.1111/j.2517-6161.1984.tb01290.x }}</ref>
<ref name="Hoel71">{{cite journal | last1 = Hoel | first1 = DG | last2 = Mitchell | first2 = TJ | year = 1971 | title = The simulation, fitting and testing of a stochastic cellular proliferation model | journal = Biometrics | volume = 27 | issue = 1| pages = 191–199 | doi=10.2307/2528937| jstor = 2528937 | pmid = 4926451 }}</ref>
<ref name="Lai">{{cite journal | last1 = Lai | first1 = K | last2 = Robertson | first2 = MJ | last3 = Schaffer | first3 = DV | year = 2004 | title = The sonic hedgehog signaling system as a bistable genetic switch | journal = Biophys. J. | volume = 86 | issue = 5| pages = 2748–2757 | doi=10.1016/s0006-3495(04)74328-3 | pmid = 15111393 | bibcode=2004BpJ....86.2748L | pmc=1304145}}</ref>
<ref name="Bartlett63">{{cite journal | last1 = Bartlett | first1 = MS | year = 1963 | title = The spectral analysis of point processes | journal = Journal of the Royal Statistical Society, Series B | volume = 25 | issue = 2 | pages = 264–296 | doi = 10.1111/j.2517-6161.1963.tb00508.x }}</ref>
<ref name="Blum12">{{cite journal | arxiv=1202.3819 | doi=10.1214/12-STS406 | title=A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation | date=2013 | last1=Blum | first1=M. G. B. | last2=Nunes | first2=M. A. | last3=Prangle | first3=D. | last4=Sisson | first4=S. A. | journal=Statistical Science | volume=28 | issue=2 }}</ref>
<ref name="Fearnhead12">{{cite journal | last1 = Fearnhead | first1 = P | last2 = Prangle | first2 = D | year = 2012 | title = Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation | journal = Journal of the Royal Statistical Society, Series B | volume = 74 | issue = 3| pages = 419–474 | doi=10.1111/j.1467-9868.2011.01010.x| citeseerx = 10.1.1.760.7753 | s2cid = 53861241 }}</ref>
<ref name="Blum10">{{cite journal | last1 = Blum | first1 = MGB | year = 2010 | title = Approximate Bayesian Computation: A Nonparametric Perspective | journal = Journal of the American Statistical Association | volume = 105 | pages = 1178–1187 }}</ref>
<ref name="Kangas16">{{cite journal |last1= Kangasrääsiö |first1= Antti |last2= Lintusaari |first2= Jarno |last3= Skytén |first3= Kusti |last4= Järvenpää |first4= Marko |last5= Vuollekoski |first5= Henri |last6= Gutmann |first6= Michael |last7= Vehtari |first7= Aki |last8= Corander |first8= Jukka |last9= Kaski |first9= Samuel|year= 2016 |title= ELFI: Engine for Likelihood-Free Inference |url=http://approximateinference.org/accepted/KangasraasioEtAl2016.pdf |journal= NIPS 2016 Workshop on Advances in Approximate Bayesian Inference|bibcode= 2017arXiv170800707L |arxiv= 1708.00707 }}</ref>
<ref name="Klinger2017">Klinger, E.; Rickert, D.; Hasenauer, J. (2017). pyABC: distributed, likelihood-free inference.</ref>
<ref name="Salvatier2016">{{cite journal | last1 = Salvatier | first1 = J | last2 = Wiecki | first2 = TV | last3 = Fonnesbeck | first3 = C | year = 2016 | title = Probabilistic programming in Python using PyMC3 | journal = PeerJ Computer Science | volume = 2 | pages = e55 | doi = 10.7717/peerj-cs.55 }}</ref>
<ref name="Prangle">{{cite journal | doi=10.1515/sagmb-2013-0012 | title=Semi-automatic selection of summary statistics for ABC model choice| date=2014 | last1=Prangle | first1=Dennis | last2=Fearnhead | first2=Paul | last3=Cox | first3=Murray P. | last4=Biggs | first4=Patrick J. | last5=French | first5=Nigel P. | journal=Stat Appl Genet Mol Biol | volume=13| issue=1| pages=67–82 | pmid=24323893| arxiv=1302.5624 }}</ref>
}}