Adaptive approximate Bayesian computation

@article{Beaumont2008AdaptiveAB,
  title={Adaptive approximate Bayesian computation},
  author={Mark A. Beaumont and Jean-Marie Cornuet and Jean-Michel Marin and Christian P. Robert},
  journal={Biometrika},
  year={2009},
  volume={96},
  number={4},
  pages={983--990}
}
Sequential techniques can enhance the efficiency of the approximate Bayesian computation algorithm, as in Sisson et al.'s (2007) partial rejection control version. While this method is based upon the theoretical works of Del Moral et al. (2006), the application to approximate Bayesian computation results in a bias in the approximation to the posterior. An alternative version based on genuine importance sampling arguments bypasses this difficulty, in connection with the population Monte Carlo… 
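The importance-sampling correction at the heart of this scheme can be sketched on a toy problem. Everything model-specific below (the Gaussian model, the N(0, 10) prior, the tolerance schedule, the particle count) is an illustrative assumption of mine, not the paper's setup; only the algorithmic skeleton follows the population Monte Carlo ABC idea: propagate a weighted particle population through a decreasing tolerance sequence, perturb with a Gaussian kernel whose variance is twice the weighted variance of the previous population, and reweight by prior density over the kernel-mixture density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (my own choice): data are N(theta, 1), the summary statistic
# is the sample mean, and the prior on theta is N(0, 10).
theta_true = 2.0
y_obs = rng.normal(theta_true, 1.0, size=50)
s_obs = y_obs.mean()

def prior_sample():
    return rng.normal(0.0, np.sqrt(10.0))

def prior_pdf(theta):
    return np.exp(-theta**2 / 20.0) / np.sqrt(20.0 * np.pi)

def summary(theta):
    # One simulation from the model, reduced to its summary statistic.
    return rng.normal(theta, 1.0, size=50).mean()

def abc_pmc(n_particles=500, epsilons=(1.0, 0.5, 0.25, 0.1)):
    # Iteration 0: plain rejection ABC from the prior.
    particles = []
    while len(particles) < n_particles:
        th = prior_sample()
        if abs(summary(th) - s_obs) < epsilons[0]:
            particles.append(th)
    particles = np.array(particles)
    weights = np.full(n_particles, 1.0 / n_particles)

    for eps in epsilons[1:]:
        # Kernel scale: twice the weighted variance of the previous population.
        var = 2.0 * float(np.cov(particles, aweights=weights))
        new_particles = np.empty(n_particles)
        for i in range(n_particles):
            while True:
                j = rng.choice(n_particles, p=weights)
                th = rng.normal(particles[j], np.sqrt(var))
                if abs(summary(th) - s_obs) < eps:
                    new_particles[i] = th
                    break
        # Importance weight: prior over the kernel mixture of the old population.
        kern = np.exp(-(new_particles[:, None] - particles[None, :])**2 / (2 * var))
        kern /= np.sqrt(2 * np.pi * var)
        new_weights = prior_pdf(new_particles) / (kern @ weights)
        weights = new_weights / new_weights.sum()
        particles = new_particles
    return particles, weights

particles, weights = abc_pmc()
post_mean = np.sum(weights * particles)
```

With the broad prior, the weighted posterior mean should settle close to the observed summary statistic as the tolerance shrinks.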

Sequential Monte Carlo with Adaptive Weights for Approximate Bayesian Computation

An ABC SMC method that uses data-based adaptive weights can very substantially improve acceptance rates, as is demonstrated in a series of examples with simulated and real data sets, including a currently topical example from dynamic modelling in systems biology applications.

Filtering via approximate Bayesian computation

This article presents an ABC approximation designed to perform biased filtering for a Hidden Markov Model when the likelihood function is intractable and uses a sequential Monte Carlo algorithm to both fit and sample from the ABC approximation of the target probability density.

Convergence of regression‐adjusted approximate Bayesian computation

We present asymptotic results for the regression-adjusted version of approximate Bayesian computation introduced by Beaumont et al. (2002). We show that for an appropriate choice of the bandwidth…

Improved Convergence of Regression Adjusted Approximate Bayesian Computation

It is shown that for an appropriate choice of the bandwidth in ABC, using regression-adjustment will lead to an ABC posterior that, asymptotically, correctly quantifies uncertainty.

Adaptive Approximate Bayesian Computation Tolerance Selection

A method is proposed for adaptively selecting a sequence of tolerances that improves the computational efficiency of the algorithm over other common techniques, and a stopping rule, obtained as a by-product of the adaptation procedure, assists in automating termination of sampling.

Approximate Bayesian Computation: A Survey on Recent Results

This survey of ABC methods focuses on the recent literature, emphasizing the importance of model choice in applications of ABC and the associated difficulties in its implementation.

Approximate Bayesian computational methods

In this survey, the various improvements and extensions brought on the original ABC algorithm in recent years are studied.

Efficient learning in ABC algorithms

A sequential algorithm adapted from Del Moral et al. (2012) that runs twice as fast as traditional ABC algorithms and is calibrated to minimize the number of simulations from the model.

Research on Approximate Bayesian Computation

The central theme of the approach is to enhance current ABC algorithms by exploiting the structure of the mathematical models via derivative information, introducing Progressive Correction of Gaussian Components (PCGC) as a computationally efficient algorithm for generating proposal distributions in the ABC sampler.

Non-linear regression models for Approximate Bayesian Computation

A machine-learning approach to estimating the posterior density that introduces two innovations: it fits a nonlinear conditional heteroscedastic regression of the parameter on the summary statistics, and then adaptively improves estimation using importance sampling.
...

References

Showing 1–10 of 16 references

Approximate Bayesian computation in population genetics.

A key advantage of the method is that the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters that arise in population genetics problems can be handled without difficulty.
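That "integrating out" happens for free in rejection ABC: nuisance parameters are drawn alongside the parameter of interest in the simulation step and then simply discarded on acceptance. A minimal sketch, on a toy model of my own construction (a Gaussian with unknown mean of interest and nuisance noise scale):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (illustrative assumption): observations are N(mu, sigma),
# mu is the parameter of interest, sigma is a nuisance parameter.
y_obs = rng.normal(1.5, 1.0, size=40)
s_obs = y_obs.mean()

def rejection_abc(n_accept=200, eps=0.2):
    accepted = []
    while len(accepted) < n_accept:
        mu = rng.uniform(-5, 5)          # parameter of interest
        sigma = rng.uniform(0.5, 2.0)    # nuisance parameter, drawn each time
        s_sim = rng.normal(mu, sigma, size=40).mean()
        if abs(s_sim - s_obs) < eps:
            accepted.append(mu)          # sigma is discarded: integrated out
    return np.array(accepted)

sample = rejection_abc()
```

Keeping only `mu` means the accepted sample targets the marginal posterior of `mu`, with `sigma` averaged over its prior and the data constraint.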

Convergence of adaptive mixtures of importance sampling schemes

This work derives sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and shows that Rao–Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.

Convergence of Adaptive Sampling Schemes

This work derives sufficient convergence conditions for a wide class of population Monte Carlo algorithms and shows that Rao–Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions simply do not benefit from repeated updating.

Population Monte Carlo

This paper describes the population Monte Carlo principle, which consists of iterated generations of importance samples, with importance functions depending on the previously generated sample; such schemes can be iterated like MCMC algorithms while being more robust to dependence and starting values.

Sequential Monte Carlo without likelihoods

This work proposes a sequential Monte Carlo sampler that convincingly overcomes inefficiencies of existing methods and demonstrates its implementation through an epidemiological study of the transmission rate of tuberculosis.

Inferring population history with DIY ABC: a user-friendly approach to approximate Bayesian computation

Describes the key methods used in DIY ABC, a computer program for inference based on approximate Bayesian computation (ABC) in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples.

Markov chain Monte Carlo without likelihoods

A Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods is presented, which can be used in frequentist applications, in particular for maximum-likelihood estimation.
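The likelihood-free trick in this line of work is to replace the intractable likelihood ratio in the Metropolis acceptance step with a simulation check: accept a proposal only if data simulated under it fall within a tolerance of the observations. A sketch on a toy Gaussian model with a flat prior (both my own illustrative choices, under which the prior and proposal ratios cancel):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model (illustrative assumption): observations are N(theta, 1),
# the summary is the sample mean, and the prior is flat on [-5, 5].
y_obs = rng.normal(0.7, 1.0, size=30)
s_obs = y_obs.mean()

def abc_mcmc(n_iter=20000, eps=0.3, step=0.5):
    theta = 0.0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + rng.normal(0.0, step)   # symmetric random walk
        if -5 <= prop <= 5:                    # flat prior: ratio is 1
            s_sim = rng.normal(prop, 1.0, size=30).mean()
            if abs(s_sim - s_obs) < eps:       # likelihood-free acceptance
                theta = prop
        chain[t] = theta
    return chain

chain = abc_mcmc()
```

After burn-in the chain samples the ABC posterior, which concentrates around the observed summary as `eps` shrinks; the cost is the well-known tendency of such chains to stick when the current tolerance region is hard to hit.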

Adaptive importance sampling in general mixture classes

An adaptive algorithm is proposed that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion.
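The update-the-proposal-from-the-weighted-sample idea can be shown in miniature. The sketch below uses a single Gaussian proposal rather than the paper's mixture, and a toy target of my own choosing; each round re-fits the proposal's mean and variance by weighted moment matching.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    # Unnormalised N(3, 1) target (illustrative assumption).
    return -0.5 * (x - 3.0) ** 2

def adaptive_is(n=2000, rounds=5):
    mu, sigma = 0.0, 5.0  # deliberately poor initial proposal
    for _ in range(rounds):
        x = rng.normal(mu, sigma, size=n)
        # log importance weights: target minus proposal log-density
        # (the proposal's constant -0.5*log(2*pi) cancels on normalisation).
        logw = log_target(x) + 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        mu = np.sum(w * x)                           # weighted mean
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2))   # weighted std dev
    return mu, sigma

mu, sigma = adaptive_is()
```

A few rounds suffice here for the proposal to lock onto the target's location and scale, which is the sense in which the adaptation "optimises" the importance sampler.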

Adaptive Importance Sampling

Parametric adaptive importance sampling algorithms that adapt the IS density to the system of interest during the course of the simulation are discussed and are shown to converge to the optimum improved importance sampling density for the special case of a linear system with Gaussian noise.

Monte Carlo Strategies in Scientific Computing

The strength of this book is in bringing together advanced Monte Carlo methods developed in many disciplines, including the Ising model, molecular structure simulation, bioinformatics, target tracking, hypothesis testing for astronomical observations, Bayesian inference of multilevel models, and missing-data problems.