Corpus ID: 88516884

Quasi Markov Chain Monte Carlo Methods

@article{Schwedes2018QuasiMC,
  title={Quasi Markov Chain Monte Carlo Methods},
  author={Tobias Schwedes and Ben Calderhead},
  journal={arXiv: Statistics Theory},
  year={2018}
}
Quasi-Monte Carlo (QMC) methods for estimating integrals are attractive since the resulting estimators typically converge at a faster rate than pseudo-random Monte Carlo. However, they can be difficult to set up on arbitrary posterior densities within the Bayesian framework, in particular for inverse problems. We introduce a general parallel Markov chain Monte Carlo (MCMC) framework, for which we prove a law of large numbers and a central limit theorem. In that context, non-reversible…
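As a rough illustration of the faster convergence rate that motivates the paper (a toy sketch, not taken from the paper itself), the following compares a pseudo-random Monte Carlo estimate of a one-dimensional integral with an estimate driven by a van der Corput low-discrepancy sequence; the integrand, sample size, and helper function are illustrative choices.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    points = np.empty(n)
    for i in range(n):
        x, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            denom *= base
            k, digit = divmod(k, base)
            x += digit / denom
        points[i] = x
    return points

f = lambda u: np.exp(u)        # toy integrand; exact integral over [0, 1] is e - 1
exact = np.e - 1.0
n = 4096

rng = np.random.default_rng(0)
mc_error = abs(f(rng.random(n)).mean() - exact)         # pseudo-random Monte Carlo
qmc_error = abs(f(van_der_corput(n)).mean() - exact)    # quasi-Monte Carlo
print(f"MC error: {mc_error:.2e}   QMC error: {qmc_error:.2e}")
```

For smooth integrands like this one, the QMC error typically decays close to O(n^-1) rather than the O(n^-1/2) Monte Carlo rate, which is the gap the paper aims to exploit within MCMC.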

Citations

Markov chain Monte Carlo importance samplers for Bayesian models with intractable likelihoods
Convergence of the estimator is verified to hold under regularity assumptions which do not require that the diffusion can be simulated exactly, and an adaptive MCMC approach to deal with the selection of a suitably large tolerance is suggested.
Conditional sequential Monte Carlo in high dimensions
The iterated conditional sequential Monte Carlo (i-CSMC) algorithm from Andrieu, Doucet and Holenstein (2010) is an MCMC approach for efficiently sampling from the joint posterior distribution of the …
MetFlow: A New Efficient Method for Bridging the Gap between Markov Chain Monte Carlo and Variational Inference
A new computationally efficient method to combine Variational Inference (VI) with Markov Chain Monte Carlo (MCMC) is proposed, which is amenable to the reparametrization trick and does not rely on computationally expensive reverse kernels.
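For context on the reparametrization trick mentioned in the MetFlow summary (a standard device, sketched here under the assumption of a Gaussian variational family, not MetFlow's actual implementation): a sample is written as a deterministic transform of parameter-free noise, so gradients with respect to the variational parameters can be taken through the sampling step.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, log_sigma = 0.5, -1.0                 # variational parameters (illustrative values)

# Reparametrization: z = mu + sigma * eps with eps ~ N(0, 1), so z is a
# differentiable function of (mu, log_sigma) for a fixed draw of the noise.
eps = rng.standard_normal(100_000)
z = mu + np.exp(log_sigma) * eps

# Toy objective E[z^2]: the pathwise gradient w.r.t. mu is E[d(z^2)/dmu] = E[2 z],
# estimated here by averaging over the sampled noise.
grad_mu = (2.0 * z).mean()
print(grad_mu)                            # close to the exact value 2 * mu = 1.0
```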

References

SHOWING 1-10 OF 101 REFERENCES
A Randomized Quasi-Monte Carlo Simulation Method for Markov Chains
A randomized quasi-Monte Carlo method is introduced for the simulation of Markov chains up to a random (and possibly unbounded) stopping time, and bounds on the convergence rate of the worst-case error and of the variance are proved for special situations where the state space of the chain is a subset of the real numbers.
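A minimal sketch of the randomization idea underlying randomized QMC (a Cranley-Patterson random shift of a fixed low-discrepancy point set), assuming a toy one-dimensional integrand; this is generic background rather than the cited paper's construction for chains with stopping times.

```python
import numpy as np

def kronecker_points(n, alpha=(np.sqrt(5.0) - 1.0) / 2.0):
    """A simple one-dimensional low-discrepancy (Kronecker / golden-ratio) point set."""
    return (np.arange(1, n + 1) * alpha) % 1.0

def rqmc_estimates(f, n, replicates, rng):
    """Cranley-Patterson randomization: shift the same QMC point set by independent
    uniform offsets modulo 1; each shifted set yields an unbiased estimate, and the
    spread across replicates gives an error estimate."""
    base = kronecker_points(n)
    return np.array([f((base + s) % 1.0).mean() for s in rng.random(replicates)])

f = lambda u: np.exp(u)                               # toy integrand; exact value e - 1
rng = np.random.default_rng(0)
estimates = rqmc_estimates(f, n=1024, replicates=25, rng=rng)
print(estimates.mean(), estimates.std(ddof=1) / np.sqrt(len(estimates)))
```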
A general construction for parallelizing Metropolis-Hastings algorithms
  • B. Calderhead
  • Computer Science, Medicine
  • Proceedings of the National Academy of Sciences
  • 2014
This paper proposes a natural generalization of the Metropolis-Hastings algorithm that allows for parallelizing a single chain using existing MCMC methods, and shows how it allows for a principled way of using every integration step within Hamiltonian Monte Carlo methods.
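A minimal sketch of one simple multiple-proposal variant in the spirit of this construction (not necessarily the paper's exact algorithm): at each iteration, N candidates are drawn from the current state, index weights proportional to pi(y_i) * prod_{j != i} K(y_i, y_j) are computed over the pool of current-plus-proposed points, and N indices are drawn from those weights, so several samples are produced per iteration. The Gaussian random-walk kernel, target, and tuning values are illustrative.

```python
import numpy as np

def parallel_mh(log_target, x0, n_iters, n_props, step, rng):
    """One simple multiple-proposal variant: draw N candidates from the current
    state, weight every point y_i in the pool (current state plus candidates) by
    pi(y_i) * prod_{j != i} K(y_i, y_j), then draw N indices from those weights.
    Every drawn point is kept as a (correlated) sample."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x.size
    samples = []
    for _ in range(n_iters):
        pool = np.vstack([x, x + step * rng.standard_normal((n_props, d))])
        # log K(y_i, y_j) for a Gaussian random-walk kernel, up to constants;
        # the j = i term is the same constant for all i and cancels on normalization.
        sq_dists = ((pool[:, None, :] - pool[None, :, :]) ** 2).sum(axis=-1)
        log_w = np.array([log_target(y) for y in pool]) - 0.5 * sq_dists.sum(axis=1) / step**2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(len(pool), size=n_props, p=w)   # N draws from the index weights
        samples.append(pool[idx])
        x = pool[idx[-1]]                                # continue the chain from the last draw
    return np.concatenate(samples)

log_target = lambda y: -0.5 * float(y @ y)               # standard normal target (illustrative)
rng = np.random.default_rng(0)
draws = parallel_mh(log_target, x0=[0.0], n_iters=2000, n_props=4, step=1.5, rng=rng)
print(draws.mean(), draws.var())                         # should be close to 0 and 1
```

The N target-density evaluations per iteration are independent of one another, which is the source of the parallelism in this family of methods.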
Consistency of Markov chain quasi-Monte Carlo on continuous state spaces
The random numbers driving Markov chain Monte Carlo (MCMC) simulation are usually modeled as independent U(0, 1) random variables. Tribble [Markov chain Monte Carlo algorithms using completely …
Monte Carlo Sampling Methods Using Markov Chains and Their Applications
A generalization of the sampling method introduced by Metropolis et al. (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and …
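For reference, a textbook random-walk Metropolis sampler of the kind this paper generalizes (the target and step size below are arbitrary illustrations); with a symmetric proposal the Hastings ratio of proposal densities cancels.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_iters, step, rng):
    """Textbook random-walk Metropolis: propose x' = x + step * noise and accept
    with probability min(1, pi(x') / pi(x)); the proposal is symmetric, so the
    Hastings correction cancels."""
    x = float(x0)
    log_p = log_target(x)
    chain = np.empty(n_iters)
    for t in range(n_iters):
        prop = x + step * rng.standard_normal()
        log_p_prop = log_target(prop)
        if np.log(rng.random()) < log_p_prop - log_p:
            x, log_p = prop, log_p_prop
        chain[t] = x
    return chain

log_target = lambda x: -0.5 * x**2           # standard normal target (illustrative)
rng = np.random.default_rng(0)
chain = random_walk_metropolis(log_target, x0=0.0, n_iters=5000, step=2.4, rng=rng)
print(chain.mean(), chain.var())             # should be roughly 0 and 1
```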
Riemann manifold Langevin and Hamiltonian Monte Carlo methods
The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when …
Multidimensional variation for quasi-Monte Carlo
This paper collects together some properties of multidimensional definitions of the total variation of a real valued function. The subject has been studied for a long time. Many of the results …
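The reason this notion of variation matters for quasi-Monte Carlo is the classical Koksma-Hlawka inequality (standard background, not a result specific to the cited paper), which bounds the integration error by the Hardy-Krause variation of the integrand times the star discrepancy of the point set:

```latex
\left| \frac{1}{n} \sum_{i=1}^{n} f(x_i) - \int_{[0,1]^d} f(u) \,\mathrm{d}u \right|
  \;\le\; V_{\mathrm{HK}}(f) \, D_n^{*}(x_1, \dots, x_n).
```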
Randomized Quasi-Monte Carlo Simulation of Markov Chains with an Ordered State Space
We study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain with totally ordered (discrete or continuous) state space. The number of steps in …
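A rough sketch of the sort-and-propagate idea behind this line of work (assumptions: a one-dimensional chain, a Kronecker point set with a fresh random shift at every step, and an AR(1) toy model; this is not the cited paper's exact estimator): N copies of the chain are sorted by their current state, and copy i is advanced using point i of the randomized point set.

```python
import numpy as np
from scipy.stats import norm

def array_rqmc_mean(phi, step_fn, x0, n_chains, n_steps, rng):
    """Sort-and-propagate sketch: keep n_chains copies of a 1-D Markov chain, sort
    them by state at every step, and drive copy i with the i-th point of a randomly
    shifted low-discrepancy point set."""
    alpha = (np.sqrt(5.0) - 1.0) / 2.0
    base = (np.arange(1, n_chains + 1) * alpha) % 1.0      # Kronecker point set
    states = np.full(n_chains, float(x0))
    for _ in range(n_steps):
        states = np.sort(states)                           # order the copies by state
        u = np.clip((base + rng.random()) % 1.0, 1e-12, 1 - 1e-12)   # random shift
        states = step_fn(states, u)
    return phi(states).mean()

# Illustrative chain: AR(1) update x' = 0.9 x + Phi^{-1}(u), driven by a uniform u.
step_fn = lambda x, u: 0.9 * x + norm.ppf(u)
rng = np.random.default_rng(0)
print(array_rqmc_mean(np.square, step_fn, x0=0.0, n_chains=512, n_steps=50, rng=rng))
# The stationary second moment of this AR(1) chain is 1 / (1 - 0.81), roughly 5.26.
```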
New Inputs and Methods for Markov Chain Quasi-Monte Carlo
Some new constructions of points, fully equidistributed LFSRs, are presented, which are small enough that the entire point set can be used in a Monte Carlo calculation.
When Are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integrals?
It is proved that the minimal worst-case error of quasi-Monte Carlo algorithms does not depend on the dimension d iff the sum of the weights is finite, and that the minimal number of function values in the worst-case setting needed to reduce the initial error by ε is bounded by Cε^{-p}, where the exponent p ∈ [1, 2] and C depends exponentially on the sum of the weights.
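Rendered as a formula (my transcription of the claim summarized above, writing γ_j for the weights and n(ε, d) for the minimal number of function values needed to reduce the initial error by a factor ε):

```latex
\sum_{j \ge 1} \gamma_j < \infty
  \quad\Longrightarrow\quad
  n(\varepsilon, d) \le C \, \varepsilon^{-p}, \qquad p \in [1, 2].
```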
Pseudorandom numbers for modelling Markov chains
The calculation of space averages in problems of statistical physics can be reduced to the Monte Carlo simulation of the mathematical expectations of the corresponding quantities based on …