Hybrid Monte Carlo on Hilbert spaces

@article{Beskos2011HybridMC,
  title={Hybrid Monte Carlo on Hilbert spaces},
  author={Alexandros Beskos and F. J. Pinski and Jes{\'u}s Mar{\'i}a Sanz-Serna and Andrew M. Stuart},
  journal={Stochastic Processes and their Applications},
  year={2011},
  volume={121},
  pages={2201-2230}
}

A Novel Hybrid Monte Carlo Algorithm for Sampling Path Space

TLDR
The new algorithm combines the definition of the mass operator, the equations for the Hamiltonian flow, the (approximate) numerical integration of the evolution equations, and the Metropolis–Hastings acceptance rule; together these ingredients constitute a robust method for sampling the target distribution in an almost dimension-free manner.
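
A minimal Python sketch of how these ingredients fit together in a finite-dimensional discretisation (all names are illustrative, and the finite-N energy below is only a stand-in for the paper's well-posed infinite-dimensional expression). The target is assumed to have density exp(-phi(q)) with respect to a Gaussian reference N(0, C): velocities are drawn from N(0, C), the Gaussian part of the Hamiltonian flow is integrated exactly as a rotation, and phi contributes preconditioned half kicks before the Metropolis-Hastings accept/reject.

import numpy as np

def hilbert_hmc_step(q, phi, grad_phi, C, h, L, rng):
    # Sketch of one function-space HMC step on a discretisation in R^N.
    # Target: density exp(-phi(q)) relative to the Gaussian reference N(0, C).
    N = len(q)
    v = np.linalg.cholesky(C) @ rng.standard_normal(N)  # velocity ~ N(0, C)
    Cinv = np.linalg.inv(C)  # illustrative; use linear solves at scale

    def energy(x, w):
        # finite-N Hamiltonian; making this well defined as N -> infinity
        # is precisely what the function-space construction addresses
        return phi(x) + 0.5 * x @ Cinv @ x + 0.5 * w @ Cinv @ w

    h0 = energy(q, v)
    qn, vn = q.copy(), v.copy()
    c, s = np.cos(h), np.sin(h)
    for _ in range(L):
        vn = vn - 0.5 * h * (C @ grad_phi(qn))      # half kick from phi
        qn, vn = c * qn + s * vn, -s * qn + c * vn  # exact rotation: Gaussian flow
        vn = vn - 0.5 * h * (C @ grad_phi(qn))      # half kick
    # Metropolis-Hastings acceptance rule
    return qn if np.log(rng.uniform()) < h0 - energy(qn, vn) else q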

A Function Space HMC Algorithm With Second Order Langevin Diffusion Limit

TLDR
The main result of this paper states that the new algorithm, appropriately rescaled, converges weakly to a second-order Langevin diffusion on Hilbert space; as a consequence, the algorithm explores the approximate target measures on R^N in a number of steps which is independent of N.

Mixing rates for Hamiltonian Monte Carlo algorithms in finite and infinite dimensions

We establish the geometric ergodicity of the preconditioned Hamiltonian Monte Carlo (HMC) algorithm defined on an infinite-dimensional Hilbert space, as developed in [Beskos et al., Stochastic Processes and their Applications 121 (2011) 2201-2230].

Optimal tuning of the hybrid Monte Carlo algorithm

TLDR
It is proved that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to infinity, the leapfrog step-size h should be scaled as h = l × d^{-1/4}, which means that in high dimensions HMC requires O(d^{1/4}) steps to traverse the state space.
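
Read as a tuning recipe, the result fixes the step size up to a constant l; a minimal helper under that reading (the function name is hypothetical):

def leapfrog_step_size(d, ell=1.0):
    # h = ell * d**(-1/4): keeps the acceptance probability O(1) as the
    # dimension d grows, so traversing the state space then takes
    # O(d**(1/4)) leapfrog steps; the same analysis suggests tuning ell
    # so that the observed acceptance rate is roughly 0.65
    return ell * d ** -0.25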

Advanced Markov Chain Monte Carlo methods for sampling on diffusion pathspace

The need to calibrate increasingly complex statistical models requires a persistent effort for further advances on available, computationally intensive Monte Carlo methods. We study here an advanced ...

Optimal Scaling and Diffusion Limits for the Langevin Algorithm in High Dimensions

TLDR
It is proved that, started in stationarity, a suitably interpolated and scaled version of the Markov chain corresponding to MALA converges to an infinite-dimensional diffusion process.

Advanced MCMC methods for sampling on diffusion pathspace

Spectral gaps for a Metropolis–Hastings algorithm in infinite dimensions

We study the problem of sampling high and infinite dimensional target measures arising in applications such as conditioned diffusions and inverse problems. We focus on those that arise from ...

Entropy-based adaptive Hamiltonian Monte Carlo

TLDR
A gradient-based algorithm is developed that adapts the mass matrix of Hamiltonian Monte Carlo, encouraging the leapfrog integrator to achieve high acceptance rates while exploring all dimensions jointly.

Quantum-Inspired Hamiltonian Monte Carlo for Bayesian Sampling

TLDR
The proposed Quantum-Inspired Hamiltonian Monte Carlo algorithm (QHMC) allows a particle to have a random mass matrix with a probability distribution rather than a fixed mass, and proves the convergence property of QHMC and shows why such a random mass can improve the performance when the authors sample a broad class of distributions.
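
A hedged sketch of the random-mass idea (the log-normal choice and all parameter names are illustrative): resample a diagonal mass matrix at every iteration and run an otherwise standard leapfrog step with it.

import numpy as np

def qhmc_step(q, u, grad_u, h, L, rng, mu=0.0, sigma=1.0):
    # One HMC step with a randomly resampled diagonal mass (QHMC-style sketch);
    # mu and sigma parametrise an assumed log-normal mass distribution.
    m = np.exp(mu + sigma * rng.standard_normal(q.shape))  # fresh random mass
    p = np.sqrt(m) * rng.standard_normal(q.shape)          # momentum ~ N(0, diag(m))
    h0 = u(q) + 0.5 * np.sum(p * p / m)
    qn, pn = q.copy(), p.copy()
    for _ in range(L):                                     # standard leapfrog
        pn -= 0.5 * h * grad_u(qn)
        qn += h * pn / m
        pn -= 0.5 * h * grad_u(qn)
    h1 = u(qn) + 0.5 * np.sum(pn * pn / m)
    return qn if np.log(rng.uniform()) < h0 - h1 else q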
...

References

Showing 1-10 of 39 references.

Optimal tuning of the hybrid Monte Carlo algorithm

TLDR
It is proved that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to infinity, the leapfrog step-size h should be scaled as h = l × d^{-1/4}, which means that in high dimensions HMC requires O(d^{1/4}) steps to traverse the state space.

Riemann Manifold Langevin and Hamiltonian Monte Carlo

This paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when ...

Riemann manifold Langevin and Hamiltonian Monte Carlo methods

TLDR
The methodology proposed automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density, and substantial improvements in the time‐normalized effective sample size are reported when compared with alternative sampling approaches.

MCMC Using Hamiltonian Dynamics

Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of ...
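
The proposal mechanism behind this is the leapfrog (velocity Verlet) integrator; a minimal unit-mass sketch of the distant, time-reversible, volume-preserving proposal it produces:

import numpy as np

def leapfrog(q, p, grad_u, h, L):
    # Integrate Hamiltonian dynamics for L leapfrog steps of size h,
    # producing a proposal (q, p) -> (q', -p') for Metropolis accept/reject.
    q, p = q.copy(), p.copy()
    p -= 0.5 * h * grad_u(q)       # initial half step for momentum
    for _ in range(L - 1):
        q += h * p                 # full position step (unit mass assumed)
        p -= h * grad_u(q)         # full momentum step
    q += h * p
    p -= 0.5 * h * grad_u(q)       # final half step
    return q, -p                   # momentum flip makes the proposal reversible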

MCMC methods for diffusion bridges

TLDR
The novel algorithmic idea of the paper is that proposed moves for the MCMC algorithm are determined by discretising the SPDEs in the time direction using an implicit scheme, parametrised by θ ∈ [0,1].
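
For θ = 1/2 this implicit discretisation yields the preconditioned Crank-Nicolson (pCN) proposal, which preserves the Gaussian reference measure exactly, so the acceptance ratio involves only the change of measure. A sketch, assuming a target with density exp(-phi(q)) relative to N(0, C):

import numpy as np

def pcn_step(q, phi, C_chol, beta, rng):
    # One pCN step (the theta = 1/2 case); C_chol is a Cholesky factor of C.
    # The proposal preserves N(0, C), so the accept ratio involves only phi.
    xi = C_chol @ rng.standard_normal(len(q))
    q_new = np.sqrt(1.0 - beta ** 2) * q + beta * xi
    return q_new if np.log(rng.uniform()) < phi(q) - phi(q_new) else q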

Optimal scalings for local Metropolis--Hastings chains on nonproduct targets in high dimensions

We investigate local MCMC algorithms, namely the random-walk Metropolis and the Langevin algorithms, and identify the optimal choice of the local step-size as a function of the dimension $n$ of the ...

Weak convergence and optimal scaling of random walk Metropolis algorithms

This paper considers the problem of scaling the proposal distribution of a multidimensional random walk Metropolis algorithm in order to maximize the efficiency of the algorithm. The main result is a ...
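
This analysis gives the random walk Metropolis baseline against which MALA and HMC are compared: proposal standard deviation of order d^{-1/2}, with an asymptotically optimal mean acceptance rate of about 0.234 for product targets. A minimal sketch (names illustrative):

import numpy as np

def rwm_step(q, log_pi, ell, rng):
    # Random walk Metropolis with the classic d**(-1/2) proposal scaling;
    # tune ell so the observed acceptance rate is roughly 0.234.
    d = len(q)
    prop = q + (ell / np.sqrt(d)) * rng.standard_normal(d)
    return prop if np.log(rng.uniform()) < log_pi(prop) - log_pi(q) else q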

Handbook of Markov Chain Monte Carlo

TLDR
A Markov chain Monte Carlo based analysis of a multilevel model for functional MRI data is presented, along with applications in environmental epidemiology, educational research, and fisheries science.

Optimal scaling of discrete approximations to Langevin diffusions

TLDR
An asymptotic diffusion limit theorem is proved and it is shown that, as a function of dimension n, the complexity of the algorithm is O(n^{1/3}), which compares favourably with the O(n) complexity of random walk Metropolis algorithms.
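
A sketch of one MALA step under this scaling, with step size h of order n^{-1/3} (all names are illustrative; the corresponding optimal asymptotic acceptance rate found in this line of work is about 0.574):

import numpy as np

def mala_step(q, log_pi, grad_log_pi, h, rng):
    # Metropolis-adjusted Langevin: drift half a step along the gradient,
    # add Gaussian noise, then correct with a Metropolis-Hastings test.
    def log_q(to, frm):
        # log density (up to a constant) of N(frm + (h/2) grad, h I)
        mu = frm + 0.5 * h * grad_log_pi(frm)
        return -np.sum((to - mu) ** 2) / (2.0 * h)
    prop = q + 0.5 * h * grad_log_pi(q) + np.sqrt(h) * rng.standard_normal(len(q))
    log_alpha = log_pi(prop) - log_pi(q) + log_q(q, prop) - log_q(prop, q)
    return prop if np.log(rng.uniform()) < log_alpha else q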