Corpus ID: 237346904

An Introduction to Hamiltonian Monte Carlo Method for Sampling

@article{Vishnoi2021AnIT,
  title={An Introduction to Hamiltonian Monte Carlo Method for Sampling},
  author={Nisheeth K. Vishnoi},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.12107}
}
The goal of this article is to introduce the Hamiltonian Monte Carlo method – a Hamiltonian-dynamics-inspired algorithm for sampling from a Gibbs density π(x) ∝ e^{−f(x)}. We focus on the “idealized” case, where one can compute continuous trajectories exactly. We show that idealized HMC preserves π and we establish its convergence when f is strongly convex and smooth.
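
As a companion to the abstract, here is a minimal sketch of an HMC transition targeting π(x) ∝ e^{−f(x)}. Since exact continuous trajectories are rarely available outside the idealized setting the article studies, the sketch uses the standard leapfrog discretization with a Metropolis correction; the function names, parameter values, and the Gaussian example are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(x, f, grad_f, step_size, n_leapfrog):
    """One HMC transition targeting pi(x) proportional to exp(-f(x)).

    Idealized HMC integrates the Hamiltonian dynamics exactly; this sketch
    approximates the trajectory with the leapfrog integrator and adds a
    Metropolis correction to account for the discretization error.
    """
    v = rng.standard_normal(x.shape)              # resample velocity ~ N(0, I)
    x_new, v_new = x.copy(), v.copy()

    # Leapfrog integration of dx/dt = v, dv/dt = -grad f(x).
    v_new -= 0.5 * step_size * grad_f(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * v_new
        v_new -= step_size * grad_f(x_new)
    x_new += step_size * v_new
    v_new -= 0.5 * step_size * grad_f(x_new)

    # Accept/reject on H(x, v) = f(x) + |v|^2 / 2; the exact flow conserves
    # H, so idealized HMC would accept with probability 1.
    h_old = f(x) + 0.5 * (v @ v)
    h_new = f(x_new) + 0.5 * (v_new @ v_new)
    return x_new if rng.random() < np.exp(min(0.0, h_old - h_new)) else x

# Usage: f(x) = x' A x / 2 is strongly convex and smooth, matching the
# article's assumptions; the chain then samples a 2-D Gaussian.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x

x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, f, grad_f, step_size=0.2, n_leapfrog=10)
    samples.append(x)
```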

Citations

On the Dissipation of Ideal Hamiltonian Monte Carlo Sampler

We report on what seems to be an intriguing connection between variable integration time and partial velocity refreshment of ideal Hamiltonian Monte Carlo samplers.

Hamiltonian Monte Carlo for efficient Gaussian sampling: long and random steps

It is shown that HMC can sample from a distribution that is ε-close in total variation distance using Õ(√κ d^{1/4} log(1/ε)) gradient queries, where κ is the condition number of Σ.

Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time

This work proposes a scheme of time-varying integration time based on the roots of Chebyshev polynomials for Hamiltonian Monte Carlo (HMC) and shows that, in the case of a quadratic potential f, ideal HMC with this choice of integration time takes only O(√κ log(1/ε)) iterations to reach Wasserstein-2 distance less than ε.
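
To see where the Chebyshev roots enter: for a quadratic potential with Hessian eigenvalue λ, ideal HMC run for time t rescales the corresponding error mode by cos(√λ·t), which vanishes when √λ·t = π/2. The sketch below places these zeros at the Chebyshev nodes of the eigenvalue interval [m, L], in the spirit of Chebyshev acceleration for linear solvers; it is a plausible reconstruction under these assumptions, not necessarily the paper's exact schedule.

```python
import numpy as np

def chebyshev_integration_times(m, L, n):
    """Hypothetical schedule of n integration times t_k whose per-mode
    contraction factors cos(sqrt(lam) * t_k) vanish at the Chebyshev
    nodes of [m, L] (m, L: extreme Hessian eigenvalues, kappa = L / m).
    """
    k = np.arange(1, n + 1)
    # Roots of the degree-n Chebyshev polynomial, mapped from [-1, 1] to [m, L].
    nodes = 0.5 * (m + L) + 0.5 * (L - m) * np.cos((2 * k - 1) * np.pi / (2 * n))
    # cos(sqrt(lam) * t) has its first zero at t = (pi / 2) / sqrt(lam).
    return (np.pi / 2) / np.sqrt(nodes)
```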

A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling

A deterministic particle method for the weighted porous medium equation is developed, its convergence on bounded time intervals is proved, and conditions on the target function and data distribution under which convexity of the energy landscape emerges in the continuum limit are identified.

Sampling from Log-Concave Distributions with Infinity-Distance Guarantees and Applications to Differentially Private Optimization

The approach departs from prior works that construct Markov chains on a 1/ε²-discretization of K to achieve a sample with ε infinity-distance error, and presents a method to directly convert continuous samples from K with total-variation bounds to samples with infinity-distance bounds.

Bayesian Inference with Latent Hamiltonian Neural Networks

This work proposes Hamiltonian neural networks (HNNs) with HMC and NUTS for solving Bayesian inference problems, and proposes an HNN extension called latent HNNs (L-HNNs), which is capable of predicting latent variable outputs.

References

Showing 1-10 of 64 references

Coupling and convergence for Hamiltonian Monte Carlo

Based on a new coupling approach, we prove that the transition step of the Hamiltonian Monte Carlo algorithm is contractive w.r.t. a carefully designed Kantorovich (L1 Wasserstein) distance.

On the convergence of Hamiltonian Monte Carlo

This paper discusses the irreducibility and geometric ergodicity of the Hamiltonian Monte Carlo (HMC) algorithm and provides verifiable conditions on the potential under which the HMC sampler is geometrically ergodic.

Positive Curvature and Hamiltonian Monte Carlo

It is shown that positive curvature can be used to prove theoretical concentration results for HMC Markov chains, and that the relevant curvature is positive in cases such as sampling from a high-dimensional multivariate Gaussian.

On the geometric ergodicity of Hamiltonian Monte Carlo

We establish general conditions under which Markov chains produced by the Hamiltonian Monte Carlo method will and will not be geometrically ergodic.

Stochastic Gradient Hamiltonian Monte Carlo

A variant is introduced that uses second-order Langevin dynamics with a friction term that counteracts the effects of the noisy gradient, maintaining the desired target distribution as the invariant distribution.
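
To make the role of the friction term concrete, here is one common discretization of the SGHMC update; the names and the simplification of setting the gradient-noise estimate to zero are assumptions for illustration, not the paper's exact pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)

def sghmc_step(x, v, stoch_grad_f, eps, friction):
    """One SGHMC update (sketch, gradient-noise estimate taken as 0).

    The friction term -friction * eps * v dissipates the extra energy that
    the noisy gradient injects, while the added noise with covariance
    2 * friction * eps * I keeps pi(x) proportional to exp(-f(x)) invariant
    in the noise-free limit.
    """
    noise = np.sqrt(2.0 * friction * eps) * rng.standard_normal(v.shape)
    v = v - eps * stoch_grad_f(x) - friction * eps * v + noise
    x = x + eps * v
    return x, v
```

Here stoch_grad_f stands for a minibatch estimate of ∇f, which is the source of the gradient noise the friction must counteract.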

Hamiltonian Monte Carlo for Hierarchical Models

This paper explores the use of Hamiltonian Monte Carlo for hierarchical models and demonstrates how the algorithm can overcome the pathologies that arise in such models in practical applications.

Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation

We give the first rigorous proof of the convergence of Riemannian Hamiltonian Monte Carlo, a general (and practical) method for sampling Gibbs distributions.

Randomized Hamiltonian Monte Carlo

Tuning the durations of the Hamiltonian flow in Hamiltonian Monte Carlo (HMC, also called Hybrid Monte Carlo) involves a tradeoff between computational cost and sampling quality.

Optimal scaling of discrete approximations to Langevin diffusions

An asymptotic diffusion limit theorem is proved, and it is shown that, as a function of dimension n, the complexity of the algorithm is O(n^{1/3}), which compares favourably with the O(n) complexity of random-walk Metropolis algorithms.
...