Corpus ID: 203610262

An Efficient Sampling Algorithm for Non-smooth Composite Potentials

@article{Mou2019AnES,
  title={An Efficient Sampling Algorithm for Non-smooth Composite Potentials},
  author={Wenlong Mou and Nicolas Flammarion and Martin J. Wainwright and Peter L. Bartlett},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.00551}
}
We consider the problem of sampling from a density of the form $p(x) \propto \exp(-f(x) - g(x))$, where $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a smooth and strongly convex function and $g: \mathbb{R}^d \rightarrow \mathbb{R}$ is a convex and Lipschitz function. We propose a new algorithm based on the Metropolis-Hastings framework, and prove that it mixes to within TV distance $\varepsilon$ of the target density in at most $O(d \log (d/\varepsilon))$ iterations. This guarantee extends…
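
The setup above is easy to prototype. Below is a minimal random-walk Metropolis-Hastings sketch for a composite potential $f + g$; it illustrates the general framework, not the paper's algorithm (whose proposal is tailored to the non-smooth part $g$), and the choices of f, g, and step size are assumptions for the example.

```python
import numpy as np

def mh_composite(f, g, x0, n_iters=10_000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings for p(x) ∝ exp(-f(x) - g(x)).

    Illustrative sketch only: the paper uses a more refined proposal;
    here the proposal is a symmetric Gaussian random walk.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    pot = f(x) + g(x)                                # current potential U(x)
    samples = []
    for _ in range(n_iters):
        y = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        pot_y = f(y) + g(y)
        # Symmetric proposal => accept with prob min(1, exp(U(x) - U(y)))
        if np.log(rng.uniform()) < pot - pot_y:
            x, pot = y, pot_y
        samples.append(x.copy())
    return np.array(samples)

# Example: f smooth and strongly convex, g convex and Lipschitz (ℓ1 penalty)
f = lambda x: 0.5 * np.sum(x ** 2)
g = lambda x: 0.5 * np.sum(np.abs(x))
draws = mh_composite(f, g, x0=np.zeros(5))
```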

Composite Logconcave Sampling with a Restricted Gaussian Oracle

The algorithm vastly improves upon the hit-and-run algorithm for sampling the restriction of a (non-diagonal) Gaussian to the positive orthant and obtains stronger provable guarantees and greater generality than existing methods for composite sampling.

Structured Logconcave Sampling with a Restricted Gaussian Oracle

A reduction framework is developed, inspired by proximal point methods in convex optimization, which bootstraps samplers for regularized densities to improve dependences on problem conditioning and gives algorithms for sampling several structured logconcave families to high accuracy.

A Proximal Algorithm for Sampling from Non-smooth Potentials

A fast algorithm is proposed that realizes the restricted Gaussian oracle for any convex non-smooth potential with bounded Lipschitz constant, and establishes a polynomial-time complexity $\widetilde{O}(d\varepsilon^{-1})$ to obtain $\varepsilon$ total variation distance to the target density, better than all existing results under the same assumptions.
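
The restricted Gaussian oracle (RGO) asks for a sample from the density proportional to $\exp(-g(x) - \|x-y\|^2/(2\eta))$. Below is a minimal rejection-sampling sketch, under the assumption that $g \ge 0$ everywhere (true for, e.g., a scaled $\ell_1$ norm), so that $e^{-g(x)}$ is a valid acceptance probability; the cited paper's scheme is sharper, and the value of $\eta$ here is an illustrative choice.

```python
import numpy as np

def rgo_rejection(g, y, eta, rng=None, max_tries=100_000):
    """Sample x with density ∝ exp(-g(x) - ||x - y||^2 / (2 * eta)).

    Naive rejection sketch assuming g(x) >= 0 for all x, so that
    exp(-g(x)) ∈ (0, 1] works as an acceptance probability when
    proposing from N(y, eta * I).
    """
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    for _ in range(max_tries):
        x = y + np.sqrt(eta) * rng.standard_normal(y.shape)  # proposal N(y, ηI)
        if rng.uniform() < np.exp(-g(x)):                    # accept w.p. e^{-g(x)}
            return x
    raise RuntimeError("rejection sampling failed; try a smaller eta")

# Small eta (the regime the theory uses) keeps the acceptance rate high:
g = lambda x: np.sum(np.abs(x))
x = rgo_rejection(g, y=np.ones(4), eta=1e-3)
```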

A Proximal Algorithm for Sampling from Non-convex Potentials

This work develops a sampling algorithm for non-convex potentials that resembles proximal algorithms in optimization, and achieves better complexity than all existing methods for this challenging sampling task.

A Proximal Algorithm for Sampling

A novel technique is developed to bound the complexity of the algorithm's rejection sampling scheme despite the non-smoothness of the potentials, and the algorithm achieves state-of-the-art complexity bounds compared with all existing methods in the same settings.

On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness

It is shown that the same rate is achievable for a wider class of potentials that are degenerately convex at infinity and for any order of smoothness $\beta \in (0,1]$; consequently, the results apply to a wide class of non-convex potentials that are weakly smooth and exhibit at least linear tail growth.

Proximal Langevin Algorithm: Rapid Convergence Under Isoperimetry

A convergence guarantee is proved for PLA in Kullback-Leibler (KL) divergence when the target $\nu$ satisfies a log-Sobolev inequality (LSI) and $f$ has bounded second and third derivatives.

Penalized Langevin dynamics with vanishing penalty for smooth and log-concave targets

An upper bound is established on the Wasserstein-2 distance between the distribution of the PLD at time $t$ and the target, and a new nonasymptotic convergence guarantee is inferred for the penalized gradient flow applied to the corresponding optimization problem.
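
Penalized Langevin dynamics augments the potential with a vanishing ridge term $\lambda(t)\,\|x\|^2/2$, so the drift becomes $\nabla f(x) + \lambda(t)\,x$. A minimal Euler-discretized sketch follows; the schedule $\lambda_k = \lambda_0/(k+1)$ and the step size are illustrative assumptions, not the tuning from the cited paper.

```python
import numpy as np

def penalized_langevin(grad_f, x0, n_iters=5_000, step=1e-2, lam0=1.0, rng=None):
    """Euler discretization of penalized Langevin dynamics (sketch).

    The drift is grad_f(x) + lam_k * x, where the penalty lam_k vanishes
    as the iterations progress; lam_k = lam0 / (k + 1) is illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        lam_k = lam0 / (k + 1)                       # vanishing penalty
        drift = grad_f(x) + lam_k * x
        x = x - step * drift + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x
```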

When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint

By viewing the Langevin algorithm as composite optimization, a new analysis technique is developed that leads to dimension-independent convergence rates for either Lipschitz or smooth convex functions with normal priors.

Truncated Log-concave Sampling with Reflective Hamiltonian Monte Carlo

We introduce Reflective Hamiltonian Monte Carlo (ReHMC), an HMC-based algorithm, to sample from a log-concave distribution restricted to a convex polytope. We prove that, starting from a warm start, …
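
The distinguishing step in ReHMC is reflecting the trajectory off the boundary of the feasible region during integration. The sketch below shows the reflection for the special case of an axis-aligned box, where the wall normals are coordinate vectors; this is an illustrative assumption, and general polytopes require finding the first facet the ray crosses.

```python
import numpy as np

def reflect_in_box(q, p, lo, hi):
    """Reflect a position/momentum pair off the walls of the box [lo, hi]^d.

    Box special case: a crossing in coordinate i mirrors q[i] back inside
    and flips p[i] (the reflection p <- p - 2 <p, n> n with n = e_i).
    """
    q, p = q.copy(), p.copy()
    for _ in range(100):                       # bound the number of bounces
        below, above = q < lo, q > hi
        if not (below.any() or above.any()):
            break
        q[below] = 2 * lo[below] - q[below]    # mirror position back inside
        q[above] = 2 * hi[above] - q[above]
        p[below | above] *= -1.0               # reflect momentum off the wall
    return q, p

q = np.array([1.3, -0.2, 0.9])
p = np.array([0.5, -0.1, 0.4])
q, p = reflect_in_box(q, p, lo=np.full(3, -1.0), hi=np.full(3, 1.0))
```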

References

SHOWING 1-10 OF 88 REFERENCES

Structured Logconcave Sampling with a Restricted Gaussian Oracle

A reduction framework is developed, inspired by proximal point methods in convex optimization, which bootstraps samplers for regularized densities to improve dependences on problem conditioning and gives algorithms for sampling several structured logconcave families to high accuracy.

Convergence of Langevin MCMC in KL-divergence

By considering the Langevin diffusion as a gradient flow in the space of probability distributions, an elegant analysis is obtained that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.

Dimensionally Tight Bounds for Second-Order Hamiltonian Monte Carlo

This work supports the conjecture that Hamiltonian Monte Carlo can be run in a small number of gradient evaluations when sampling from strongly log-concave target distributions that satisfy a weak third-order regularity property associated with the input data, and suggests that leapfrog HMC performs better than its competitors when this condition is satisfied.
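
The "second-order" in the title refers to the leapfrog integrator, the standard symplectic scheme behind HMC; a textbook sketch for the Hamiltonian $H(q,p) = U(q) + \|p\|^2/2$ follows.

```python
import numpy as np

def leapfrog(grad_U, q, p, step, n_steps):
    """Leapfrog integration for H(q, p) = U(q) + ||p||^2 / 2.

    Half-step on momentum, alternating full steps on position and
    momentum, then a final momentum half-step; second-order accurate.
    """
    q, p = q.copy(), p.copy()
    p -= 0.5 * step * grad_U(q)       # initial momentum half-step
    for _ in range(n_steps - 1):
        q += step * p                 # full position step
        p -= step * grad_U(q)         # full momentum step
    q += step * p
    p -= 0.5 * step * grad_U(q)       # final momentum half-step
    return q, -p                      # negate momentum for reversibility
```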

The geometry of logconcave functions and sampling algorithms

These results are applied to analyze two efficient algorithms for sampling from a logconcave distribution in n dimensions, with no assumptions on the local smoothness of the density function.

Algorithmic Theory of ODEs and Sampling from Well-conditioned Logconcave Densities

This paper gives an improved contraction bound for the exact HMC process, together with logarithmic bounds on the degree of polynomials that approximate solutions of the differential equations arising in implementing HMC, yielding a nearly linear-time implementation of HMC for a broad class of smooth, strongly logconcave densities.

Proximal Markov chain Monte Carlo algorithms

This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions widely used in modern high-dimensional statistics and data analysis.
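
Proximal MCMC handles the non-smooth part of the potential by replacing it with its Moreau-Yosida envelope $g_\lambda$, which is differentiable with $\nabla g_\lambda(x) = (x - \mathrm{prox}_{\lambda g}(x))/\lambda$. A minimal sketch for $g = \|\cdot\|_1$, whose proximal map is soft-thresholding, is below; the smoothing parameter $\lambda$ is left as an illustrative choice.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_moreau_env_l1(x, lam):
    """Gradient of the Moreau-Yosida envelope of g = ||.||_1:
    grad g_lam(x) = (x - prox_{lam*g}(x)) / lam, which is (1/lam)-Lipschitz."""
    return (x - soft_threshold(x, lam)) / lam

# The smoothed potential f + g_lam can then be used inside standard
# MALA/ULA updates in place of the non-differentiable original.
```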

Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm

For both constant and decreasing step sizes in the Euler discretization, non-asymptotic bounds for the convergence to the target distribution $\pi$ in total variation distance are obtained.
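
For reference, the unadjusted Langevin algorithm analyzed here is the one-line Euler-Maruyama discretization of the Langevin diffusion, with no Metropolis correction; the constant step-size sketch below assumes access to $\nabla U$ for the potential $U = -\log \pi$.

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Algorithm: Euler-Maruyama discretization of
    dX_t = -grad U(X_t) dt + sqrt(2) dB_t, with no accept/reject step."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x
```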

A Hamiltonian Monte Carlo Method for Non-Smooth Energy Sampling

This paper addresses the problem of using Hamiltonian dynamics to sample from probability distributions having non-differentiable energy functions, such as those based on the $\ell_1$ norm, and shows its ability to accurately sample according to various multivariate target distributions.

Adaptive Simulated Annealing: A Near-optimal Connection between Sampling and Counting

It is proved that any non-adaptive cooling schedule has length at least $O^*(\ln A)$, and an algorithm is presented that finds an adaptive schedule of length $O^*(\sqrt{\ln A})$, together with a nearly matching lower bound.

Rates of convergence of the Hastings and Metropolis algorithms

Recent results in Markov chain theory are applied to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and it is shown that geometric convergence essentially occurs if and only if $\pi$ has geometric tails.
...