Corpus ID: 246240253

The Forward-Backward Envelope for Sampling with the Overdamped Langevin Algorithm

Armin Eftekhari, Luisa Fernanda Vargas, Konstantinos C. Zygalakis
In this paper, we analyse a proximal method based on the idea of forward-backward splitting for sampling from distributions whose densities are not necessarily smooth. In particular, we study the nonasymptotic properties of the Euler-Maruyama discretization of the Langevin equation, where the forward-backward envelope is used to deal with the non-smooth part of the dynamics. An advantage of this envelope, compared to the widely used Moreau-Yosida envelope and the MYULA algorithm, is that it…
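The splitting idea described in the abstract can be sketched as a single Euler-Maruyama step in which the non-smooth part of the potential enters through its proximal operator. This is a minimal, MYULA-style illustration rather than the paper's exact forward-backward envelope scheme; the ℓ1 non-smooth term, the Gaussian smooth part, and the step size are all illustrative assumptions.

```python
import numpy as np

def prox_l1(x, gamma):
    # Proximal operator of gamma * ||x||_1 (soft-thresholding),
    # standing in for the non-smooth part g of the potential.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def grad_smooth(x):
    # Gradient of the smooth part f(x) = ||x||^2 / 2 (illustrative choice).
    return x

def prox_langevin_step(x, step, rng):
    # One Euler-Maruyama step of the overdamped Langevin dynamics where
    # the non-smooth term is replaced by the gradient of its smoothed
    # (Moreau-Yosida-type) surrogate, (x - prox(x)) / step. Schematic only.
    drift = -grad_smooth(x) - (x - prox_l1(x, step)) / step
    return x + step * drift + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):
    x = prox_langevin_step(x, 0.01, rng)
```

The proximal map makes the update well defined even where the potential is non-differentiable, which is the setting the paper targets.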

Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm
For both constant and decreasing step sizes in the Euler discretization, non-asymptotic bounds for the convergence to the target distribution $\pi$ in total variation distance are obtained.
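The constant-step-size Euler discretization analysed above is the Unadjusted Langevin Algorithm (ULA). A minimal sketch, assuming a standard Gaussian target (so that the gradient of the log-density is simply -x):

```python
import numpy as np

def ula(grad_log_pi, x0, step, n_steps, rng):
    # Unadjusted Langevin Algorithm: Euler discretization of the
    # overdamped Langevin SDE dX_t = grad log pi(X_t) dt + sqrt(2) dW_t
    # with a constant step size.
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Standard Gaussian target: grad log pi(x) = -x (illustrative assumption).
rng = np.random.default_rng(1)
s = ula(lambda x: -x, np.zeros(2), 0.05, 5000, rng)
```

Because the chain is unadjusted (no Metropolis correction), it is biased at any fixed step size, which is exactly why the non-asymptotic bounds distinguish constant from decreasing step sizes.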
Sampling from Non-smooth Distributions Through Langevin Diffusion
This paper proposes proximal splitting-type algorithms for sampling from distributions whose densities are not necessarily smooth nor log-concave, and establishes in particular consistency guarantees of these algorithms seen as discretization schemes in this context.
Analysis of Langevin Monte Carlo via Convex Optimization
It is shown that the Unadjusted Langevin Algorithm can be formulated as a first order optimization algorithm of an objective functional defined on the Wasserstein space of order $2$, and a non-asymptotic analysis of this method to sample from a log-concave smooth target distribution is given.
Convergence of Numerical Time-Averaging and Stationary Measures via Poisson Equations
Numerical approximation of the long time behavior of a stochastic differential equation (SDE) is considered. Error estimates for time-averaging estimators are obtained and then used to show that the…
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent
The existing results are improved when the convergence is measured in the Wasserstein distance, and further insight is provided into the very tight relations between, on the one hand, Langevin Monte Carlo for sampling and, on the other, gradient descent for optimization.
Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo
This paper presents a detailed theoretical analysis of the Langevin Monte Carlo sampling algorithm recently introduced in Durmus et al. (Efficient Bayesian computation by proximal Markov chain Monte Carlo).
Proximal Markov chain Monte Carlo algorithms
This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions.
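The convex-analysis tool underlying these proximal MCMC methods is the Moreau-Yosida envelope, a smooth surrogate for a non-differentiable potential whose gradient is expressed through the proximal map. A minimal sketch for the illustrative choice g(x) = |x| (whose envelope is the Huber function):

```python
import numpy as np

def moreau_envelope_abs(x, lam):
    # Moreau-Yosida envelope of g(x) = |x| with parameter lam:
    # quadratic near the origin, linear in the tails (the Huber function).
    return np.where(np.abs(x) <= lam, x**2 / (2.0 * lam), np.abs(x) - lam / 2.0)

def grad_envelope_abs(x, lam):
    # The envelope is differentiable everywhere, with gradient
    # (x - prox_{lam g}(x)) / lam; for g = |.| the prox is soft-thresholding.
    prox = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return (x - prox) / lam
```

Replacing the non-smooth potential by its envelope yields a Lipschitz gradient that a Langevin-type chain can use, at the cost of a controlled approximation error that shrinks as the parameter decreases.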
Nonasymptotic bounds for sampling algorithms without log-concavity
It is revealed that the variance of the randomised drift does not influence the rate of weak convergence of the Euler scheme to the SDE, and non-asymptotic bounds on the distance between the laws induced by Euler schemes and the invariant laws of SDEs are derived.
Computing ergodic limits for Langevin equations