Corpus ID: 246063656

Heavy-tailed Sampling via Transformed Unadjusted Langevin Algorithm

Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu
We analyze the oracle complexity of sampling from polynomially decaying heavy-tailed target densities based on running the Unadjusted Langevin Algorithm on certain transformed versions of the target density. The specific class of closed-form transformation maps that we construct are shown to be diffeomorphisms, and are particularly suited for developing efficient diffusion-based samplers. We characterize the precise class of heavy-tailed densities for which polynomial-order oracle complexities… 
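The approach described in the abstract, running ULA on a transformed version of a heavy-tailed target and mapping the iterates back through the diffeomorphism, can be sketched as follows. This is a minimal illustration, not the paper's construction: the 1-D Student-t target, the sinh map, and the step size are all illustrative choices.

```python
import numpy as np

# Illustrative setup (not the paper's exact transformation map):
# heavy-tailed 1-D Student-t target p(x) ∝ (1 + x²/ν)^(-(ν+1)/2),
# transformed via the diffeomorphism x = f(y) = sinh(y), so the
# pulled-back density q(y) ∝ p(sinh(y)) · cosh(y) has lighter tails.

nu = 3.0  # degrees of freedom of the Student-t target

def grad_log_q(y):
    """Gradient of log q(y), where q(y) ∝ p(sinh(y)) * cosh(y)."""
    x = np.sinh(y)
    dlogp_dx = -(nu + 1.0) * x / (nu + x**2)   # (log p)'(x)
    # Chain rule through x = sinh(y), plus d/dy log cosh(y) = tanh(y).
    return dlogp_dx * np.cosh(y) + np.tanh(y)

def transformed_ula(n_steps=50_000, step=0.05, y0=0.0, seed=0):
    """Run ULA on the transformed density q, then push each iterate
    back to the original space via x = sinh(y)."""
    rng = np.random.default_rng(seed)
    y = y0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        y = y + step * grad_log_q(y) + np.sqrt(2 * step) * rng.standard_normal()
        xs[i] = np.sinh(y)
    return xs

samples = transformed_ula()
```

The point of the transformation is visible in `grad_log_q`: the drift of the transformed chain stays bounded as |y| grows, whereas plain ULA on the Student-t density has a vanishing drift in the tails and mixes slowly.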
2 Citations

Tables from this paper

Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo

It is proved that averaged Langevin Monte Carlo outputs a sample with ε-relative Fisher information after O(L²d²/ε²) iterations, which constitutes a first step towards a general theory of non-log-concave sampling.

Fisher information lower bounds for sampling

We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information (FI) bounds as a notion…



Approximation of heavy-tailed distributions via stable-driven SDEs

This paper provides a rigorous theoretical framework for studying the problem of approximating heavy-tailed distributions via ergodic SDEs driven by symmetric (rotationally invariant) α-stable processes.

Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm

The upper bound proof introduces a new technique based on a projection characterization of the Metropolis adjustment which reduces the study of MALA to the well-studied discretization analysis of the Langevin SDE and bypasses direct computation of the acceptance probability.
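The Metropolis adjustment discussed above can be sketched in a few lines. This is a generic MALA sketch under toy assumptions (standard Gaussian target f(x) = ‖x‖²/2, fixed step size), not the paper's analysis:

```python
import numpy as np

# Toy target π ∝ exp(-f) with f(x) = ||x||²/2; target and step size
# are illustrative only.

def f(x):
    return 0.5 * np.dot(x, x)

def grad_f(x):
    return x

def log_q(y, x, h):
    """Log-density (up to a constant) of the Langevin proposal
    N(x - h ∇f(x), 2h I) evaluated at y."""
    diff = y - (x - h * grad_f(x))
    return -np.dot(diff, diff) / (4 * h)

def mala_step(x, h, rng):
    y = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    # Metropolis adjustment: accept/reject so that π is exactly invariant.
    log_acc = (f(x) - f(y)) + (log_q(x, y, h) - log_q(y, x, h))
    if np.log(rng.uniform()) < log_acc:
        return y
    return x

rng = np.random.default_rng(2)
x = np.zeros(3)
samples = np.empty((20_000, 3))
for i in range(20_000):
    x = mala_step(x, 0.2, rng)
    samples[i] = x
```

Without the accept/reject correction this is exactly ULA, whose stationary law is biased by the discretization; the adjustment removes that bias at the cost of occasional rejections.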

Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials

  • D. Nguyen
  • Mathematics
    Brazilian Journal of Probability and Statistics
  • 2022
The problem of sampling through Euler discretization, where the potential function is assumed to be a mixture of weakly smooth potentials satisfying a weak dissipativity condition, is studied, and convergence guarantees under a Poincaré inequality or non-strong convexity outside a ball are proved.

Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev

This work provides the first Rényi divergence convergence guarantees for LMC which allow for weak smoothness and do not require convexity or dissipativity conditions, and introduces techniques for bounding error terms under a certain change of measure, which is a new feature in Rényi analysis.

Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization

The non-asymptotic behavior of FLMC for non-convex optimization is analyzed, finite-time bounds for its expected suboptimality are proved, and the results show that the weak error of FLMC increases faster than that of LMC, which suggests using smaller step-sizes in FLMC.

Nonasymptotic bounds for sampling algorithms without log-concavity

It is revealed that the variance of the randomised drift does not influence the rate of weak convergence of the Euler scheme to the SDE, and non-asymptotic bounds on the distance between the laws induced by Euler schemes and the invariant laws of SDEs are derived.

Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices

A convergence guarantee in Kullback-Leibler (KL) divergence is proved assuming ν satisfies a log-Sobolev inequality and the Hessian of f is bounded.

Analysis of Langevin Monte Carlo via Convex Optimization

It is shown that the Unadjusted Langevin Algorithm can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order 2, and a non-asymptotic analysis of this method for sampling from log-concave smooth target distributions is given.

On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method

This paper describes the stationary distribution of the discrete chain obtained with constant step-size discretization and shows that it is biased away from the target distribution, and establishes the asymptotic normality for numerical integration using the randomized midpoint method.
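The randomized midpoint discretization analyzed above can be sketched for the overdamped Langevin diffusion dX = -∇f(X) dt + √2 dB. The quadratic potential f(x) = ‖x‖²/2 and the step size below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def grad_f(x):
    return x  # ∇f for the toy potential f(x) = ||x||²/2

def randomized_midpoint_step(x, h, rng):
    """One randomized midpoint step: evaluate the drift at a uniformly
    random point along the step, sharing one Brownian path."""
    alpha = rng.uniform()
    # Brownian increment over [0, αh], reused in the full step so the
    # midpoint and the endpoint see the same underlying Brownian motion.
    w1 = np.sqrt(2 * alpha * h) * rng.standard_normal(x.shape)
    x_mid = x - alpha * h * grad_f(x) + w1
    # Independent increment over (αh, h] completes the Brownian path.
    w2 = np.sqrt(2 * (1 - alpha) * h) * rng.standard_normal(x.shape)
    return x - h * grad_f(x_mid) + w1 + w2

rng = np.random.default_rng(1)
x = np.zeros(2)
traj = np.empty((20_000, 2))
for i in range(20_000):
    x = randomized_midpoint_step(x, 0.1, rng)
    traj[i] = x
```

With a constant step size the chain converges to a stationary law close to, but (as the paper's point suggests) not exactly equal to, the target; the random evaluation point makes the discretization error unbiased in a mean sense, which is what drives the method's improved rates.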