• Corpus ID: 246063656

# Heavy-tailed Sampling via Transformed Unadjusted Langevin Algorithm

@inproceedings{He2022HeavytailedSV,
  title={Heavy-tailed Sampling via Transformed Unadjusted Langevin Algorithm},
  author={Ye He and Krishnakumar Balasubramanian and Murat A. Erdogdu},
  year={2022}
}
• Published 20 January 2022
• Mathematics
We analyze the oracle complexity of sampling from polynomially decaying heavy-tailed target densities based on running the Unadjusted Langevin Algorithm on certain transformed versions of the target density. The specific class of closed-form transformation maps that we construct are shown to be diffeomorphisms, and are particularly suited for developing efficient diffusion-based samplers. We characterize the precise class of heavy-tailed densities for which polynomial-order oracle complexities…
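To illustrate the general idea of the transformed scheme, here is a minimal sketch (not the paper's exact transformation map): to sample a heavy-tailed standard Cauchy target π(x) ∝ 1/(1+x²), pull it back through the hypothetical diffeomorphism x = h(y) = sinh(y). The transformed density q(y) ∝ π(sinh y)·cosh y = 1/cosh(y) has exponentially light tails with smooth potential U(y) = log cosh(y), so plain ULA on q mixes well; pushing the iterates through h yields approximate samples from the Cauchy law.

```python
import numpy as np

rng = np.random.default_rng(0)

def ula_transformed(n_steps=200_000, step=0.1):
    """Run ULA on the transformed potential U(y) = log cosh(y), whose
    gradient is tanh(y), then map iterates back via x = sinh(y)."""
    y = 0.0
    ys = np.empty(n_steps)
    for k in range(n_steps):
        # Unadjusted Langevin step on the light-tailed transformed density
        y = y - step * np.tanh(y) + np.sqrt(2 * step) * rng.standard_normal()
        ys[k] = y
    return np.sinh(ys)  # push forward through h to target the Cauchy law

samples = ula_transformed()
# The standard Cauchy has median 0 and quartiles at ±1; check empirically,
# discarding an initial burn-in.
q25, q50, q75 = np.quantile(samples[50_000:], [0.25, 0.5, 0.75])
print(q25, q50, q75)
```

The design choice mirrors the abstract: instead of discretizing a heavy-tailed SDE directly (where ULA's Gaussian noise cannot reproduce polynomial tails efficiently), the transformation moves all the difficulty into a closed-form change of variables.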
## 2 Citations

• Computer Science, Mathematics
COLT
• 2022
It is proved that averaged Langevin Monte Carlo outputs a sample with ε-relative Fisher information after O(L²d²/ε²) iterations, which constitutes a first step towards a general theory of non-log-concave sampling.
• Computer Science
ArXiv
• 2022
We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information (FI) bounds as a notion…

## References

Showing 1-10 of 59 references

• Mathematics, Computer Science
• 2020
This paper provides a rigorous theoretical framework for studying the problem of approximating heavy-tailed distributions via ergodic SDEs driven by symmetric (rotationally invariant) $\alpha$-stable processes.
• Computer Science
COLT
• 2021
The upper bound proof introduces a new technique based on a projection characterization of the Metropolis adjustment which reduces the study of MALA to the well-studied discretization analysis of the Langevin SDE and bypasses direct computation of the acceptance probability.
• D. Nguyen
• Mathematics
Brazilian Journal of Probability and Statistics
• 2022
The problem of sampling via Euler discretization, where the potential function is assumed to be a mixture of weakly smooth distributions and to satisfy a weak dissipativity condition, is studied, and convergence guarantees are proved under a Poincaré inequality or non-strong convexity outside a ball.
• Computer Science, Mathematics
COLT
• 2022
This work provides the first Rényi divergence convergence guarantees for LMC which allow for weak smoothness and do not require convexity or dissipativity conditions, and introduces techniques for bounding error terms under a certain change of measure, which is a new feature in Rényi analysis.
• Computer Science
ICML
• 2019
The non-asymptotic behavior of FLMC for non-convex optimization is analyzed and finite-time bounds on its expected suboptimality are proved, and the results show that the weak error of FLMC increases faster than that of LMC, which suggests using smaller step-sizes in FLMC.
• Computer Science, Mathematics
The Annals of Applied Probability
• 2020
It is revealed that the variance of the randomised drift does not influence the rate of weak convergence of the Euler scheme to the SDE, and non-asymptotic bounds on the distance between the laws induced by Euler schemes and the invariant laws of SDEs are derived.
• Mathematics, Computer Science
NeurIPS
• 2019
A convergence guarantee in Kullback-Leibler (KL) divergence is proved assuming $\nu$ satisfies a log-Sobolev inequality and the Hessian of $f$ is bounded.
• Computer Science
J. Mach. Learn. Res.
• 2019
It is shown that the Unadjusted Langevin Algorithm can be formulated as a first order optimization algorithm of an objective functional defined on the Wasserstein space of order $2$ and a non-asymptotic analysis of this method to sample from logconcave smooth target distribution is given.
• Computer Science, Mathematics
NeurIPS
• 2020
This paper describes the stationary distribution of the discrete chain obtained with constant step-size discretization and shows that it is biased away from the target distribution, and establishes the asymptotic normality for numerical integration using the randomized midpoint method.