Publications
Lattice Signatures and Bimodal Gaussians
We construct and implement a family of digital signature schemes, named BLISS (Bimodal Lattice Signature Scheme), for security levels of 128, 160, and 192 bits.
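The heart of BLISS is a rejection-sampling step over a bimodal Gaussian: signature candidates are drawn from an equal mixture of two Gaussians centred at plus or minus a secret-dependent shift, and rejection makes the accepted output independent of the secret key, which is what permits a small repetition rate. Below is a minimal 1D sketch of that step with continuous Gaussians; actual BLISS works with discrete Gaussians over lattices, and every name here (`bimodal_reject_sample`, `v`, `sigma`) is illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def bimodal_reject_sample(v, sigma, rng):
    """Toy 1D version of the bimodal-Gaussian rejection step.

    Proposal: z = y + b*v with y ~ N(0, sigma^2) and a uniform sign b,
    i.e. an equal mixture of N(+v, sigma^2) and N(-v, sigma^2).
    Accepting with probability 1/cosh(z*v / sigma^2) makes the output
    exactly N(0, sigma^2), independent of the shift v.
    """
    while True:
        y = sigma * rng.standard_normal()
        b = rng.choice([-1.0, 1.0])
        z = y + b * v
        if rng.random() < 1.0 / np.cosh(z * v / sigma**2):
            return z

# The accepted samples follow N(0, sigma^2) no matter what v is,
# so observing them leaks nothing about the secret-dependent shift.
rng = np.random.default_rng(0)
samples = [bimodal_reject_sample(v=3.0, sigma=2.0, rng=rng) for _ in range(20000)]
print(np.mean(samples), np.std(samples))  # ~0 and ~2.0
```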
Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm
In this paper, we study a method to sample from a target distribution $\pi$ over $\mathbb{R}^d$ having a positive density with respect to the Lebesgue measure, known up to a normalisation factor.
Supplement to "High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm"
We study the sampling method based on the Euler-Maruyama discretization of the overdamped Langevin diffusion.
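For orientation, here is a minimal sketch of the scheme both of these entries analyze, assuming the standard setting where the target is $\pi(x) \propto \mathrm{e}^{-U(x)}$ and the gradient of $U$ is tractable: the Euler-Maruyama discretization of the Langevin diffusion, run with a fixed step size and no Metropolis correction. The names (`ula`, `grad_U`, `gamma`) are illustrative.

```python
import numpy as np

def ula(grad_U, x0, gamma, n_steps, rng):
    """Unadjusted Langevin Algorithm: Euler-Maruyama discretization of
    the Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dB_t,
    whose invariant law is pi(x) proportional to exp(-U(x))."""
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        chain[k] = x
    return chain

# Example: standard Gaussian target, U(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
chain = ula(grad_U=lambda x: x, x0=np.zeros(2), gamma=0.05, n_steps=50000, rng=rng)
print(chain.mean(axis=0), chain.std(axis=0))  # near 0 and 1, up to an O(gamma) bias
```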
Efficient Bayesian Computation by Proximal Markov Chain Monte Carlo: When Langevin Meets Moreau
This paper presents a new and efficient Markov chain Monte Carlo methodology to perform Bayesian computation for high-dimensional models that are log-concave and nonsmooth, a class of models that is central in imaging sciences.
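A minimal sketch of the Moreau-Yosida regularized Langevin step this methodology is built on, assuming a target of the form $\pi \propto \exp(-f-g)$ with $f$ smooth and $g$ nonsmooth but proximable: $g$ is replaced by its Moreau envelope, whose gradient $(x - \operatorname{prox}_{\lambda g}(x))/\lambda$ requires only the proximal operator, and ULA is run on the smoothed potential. The $\ell_1$ example and all names below are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def myula(grad_f, prox_g, lam, x0, gamma, n_steps, rng):
    """Moreau-Yosida regularized ULA for pi proportional to exp(-f - g):
    the nonsmooth g is replaced by its Moreau envelope, whose gradient is
    (x - prox_{lam*g}(x)) / lam, and ULA is run on the smoothed potential."""
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for k in range(n_steps):
        grad = grad_f(x) + (x - prox_g(x, lam)) / lam
        x = x - gamma * grad + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        chain[k] = x
    return chain

# Example: f(x) = ||x||^2 / 2 (smooth), g(x) = tau * ||x||_1 (nonsmooth).
rng = np.random.default_rng(0)
tau, lam = 1.0, 0.1
chain = myula(grad_f=lambda x: x,
              prox_g=lambda x, l: soft_threshold(x, l * tau),
              lam=lam, x0=np.zeros(3), gamma=0.01, n_steps=50000, rng=rng)
print(chain.mean(axis=0))
```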
Analysis of Langevin Monte Carlo via Convex Optimization
We show that Langevin Monte Carlo can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order $2$.
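Spelled out, the objective functional in this line of work is the free energy, which differs from the Kullback-Leibler divergence to the target only by an additive constant:
$$\mathcal{F}(\mu) = \int_{\mathbb{R}^d} U\,\mathrm{d}\mu + \int_{\mathbb{R}^d} \log\Big(\frac{\mathrm{d}\mu}{\mathrm{d}x}\Big)\,\mathrm{d}\mu, \qquad \mathcal{F}(\mu) - \mathcal{F}(\pi) = \mathrm{KL}(\mu \,\|\, \pi),$$
so the target $\pi \propto \mathrm{e}^{-U}$ is its unique minimizer over the Wasserstein space.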
Sampling from a strongly log-concave distribution with the Unadjusted Langevin Algorithm
We consider in this paper the problem of sampling a probability distribution $\pi$ having a density w.r.t. the Lebesgue measure on $\mathbb{R}^d$, known up to a normalisation factor.
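Concretely, the standing setting (stated here only for orientation) is a target density of the form
$$\pi(x) = \frac{\mathrm{e}^{-U(x)}}{\int_{\mathbb{R}^d} \mathrm{e}^{-U(y)}\,\mathrm{d}y},$$
where only $U$ can be evaluated, the normalising integral being intractable; strong log-concavity means $U$ is strongly convex, typically with a Lipschitz-continuous gradient.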
Hypocoercivity of Piecewise Deterministic Markov Process-Monte Carlo
In this work, we establish $\mathrm{L}^2$-exponential convergence for a broad class of Piecewise Deterministic Markov Processes recently proposed in the context of Markov process Monte Carlo methods and covering in particular the Randomized Hamiltonian Monte Carlo, the Zig-Zag process and the Bouncy Particle Sampler.
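Of the processes covered, the Zig-Zag process is the easiest to sketch: the state moves at constant velocity and flips that velocity at the events of an inhomogeneous Poisson clock driven by the potential. Below is a minimal 1D simulator for a standard Gaussian target, where the event times can be drawn exactly by inversion; this toy is not from the paper and the names are illustrative. The Bouncy Particle Sampler and Randomized HMC follow the same pattern of deterministic flow punctuated by Poisson-clock events.

```python
import numpy as np

def zigzag_gaussian(T, rng):
    """1D Zig-Zag process targeting N(0, 1), i.e. U(x) = x^2 / 2.

    The state (x, theta) moves with velocity theta in {-1, +1} and flips
    theta at the events of a Poisson process with rate
    lambda(x, theta) = max(0, theta * U'(x)) = max(0, theta * x).
    Along the ray the rate is max(0, a + s) with a = theta * x, so the
    event time solves the integrated-rate equation in closed form.
    """
    x, theta, t = 0.0, 1.0, 0.0
    skeleton = [(t, x)]
    while t < T:
        a = theta * x
        e = rng.exponential()
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        x += theta * tau
        t += tau
        theta = -theta
        skeleton.append((t, x))
    return skeleton

# Time averages along the piecewise-linear trajectory estimate
# expectations under N(0, 1); exact segment integration of x^2:
pts = zigzag_gaussian(T=10000.0, rng=np.random.default_rng(0))
num = sum((x0 * x0 + x0 * x1 + x1 * x1) / 3.0 * (t1 - t0)
          for (t0, x0), (t1, x1) in zip(pts, pts[1:]))
print(num / pts[-1][0])  # close to Var = 1
```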
Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains
We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size.
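A small illustration of the Markov-chain viewpoint, on an assumed least-squares setup rather than the paper's experiments: with a constant step size the SGD iterates form a homogeneous Markov chain that settles into a stationary distribution around, but not at, the minimizer, while Polyak-Ruppert averaging of the iterates gets much closer to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares objective f(theta) = E[(y - x^T theta)^2] / 2 on
# synthetic data; each step uses one fresh sample (unbiased gradient).
d = 5
theta_star = np.arange(1.0, 6.0)

def stochastic_grad(theta, rng):
    x = rng.standard_normal(d)
    y = x @ theta_star + 0.5 * rng.standard_normal()
    return (x @ theta - y) * x

gamma, n_steps = 0.01, 200000
theta = np.zeros(d)
avg = np.zeros(d)
for k in range(1, n_steps + 1):
    theta -= gamma * stochastic_grad(theta, rng)
    avg += (theta - avg) / k   # Polyak-Ruppert running average

# The last iterate fluctuates in a stationary region of width O(sqrt(gamma))
# around theta_star; the averaged iterate is much closer.
print(np.linalg.norm(theta - theta_star), np.linalg.norm(avg - theta_star))
```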
The promises and pitfalls of Stochastic Gradient Langevin Dynamics
We show that the SGLD algorithm has an invariant probability measure which significantly departs from the target posterior and behaves like Stochastic Gradient Descent.
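A minimal sketch of SGLD, assuming the usual setup where the exact gradient in ULA is replaced by an unbiased stochastic estimate; the inflated spread of the chain in this toy crudely illustrates the departure from the target that the paper quantifies. Names are illustrative.

```python
import numpy as np

def sgld(grad_U_est, x0, gamma, n_steps, rng):
    """Stochastic Gradient Langevin Dynamics: the ULA update with the
    exact gradient of the potential replaced by an unbiased estimate.
    With a constant step size the chain has its own invariant measure,
    which can differ markedly from the target."""
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for k in range(n_steps):
        g = grad_U_est(x, rng)
        x = x - gamma * g + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        chain[k] = x
    return chain

# Example: U(x) = ||x||^2 / 2 with an artificially noisy gradient estimate.
rng = np.random.default_rng(0)
noisy_grad = lambda x, rng: x + 5.0 * rng.standard_normal(x.size)
chain = sgld(noisy_grad, x0=np.zeros(2), gamma=0.05, n_steps=50000, rng=rng)
print(chain.std(axis=0))  # noticeably inflated relative to the target std of 1
```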
Quantitative bounds of convergence for geometrically ergodic Markov chain in the Wasserstein distance with application to the Metropolis Adjusted Langevin Algorithm
We establish explicit convergence rates for Markov chains in the Wasserstein distance by analyzing an Exponential Integrator version of the Metropolis Adjusted Langevin Algorithm.
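For reference, a minimal sketch of the plain Metropolis Adjusted Langevin Algorithm; the paper analyzes an Exponential Integrator variant, whereas this sketch shows only the standard scheme: the ULA move is used as a proposal and corrected by a Metropolis-Hastings accept/reject step, so $\pi$ is left exactly invariant.

```python
import numpy as np

def mala(U, grad_U, x0, gamma, n_steps, rng):
    """Metropolis Adjusted Langevin Algorithm: a Langevin proposal
    corrected by a Metropolis-Hastings step, so that the chain leaves
    pi(x) proportional to exp(-U(x)) exactly invariant."""
    def log_q(x_to, x_from):   # log density of the Langevin proposal
        diff = x_to - (x_from - gamma * grad_U(x_from))
        return -np.dot(diff, diff) / (4.0 * gamma)

    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for k in range(n_steps):
        y = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        log_alpha = (U(x) - U(y)) + log_q(x, y) - log_q(y, x)
        if np.log(rng.random()) < log_alpha:
            x = y
        chain[k] = x
    return chain

# Example: standard Gaussian target.
rng = np.random.default_rng(0)
chain = mala(U=lambda x: 0.5 * (x @ x), grad_U=lambda x: x,
             x0=np.zeros(2), gamma=0.5, n_steps=20000, rng=rng)
print(chain.mean(axis=0), chain.std(axis=0))
```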