A Shrinkage-Thresholding Metropolis Adjusted Langevin Algorithm for Bayesian Variable Selection

  Amandine Schreck, Gersende Fort, Sylvain Le Corff, Éric Moulines. IEEE Journal of Selected Topics in Signal Processing.
This paper introduces a new Markov chain Monte Carlo method for Bayesian variable selection in high-dimensional settings. The algorithm is a Metropolis-Hastings sampler whose proposal mechanism combines a Metropolis Adjusted Langevin (MALA) step, which proposes local moves, with a shrinkage-thresholding step that allows new models to be proposed. The geometric ergodicity of this new trans-dimensional Markov chain Monte Carlo sampler is established. An extensive numerical experiment, on…
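
The combination of a Langevin drift with a shrinkage-thresholding step can be sketched as follows. This is a minimal illustration under our own naming, not the paper's exact kernel, which also includes the Metropolis-Hastings accept/reject correction and the trans-dimensional bookkeeping:

```python
import numpy as np

def soft_threshold(v, lam):
    """Componentwise soft-thresholding: shrinks entries toward zero and sets
    small entries exactly to zero, which is what switches variables off."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def st_mala_propose(x, grad_log_pi, step, lam, rng):
    """One shrinkage-thresholding Langevin proposal (illustrative sketch).

    A MALA drift step is followed by soft-thresholding, so the proposal can
    zero out coordinates and thereby move between models."""
    drift = x + 0.5 * step * grad_log_pi(x)            # Langevin drift
    y = drift + np.sqrt(step) * rng.standard_normal(x.shape)
    return soft_threshold(y, lam)                      # exact zeros possible
```

For a standard Gaussian target, `grad_log_pi` would simply be `lambda z: -z`; the thresholding level `lam` controls how aggressively coordinates are set to zero.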

On the Computational Complexity of High-Dimensional Bayesian Variable Selection

This work demonstrates that a Bayesian approach can achieve variable-selection consistency under relatively mild conditions on the design matrix, and shows that the statistical criterion of posterior concentration need not imply the computational desideratum of rapid mixing of the MCMC algorithm.

Langevin-based Strategy for Efficient Proposal Adaptation in Population Monte Carlo

  • V. Elvira, É. Chouzenoux
  • Computer Science
    ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2019
This paper proposes a novel PMC algorithm that combines recent advances in the AIS and optimization literatures: it adapts according to past weighted samples via a local resampling that preserves diversity while also exploiting the geometry of the targeted distribution.

An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension

Experimental results indicate that adding the proposed auxiliary variables to the model makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous correlations, and the computational cost of each iteration of the Gibbs sampler is significantly reduced.

Proximal Markov chain Monte Carlo algorithms

This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions.
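
As an illustration of the proximal idea, here is a minimal sketch of such a proposal for the simple Laplace-type target π(x) ∝ exp(−λ‖x‖₁), whose proximal map is soft-thresholding. The names are ours, and the actual algorithm handles general log-concave densities and includes the accept/reject correction:

```python
import numpy as np

def prox_l1(x, t, lam):
    """Proximal map of t * lam * ||x||_1, i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def pmala_propose(x, delta, lam, rng):
    """Proximal MALA proposal for pi(x) ∝ exp(-lam * ||x||_1):
    the Langevin drift (which needs a gradient) is replaced by a proximal
    step, well defined even though the target is non-differentiable at 0."""
    mean = prox_l1(x, delta / 2.0, lam)
    return mean + np.sqrt(delta) * rng.standard_normal(x.shape)
```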

Majorize–Minimize Adapted Metropolis–Hastings Algorithm

This paper derives a novel Metropolis-Hastings proposal, inspired from Langevin dynamics, where the drift term is preconditioned by an adaptive matrix constructed through a Majorization-Minimization strategy, and proposes several variants of low-complexity curvature metrics applicable to large scale problems.
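
The preconditioning idea can be sketched with a fixed diagonal metric standing in for the adaptive MM-built matrix of the paper; the function below is a hedged illustration, not the authors' construction:

```python
import numpy as np

def preconditioned_mala_propose(x, grad_log_pi, diag_metric, delta, rng):
    """Langevin proposal preconditioned by a diagonal curvature matrix.

    Both the drift and the noise are rescaled by the inverse metric, so
    directions with small curvature take proportionally larger steps."""
    a_inv = 1.0 / diag_metric                         # inverse diagonal metric
    mean = x + 0.5 * delta * a_inv * grad_log_pi(x)
    return mean + np.sqrt(delta * a_inv) * rng.standard_normal(x.shape)
```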

Multilevel Monte Carlo simulation of a diffusion with non-smooth drift

It is shown that the Lasso and the Bayesian Lasso are very close when the sparsity is large and the noise is small, and a method for calculating the cost of Monte Carlo (MC), multilevel Monte Carlo (MLMC) and MCMC algorithms is proposed.


This work proposes an approximation scheme for exact-sparsity inducing prior distributions based on the forward-backward envelope of Patrinos et al. (2014), and illustrates the method with a high-dimensional linear regression model.

Oscillation of adaptive Metropolis-Hastings and simulated annealing algorithms around penalized least squares estimators

In this work we study, as the temperature goes to zero, the oscillation of the Metropolis-Hastings algorithm around the Basis Pursuit De-noising solutions. We derive new criteria for choosing the

A Moreau-Yosida approximation scheme for a class of high-dimensional posterior distributions

This work proposes a methodology to derive smooth approximations of exact-sparsity inducing prior distributions that are, in some cases, easier to handle by standard MCMC methods.
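
The smoothing device behind this approach is the Moreau-Yosida envelope. For the prototypical non-smooth function g(x) = |x| the envelope is the Huber function, and its gradient is available through the proximal map; the sketch below (our naming) makes this concrete:

```python
import numpy as np

def moreau_env_abs(x, lam):
    """Moreau-Yosida envelope of g(x) = |x|: the Huber function, a smooth
    approximation equal to x**2/(2*lam) near 0 and |x| - lam/2 beyond lam."""
    ax = np.abs(x)
    return np.where(ax <= lam, ax**2 / (2 * lam), ax - lam / 2)

def grad_moreau_env_abs(x, lam):
    """Gradient of the envelope via the prox identity
    grad = (x - prox_{lam * g}(x)) / lam; here the prox is soft-thresholding."""
    prox = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return (x - prox) / lam
```

The smoothed density obtained this way is everywhere differentiable, which is what lets standard gradient-based MCMC methods be applied.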

An Auxiliary Variable Method for MCMC Algorithms in High Dimension

This paper proposes to add auxiliary variables to the model in order to dissociate the two sources of dependencies in inverse problems where either the data fidelity term or the prior distribution is Gaussian or derived from a hierarchical Gaussian model.



An Adaptive Version for the Metropolis Adjusted Langevin Algorithm with a Truncated Drift

This paper extends some adaptive schemes that have been developed for the Random Walk Metropolis algorithm to more general versions of the Metropolis-Hastings (MH) algorithm, particularly to the
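
A truncated-drift Langevin proposal of the kind referred to in this title can be sketched as follows; this is a generic illustration under our own naming, with the gradient norm clipped at a bound D so the drift stays stable under adaptation:

```python
import numpy as np

def truncated_drift_propose(x, grad_log_pi, delta, D, rng):
    """MALA proposal with a truncated drift: the gradient is renormalised so
    its norm never exceeds D, which prevents the drift from exploding and
    makes the step size delta easier to adapt on the fly."""
    g = grad_log_pi(x)
    g = g * (D / max(D, np.linalg.norm(g)))           # clip gradient norm at D
    return x + 0.5 * delta * g + np.sqrt(delta) * rng.standard_normal(x.shape)
```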

Reversible jump Markov chain Monte Carlo computation and Bayesian model determination

Markov chain Monte Carlo methods for Bayesian computation have until recently been restricted to problems where the joint distribution of all variables has a density with respect to some fixed

Adaptive Markov Chain Monte Carlo for Bayesian Variable Selection

An adaptive Markov chain Monte Carlo scheme is introduced that automatically tunes the parameters of a family of mixture proposal distributions during simulation, adapting so as to sample efficiently from multimodal target distributions.

Adaptive sampling for Bayesian variable selection

Our paper proposes adaptive Monte Carlo sampling schemes for Bayesian variable selection in linear regression that improve on standard Markov chain methods. We do so by considering

Bayesian Model Choice Via Markov Chain Monte Carlo Methods

This paper presents a framework for Bayesian model choice, along with an MCMC algorithm that does not suffer from convergence difficulties, and applies equally well to problems where only one model is contemplated but its proper size is not known at the outset.

Annealed Importance Sampling Reversible Jump MCMC Algorithms

We develop a methodology to efficiently implement the reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms of Green, applicable for example to model selection inference in a Bayesian

Sequential Monte Carlo on large binary sampling spaces

This paper presents a parametric family for adaptive sampling on high-dimensional binary spaces, provides a review of models for binary data, and makes one of them work in the context of Sequential Monte Carlo sampling.

Bayesian Variable Selection via Particle Stochastic Search.


The optimal scaling rule for the Metropolis algorithm, which tunes the overall algorithm acceptance rate to be 0.234, holds for the so-called Metropolis-within-Gibbs algorithm as well, and the optimal efficiency obtainable is independent of the dimensionality of the update rule.
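
A common way to exploit the 0.234 rule in practice is a Robbins-Monro update of the proposal scale; the sketch below is a generic illustration of that adaptation scheme, not code from the paper:

```python
def adapt_scale(log_sigma, accepted, iteration, target=0.234):
    """Robbins-Monro update of the log proposal scale toward a target
    acceptance rate (0.234 is the classic optimum for random-walk updates):
    nudge the scale up after an acceptance, down after a rejection."""
    step = 1.0 / (iteration + 1) ** 0.6    # diminishing adaptation step sizes
    return log_sigma + step * ((1.0 if accepted else 0.0) - target)
```

Run inside a Metropolis loop, this drives the empirical acceptance rate toward the target while the diminishing step sizes preserve ergodicity of the adaptive chain.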