Geometric ergodicity of the Bayesian lasso

@article{Khare2013GeometricEO,
  title={Geometric ergodicity of the Bayesian lasso},
  author={Kshitij Khare and James P. Hobert},
  journal={Electronic Journal of Statistics},
  year={2013},
  volume={7},
  pages={2150-2163}
}
  • K. Khare, J. Hobert
  • Published 2013
  • Mathematics, Computer Science
  • Electronic Journal of Statistics
Consider the standard linear model y = Xβ + σε, where the components of ε are iid standard normal errors. Park and Casella [14] consider a Bayesian treatment of this model with a Laplace/Inverse-Gamma prior on (β, σ²). They introduce a Data Augmentation approach that can be used to explore the resulting intractable posterior density, and call it the Bayesian lasso algorithm. In this paper, the Markov chain underlying the Bayesian lasso algorithm is shown to be geometrically ergodic, for arbitrary… 
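The object of the analysis above is the three-step data-augmentation Gibbs sampler of Park and Casella. As a purely illustrative aid (not code from the paper), the Python sketch below shows one standard way such a sampler is implemented; the hyperparameters lam (Laplace rate) and a, b (Inverse-Gamma shape and rate for σ²), the function name, and all default values are assumptions made for this example.

import numpy as np

def bayesian_lasso_gibbs(y, X, lam=1.0, a=1.0, b=1.0, n_iter=5000, seed=0):
    # Illustrative sketch of a Park-Casella-style data-augmentation Gibbs sampler.
    # lam, a, b and the defaults are assumptions for this example, not paper values.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2, tau2 = np.zeros(p), 1.0, np.ones(p)  # tau2: latent local variances
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | sigma2, tau2, y ~ N(A^{-1} X'y, sigma2 * A^{-1}), with A = X'X + D_tau^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | beta, tau2, y ~ Inverse-Gamma(shape, rate)
        resid = y - X @ beta
        shape = (n - 1) / 2 + p / 2 + a
        rate = resid @ resid / 2 + beta @ (beta / tau2) / 2 + b
        sigma2 = 1.0 / rng.gamma(shape, 1.0 / rate)
        # 1/tau_j^2 | beta, sigma2 ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        tau2 = 1.0 / rng.wald(mu, lam**2)
        draws[t] = beta
    return draws

The paper's geometric ergodicity result is exactly what licenses treating averages of such draws with a Markov chain central limit theorem and attaching Monte Carlo standard errors to them.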
Fast Bayesian Lasso for High-Dimensional Regression
TLDR
A theoretical underpinning to the new method is provided by proving rigorously that the fast Bayesian lasso is geometrically ergodic, and it is demonstrated numerically that this blocked sampler exhibits vastly superior convergence behavior in high-dimensional regimes.
Geometric ergodicity of Gibbs samplers for the Horseshoe and its regularized variants
The Horseshoe is a widely used and popular continuous shrinkage prior for high-dimensional Bayesian linear regression. Recently, regularized versions of the Horseshoe prior have also been introduced
Fast Markov Chain Monte Carlo for High-Dimensional Bayesian Regression Models With Shrinkage Priors
TLDR
The newly proposed 2BG is the only practical computing solution for Bayesian shrinkage analysis on datasets with large p, and theoretical justifications for the superior performance of 2BG are provided.
Regenerative Simulation for the Bayesian Lasso.
TLDR
It is shown that for the Bayesian Lasso model, the regenerative method is a viable and theoretically justified alternative to the existing ad-hoc MCMC convergence diagnostics.
Scalable MCMC for Bayes Shrinkage Priors
TLDR
This paper proposes an MCMC algorithm for computation in high-dimensional models that combines blocked Gibbs, Metropolis-Hastings, and slice sampling, and shows the scalability of the algorithm in simulations with up to 20,000 predictors.
Geometric Ergodicity of Gibbs Samplers in Bayesian Penalized Regression Models
  • D. Vats
  • Mathematics, Computer Science
  • 2016
TLDR
Geometric ergodicity, along with a moment condition, implies the existence of a Markov chain central limit theorem for Monte Carlo averages and thereby ensures reliable output analysis.
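For context, the Markov chain central limit theorem referred to in this summary can be stated (in standard textbook form, not quoted from the cited work) as

\[
\sqrt{m}\,\bigl(\bar g_m - \mathbb{E}_\pi[g]\bigr) \;\xrightarrow{d}\; \mathrm{N}\bigl(0, \sigma_g^2\bigr),
\qquad
\bar g_m = \frac{1}{m}\sum_{t=1}^{m} g(\theta_t),
\qquad
\sigma_g^2 = \operatorname{Var}_\pi\{g(\theta_0)\} + 2\sum_{k=1}^{\infty}\operatorname{Cov}_\pi\{g(\theta_0), g(\theta_k)\},
\]

which holds, in particular, when the chain \((\theta_t)\) is geometrically ergodic and \(\mathbb{E}_\pi|g|^{2+\delta} < \infty\) for some \(\delta > 0\); the batch-means and spectral variance estimators cited among the references below estimate \(\sigma_g^2\).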
Adapting to Sparsity and Heavy Tailed Data
TLDR
This thesis proposes a fully Bayesian method called √DL that achieves scale invariance and robustness to heavy tails while maintaining computational efficiency and provides an efficient Gibbs sampling scheme based on the Normal scale mixture representation of Laplace densities.
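The normal scale-mixture representation of the Laplace density mentioned here is the classical Andrews-Mallows identity, written below in one generic parametrization (which may differ from the thesis's notation):

\[
\frac{\lambda}{2}\, e^{-\lambda|\beta|}
= \int_0^{\infty} \frac{1}{\sqrt{2\pi\tau^2}}
\exp\!\Bigl(-\frac{\beta^2}{2\tau^2}\Bigr)\,
\frac{\lambda^2}{2}\exp\!\Bigl(-\frac{\lambda^2\tau^2}{2}\Bigr)\, d\tau^2,
\]

so a Laplace coefficient is a normal coefficient whose variance has an Exponential(\(\lambda^2/2\)) mixing distribution; conditioning on these latent variances is what yields tractable Gibbs updates.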
Coupling‐based convergence assessment of some Gibbs samplers for high‐dimensional Bayesian regression with shrinkage priors
TLDR
Coupling techniques tailored to the setting of high-dimensional regression with shrinkage priors are developed, which enable practical, non-asymptotic diagnostics of convergence without relying on traceplots or long-run asymptotics.
Approximate Gibbs sampler for Bayesian Huberized lasso
TLDR
A new posterior computation algorithm for Bayesian Huberized lasso regression is proposed, based on an approximation of the full conditional distributions, which also makes it possible to estimate the tuning parameter controlling the robustness of the pseudo-Huber loss function.
Scalable Bayesian shrinkage and uncertainty quantification in high-dimensional regression
TLDR
The proposed two-step sampler is proved to be geometrically ergodic, explicit upper bounds for the (geometric) rate of convergence are derived, and it is demonstrated theoretically that while the original Bayesian lasso chain is not Hilbert-Schmidt, the proposed chain is trace class (and hence Hilbert-Schmidt).
...
...

References

Penalized regression, standard errors, and Bayesian lassos
TLDR
The performance of the Bayesian lassos is compared to their frequentist counterparts using simulations, data sets that previous lasso papers have used, and a difficult modeling problem for predicting the collapse of governments around the world.
Inference with normal-gamma prior distributions in regression problems
This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken
Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction
We study the classic problem of choosing a prior distribution for a location parameter β = (β1, …, βp) as p grows large. First, we study the standard "global-local shrinkage" approach, based on
The Bayesian Lasso
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
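Concretely, the correspondence this reference establishes can be written (for fixed \(\sigma^2\), in one standard parametrization rather than the paper's exact notation) as

\[
\hat\beta_{\mathrm{lasso}}
= \arg\min_{\beta}\,\bigl\{\|y - X\beta\|_2^2 + \lambda\|\beta\|_1\bigr\}
= \arg\max_{\beta}\;\exp\!\Bigl(-\tfrac{\|y - X\beta\|_2^2}{2\sigma^2}\Bigr)
\prod_{j=1}^{p}\frac{\tilde\lambda}{2}\,e^{-\tilde\lambda|\beta_j|},
\qquad \tilde\lambda = \frac{\lambda}{2\sigma^2},
\]

i.e., the lasso solution is the posterior mode under independent Laplace priors on the coefficients.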
General state space Markov chains and MCMC algorithms
This paper surveys various results about Markov chains on gen- eral (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the
The horseshoe estimator for sparse signals
This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator's advantages over
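For comparison with the Laplace prior above, the horseshoe prior is usually written as the normal scale-mixture hierarchy (one common fully Bayesian specification, paraphrased rather than quoted):

\[
\beta_j \mid \lambda_j, \tau \sim \mathrm{N}\bigl(0, \lambda_j^2\tau^2\bigr),
\qquad
\lambda_j \sim \mathrm{C}^{+}(0,1),
\qquad
\tau \sim \mathrm{C}^{+}(0,1),
\]

where \(\mathrm{C}^{+}(0,1)\) denotes the standard half-Cauchy distribution; the heavy-tailed local scales \(\lambda_j\) leave large signals essentially unshrunk, while the global scale \(\tau\) pulls noise coefficients toward zero.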
Batch means and spectral variance estimators in Markov chain Monte Carlo
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based
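A minimal Python illustration of the batch-means MCSE estimator discussed in this reference, using the common default of batches of length roughly sqrt(n) (the batch-size rule and function name are assumptions for this example, not necessarily the paper's recommendation):

import numpy as np

def batch_means_mcse(draws):
    # Batch-means Monte Carlo standard error for the mean of a 1-D chain.
    # Splits the chain into batches of length ~sqrt(n) and uses the spread of
    # the batch means to estimate the asymptotic variance in the MCMC CLT.
    draws = np.asarray(draws, dtype=float)
    n = draws.size
    b = int(np.floor(np.sqrt(n)))                  # batch length
    a = n // b                                     # number of batches
    batch_means = draws[: a * b].reshape(a, b).mean(axis=1)
    sigma2_hat = b * np.var(batch_means, ddof=1)   # estimate of sigma_g^2
    return np.sqrt(sigma2_hat / n)                 # MCSE of the overall sample mean

For example, batch_means_mcse(bayesian_lasso_gibbs(y, X)[:, 0]) would attach an asymptotic standard error to the posterior-mean estimate of the first coefficient, which is precisely the kind of report that geometric ergodicity justifies.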
Fixed-Width Output Analysis for Markov Chain Monte Carlo
Markov chain Monte Carlo is a method of producing a correlated sample to estimate features of a target distribution through ergodic averages. A fundamental question is when sampling should stop; that
Handling Sparsity via the Horseshoe
TLDR
This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior, which is a member of the family of multivariate scale mixtures of normals and closely related to widely used approaches for sparse Bayesian learning.
Adaptive Sparseness for Supervised Learning
TLDR
A Bayesian approach to supervised learning that leads to sparse solutions, that is, solutions in which irrelevant parameters are automatically set exactly to zero, and that involves no tuning or adjustment of sparseness-controlling hyperparameters.
...
...