Geometric ergodicity of the Bayesian lasso

@article{Khare2013GeometricEO,
  title={Geometric ergodicity of the Bayesian lasso},
  author={Kshitij Khare and James P. Hobert},
  journal={Electronic Journal of Statistics},
  year={2013},
  volume={7},
  pages={2150-2163}
}
  • K. Khare, J. Hobert
  • Published 2013
  • Mathematics, Computer Science
  • Electronic Journal of Statistics
Consider the standard linear model y = Xβ + σε, where the components of ε are iid standard normal errors. Park and Casella [14] consider a Bayesian treatment of this model with a Laplace/Inverse-Gamma prior on (β, σ²). They introduce a Data Augmentation approach that can be used to explore the resulting intractable posterior density, and call it the Bayesian lasso algorithm. In this paper, the Markov chain underlying the Bayesian lasso algorithm is shown to be geometrically ergodic, for arbitrary… 
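The data-augmentation (Gibbs) sampler alternates between β, σ², and the latent scales τ₁², …, τₚ². Below is a minimal NumPy sketch of such a cycle; the function name, the fixed value of λ, the omission of an intercept, and the improper 1/σ² prior are simplifying assumptions for illustration, not details taken from the paper.

import numpy as np

def bayesian_lasso_gibbs(y, X, lam=1.0, n_iter=5000, seed=0):
    """Minimal sketch of a Park-Casella-style data-augmentation sampler.
    Assumes a fixed lam, no intercept, and the improper 1/sigma^2 prior."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.zeros(p)
    sigma2 = 1.0
    inv_tau2 = np.ones(p)                        # 1/tau_j^2, the augmented scale variables
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | sigma2, tau, y ~ N(A^{-1} X'y, sigma2 * A^{-1}), A = X'X + D_tau^{-1}
        A = XtX + np.diag(inv_tau2)
        L = np.linalg.cholesky(np.linalg.inv(A))  # fine for modest p; a solve is preferable at scale
        mean = np.linalg.solve(A, Xty)
        beta = mean + np.sqrt(sigma2) * (L @ rng.standard_normal(p))
        # sigma2 | beta, tau, y ~ Inverse-Gamma((n + p)/2, rate) under the 1/sigma^2 prior
        resid = y - X @ beta
        rate = 0.5 * (resid @ resid + beta @ (inv_tau2 * beta))
        sigma2 = 1.0 / rng.gamma((n + p) / 2.0, 1.0 / rate)
        # 1/tau_j^2 | beta, sigma2 ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))  # guard against beta_j = 0
        inv_tau2 = rng.wald(mu, lam**2)
        draws[t] = beta
    return draws

Each update is a standard draw (multivariate normal, inverse gamma, inverse Gaussian), which is what makes the chain easy to run; its convergence rate is the theoretical question the paper addresses.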
Fast Markov Chain Monte Carlo for High-Dimensional Bayesian Regression Models With Shrinkage Priors
TLDR
The newly proposed 2BG is the only practical computing solution for Bayesian shrinkage analysis on datasets with large p, and theoretical justifications for the superior performance of 2BG are provided.
Regenerative Simulation for the Bayesian Lasso.
TLDR
It is shown that for the Bayesian Lasso model, the regenerative method is a viable and theoretically justified alternative to the existing ad-hoc MCMC convergence diagnostics.
Geometric Ergodicity of Gibbs Samplers in Bayesian Penalized Regression Models
  • D. Vats
  • Mathematics, Computer Science
  • 2016
TLDR
Geometric ergodicity along with a moment condition results in the existence of a Markov chain central limit theorem for Monte Carlo averages and ensures reliable output analysis.
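For context, the Markov chain CLT invoked here is the standard one (not a statement specific to this paper): for an ergodic average $\bar{g}_n = n^{-1}\sum_{i=0}^{n-1} g(X_i)$,

$$\sqrt{n}\,(\bar{g}_n - \mathrm{E}_\pi g) \xrightarrow{d} \mathrm{N}(0, \sigma_g^2), \qquad \sigma_g^2 = \operatorname{Var}_\pi g(X_0) + 2\sum_{k=1}^{\infty} \operatorname{Cov}_\pi\bigl(g(X_0), g(X_k)\bigr),$$

and reliable output analysis amounts to consistently estimating $\sigma_g^2$, e.g. by batch means or spectral variance estimators.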
Adapting to Sparsity and Heavy Tailed Data
TLDR
This thesis proposes a fully Bayesian method called √DL that achieves scale invariance and robustness to heavy tails while maintaining computational efficiency, and provides an efficient Gibbs sampling scheme based on the normal scale-mixture representation of Laplace densities.
Approximate Gibbs sampler for Bayesian Huberized lasso
TLDR
A new posterior computation algorithm for Bayesian Huberized lasso regression is proposed, based on an approximation of the full conditional distribution, and it makes it possible to estimate the tuning parameter controlling the robustness of the pseudo-Huber loss function.
Scalable Bayesian shrinkage and uncertainty quantification in high-dimensional regression
TLDR
The proposed two-step sampler is proved to be geometrically ergodic, explicit upper bounds for the (geometric) rate of convergence are derived, and it is demonstrated that while the original Bayesian lasso chain is not Hilbert-Schmidt, the proposed chain is trace class (and hence Hilbert-Schmidt).
Bayesian Penalized Regression
TLDR
A fully Bayesian approach that allows selection of the penalty through posterior inference, if desired, is introduced; a component-wise Markov chain Monte Carlo algorithm is developed for sampling; and conditional and marginal posterior consistency for the Bayesian model are established.
Selection of Tuning Parameters, Solution Paths and Standard Errors for Bayesian Lassos
Penalized regression methods such as the lasso and elastic net (EN) have become popular for simultaneous variable selection and coefficient estimation. Implementation of these methods requires …
Scalable Approximate MCMC Algorithms for the Horseshoe Prior
TLDR
The empirical results show that the new algorithm yields estimates with lower mean squared error and intervals with better coverage, and elucidates features of the posterior that were often missed by previous algorithms in high dimensions, including bimodality of posterior marginals indicating uncertainty about which covariates belong in the model.
Multivariate Output Analysis for Markov Chain Monte Carlo
Markov chain Monte Carlo (MCMC) produces a correlated sample in order to estimate expectations with respect to a target distribution. A fundamental question is when should sampling stop so that we …
…

References

SHOWING 1-10 OF 21 REFERENCES
Penalized regression, standard errors, and Bayesian lassos
TLDR
The performance of the Bayesian lassos is compared to their frequentist counterparts using simulations, data sets that previous lasso papers have used, and a difficult modeling problem for predicting the collapse of governments around the world.
Inference with normal-gamma prior distributions in regression problems
This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken …
Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction
We study the classic problem of choosing a prior distribution for a location parameter β = (β1, …, βp) as p grows large. First, we study the standard “global-local shrinkage” approach, based on …
The Bayesian Lasso
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
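Spelling out that equivalence (a standard calculation, not quoted from the paper): with $y \mid \beta \sim \mathrm{N}(X\beta, \sigma^2 I)$ and independent Laplace$(0, \tau)$ priors on the coefficients,

$$-2\sigma^2 \log \pi(\beta \mid y) = \|y - X\beta\|_2^2 + \frac{2\sigma^2}{\tau}\,\|\beta\|_1 + \text{const},$$

so the posterior mode is exactly the lasso solution with penalty $\lambda = 2\sigma^2/\tau$.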
General state space Markov chains and MCMC algorithms
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the …
Batch means and spectral variance estimators in Markov chain Monte Carlo
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based …
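As a concrete illustration of the batch-means construction (a generic sketch, not code from this reference; the function name is illustrative), the MCSE of an ergodic average can be estimated as follows.

import numpy as np

def batch_means_mcse(chain, n_batches=30):
    # Split the chain into equal-length batches, estimate the asymptotic
    # variance from the spread of the batch means, and return the MCSE.
    chain = np.asarray(chain, dtype=float)
    n = len(chain) - (len(chain) % n_batches)   # trim so batches divide evenly
    b = n // n_batches                          # batch length
    batch_means = chain[:n].reshape(n_batches, b).mean(axis=1)
    var_hat = b * batch_means.var(ddof=1)       # estimates sigma_g^2 in the Markov chain CLT
    return np.sqrt(var_hat / n)                 # MCSE of the overall sample mean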
Fixed-Width Output Analysis for Markov Chain Monte Carlo
Markov chain Monte Carlo is a method of producing a correlated sample to estimate features of a target distribution through ergodic averages. A fundamental question is when sampling should stop; that …
Handling Sparsity via the Horseshoe
TLDR
This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior, which is a member of the family of multivariate scale mixtures of normals and closely related to widely used approaches for sparse Bayesian learning.
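For reference, the scale-mixture representation behind the horseshoe prior is the standard hierarchy (notation assumed, not quoted from the paper):

$$\beta_j \mid \lambda_j, \tau \sim \mathrm{N}(0, \lambda_j^2 \tau^2), \qquad \lambda_j \sim \mathrm{C}^{+}(0, 1),$$

with half-Cauchy local scales $\lambda_j$ and a global scale $\tau$ controlling overall shrinkage.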
Adaptive Sparseness for Supervised Learning
TLDR
A Bayesian approach to supervised learning is presented that leads to sparse solutions, that is, in which irrelevant parameters are automatically set exactly to zero, and that involves no tuning or adjustment of sparseness-controlling hyperparameters.
Minorization Conditions and Convergence Rates for Markov Chain Monte Carlo
General methods are provided for analyzing the convergence of discrete-time, general state-space Markov chains, such as those used in stochastic simulation algorithms including the Gibbs sampler …
…