Geometric ergodicity of the Bayesian lasso
@article{Khare2013GeometricEO,
  title   = {Geometric ergodicity of the Bayesian lasso},
  author  = {Kshitij Khare and James P. Hobert},
  journal = {Electronic Journal of Statistics},
  year    = {2013},
  volume  = {7},
  pages   = {2150--2163}
}
Consider the standard linear model y = Xβ + σε, where the components of ε are iid standard normal errors. Park and Casella [14] consider a Bayesian treatment of this model with a Laplace/Inverse-Gamma prior on (β, σ²). They introduce a Data Augmentation approach that can be used to explore the resulting intractable posterior density, and call it the Bayesian lasso algorithm. In this paper, the Markov chain underlying the Bayesian lasso algorithm is shown to be geometrically ergodic, for arbitrary…
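The data-augmentation (Gibbs) sampler referred to above can be sketched from the standard Park–Casella full conditionals. This is a minimal illustration, not the paper's implementation: the function name, the fixed regularization parameter `lam`, and the initialization are assumptions introduced here.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, seed=0):
    """Sketch of the Park-Casella data-augmentation sampler for the
    Bayesian lasso (fixed lambda); returns the stored beta draws."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    sigma2 = 1.0
    inv_tau2 = np.ones(p)          # augmented variables 1/tau_j^2
    draws = []
    for _ in range(n_iter):
        # beta | sigma2, tau2, y ~ N(A^{-1} X'y, sigma2 * A^{-1}),
        # with A = X'X + diag(1/tau_j^2)
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        cov = sigma2 * (A_inv + A_inv.T) / 2.0   # symmetrize for stability
        beta = rng.multivariate_normal(A_inv @ Xty, cov)
        # 1/tau_j^2 | beta, sigma2 ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / beta**2)
        inv_tau2 = rng.wald(mu, lam**2)
        # sigma2 | beta, tau2, y ~ Inverse-Gamma
        resid = y - X @ beta
        shape = (n - 1) / 2.0 + p / 2.0
        scale = 0.5 * (resid @ resid + np.sum(beta**2 * inv_tau2))
        sigma2 = scale / rng.gamma(shape)
        draws.append(beta.copy())
    return np.array(draws)
```

The geometric ergodicity result of the paper concerns the Markov chain produced by exactly this kind of alternating scheme; the sketch uses numpy's `wald` generator for the Inverse-Gaussian draws.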
39 Citations
Fast Bayesian Lasso for High-Dimensional Regression
- Computer Science
- 2015
A theoretical underpinning to the new method is provided by proving rigorously that the fast Bayesian lasso is geometrically ergodic, and it is demonstrated numerically that this blocked sampler exhibits vastly superior convergence behavior in high-dimensional regimes.
Geometric ergodicity of Gibbs samplers for the Horseshoe and its regularized variants
- Mathematics
- 2021
The Horseshoe is a widely used and popular continuous shrinkage prior for high-dimensional Bayesian linear regression. Recently, regularized versions of the Horseshoe prior have also been introduced…
Fast Markov Chain Monte Carlo for High-Dimensional Bayesian Regression Models With Shrinkage Priors
- Computer Science, Mathematics; J. Comput. Graph. Stat.
- 2021
The newly proposed 2BG is the only practical computing solution for Bayesian shrinkage analysis of datasets with large p, and theoretical justifications for the superior performance of 2BG are provided.
Regenerative Simulation for the Bayesian Lasso.
- Computer Science
- 2018
It is shown that for the Bayesian Lasso model, the regenerative method is a viable and theoretically justified alternative to the existing ad-hoc MCMC convergence diagnostics.
Scalable MCMC for Bayes Shrinkage Priors
- Computer Science
- 2017
This paper proposes an MCMC algorithm for computation in high-dimensional models that combines blocked Gibbs, Metropolis-Hastings, and slice sampling, and shows the scalability of the algorithm in simulations with up to 20,000 predictors.
Geometric Ergodicity of Gibbs Samplers in Bayesian Penalized Regression Models
- Mathematics, Computer Science
- 2016
Geometric ergodicity along with a moment condition results in the existence of a Markov chain central limit theorem for Monte Carlo averages and ensures reliable output analysis.
Adapting to Sparsity and Heavy Tailed Data
- Computer Science
- 2018
This thesis proposes a fully Bayesian method called √DL that achieves scale invariance and robustness to heavy tails while maintaining computational efficiency, and provides an efficient Gibbs sampling scheme based on the normal scale-mixture representation of Laplace densities.
Coupling‐based convergence assessment of some Gibbs samplers for high‐dimensional Bayesian regression with shrinkage priors
- Mathematics, Computer Science; Journal of the Royal Statistical Society: Series B (Statistical Methodology)
- 2022
Coupling techniques tailored to the setting of high-dimensional regression with shrinkage priors are developed, which enable practical, non-asymptotic diagnostics of convergence without relying on traceplots or long-run asymptotics.
Approximate Gibbs sampler for Bayesian Huberized lasso
- Computer Science
- 2022
A new posterior computation algorithm for Bayesian Huberized lasso regression is proposed, based on an approximation of the full conditional distributions; the approach also makes it possible to estimate a tuning parameter controlling the robustness of the pseudo-Huber loss function.
Scalable Bayesian shrinkage and uncertainty quantification in high-dimensional regression
- Computer Science
- 2017
It is proved that the proposed two-step sampler is geometrically ergodic, and explicit upper bounds for the (geometric) rate of convergence are derived; it is also demonstrated theoretically that while the original Bayesian lasso chain is not Hilbert–Schmidt, the proposed chain is trace class (and hence Hilbert–Schmidt).
References
SHOWING 1-10 OF 21 REFERENCES
Penalized regression, standard errors, and Bayesian lassos
- Computer Science
- 2010
The performance of the Bayesian lassos is compared to their frequentist counterparts using simulations, data sets that previous lasso papers have used, and a difficult modeling problem for predicting the collapse of governments around the world.
Inference with normal-gamma prior distributions in regression problems
- Mathematics
- 2010
This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken…
Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction
- Mathematics
- 2012
We study the classic problem of choosing a prior distribution for a location parameter β = (β1, …, βp) as p grows large. First, we study the standard “global-local shrinkage” approach, based on…
The Bayesian Lasso
- Computer Science, Mathematics
- 2008
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.…
General state space Markov chains and MCMC algorithms
- Mathematics, Computer Science
- 2004
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the…
The horseshoe estimator for sparse signals
- Mathematics
- 2010
This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator's advantages over…
Batch means and spectral variance estimators in Markov chain Monte Carlo
- Mathematics
- 2010
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based…
Fixed-Width Output Analysis for Markov Chain Monte Carlo
- Mathematics
- 2006
Markov chain Monte Carlo is a method of producing a correlated sample to estimate features of a target distribution through ergodic averages. A fundamental question is when sampling should stop; that…
Handling Sparsity via the Horseshoe
- Computer Science; AISTATS
- 2009
This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior, which is a member of the family of multivariate scale mixtures of normals and closely related to widely used approaches for sparse Bayesian learning.
Adaptive Sparseness for Supervised Learning
- Computer Science; IEEE Trans. Pattern Anal. Mach. Intell.
- 2003
A Bayesian approach to supervised learning is presented that leads to sparse solutions, i.e., irrelevant parameters are automatically set exactly to zero, and that involves no tuning or adjustment of sparseness-controlling hyperparameters.