The Bayesian Lasso

@article{Park2008TheBL,
  title={The Bayesian Lasso},
  author={Trevor H Park and George Casella},
  journal={Journal of the American Statistical Association},
  year={2008},
  volume={103},
  pages={681--686}
}
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors. Gibbs sampling from this posterior is possible using an expanded hierarchy with conjugate normal priors for the regression parameters and independent exponential priors on their variances. A connection with the inverse-Gaussian distribution provides tractable full conditional distributions. The… 
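The hierarchy described in this abstract yields a three-block Gibbs sampler. Below is a minimal Python sketch of that sampler, assuming a fixed lambda and the improper 1/sigma^2 prior on the error variance (the paper also treats lambda by empirical Bayes or a gamma hyperprior); the function name and defaults are illustrative.

```python
# Minimal sketch of the Park-Casella Gibbs sampler (fixed lambda assumed).
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start at least squares
    sigma2, inv_tau2 = 1.0, np.ones(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + D_tau^{-1}
        A = XtX + np.diag(inv_tau2)
        mean = np.linalg.solve(A, Xty)
        L = np.linalg.cholesky(np.linalg.inv(A))  # fine for small p
        beta = mean + np.sqrt(sigma2) * (L @ rng.standard_normal(p))
        # sigma2 | rest: inverse gamma (improper 1/sigma2 prior assumed here)
        resid = y - X @ beta
        scale = 0.5 * (resid @ resid + beta @ (inv_tau2 * beta))
        sigma2 = scale / rng.gamma(0.5 * (n - 1 + p))
        # 1/tau_j^2 | rest ~ inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2),
        # the tractable full conditional the abstract refers to
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        inv_tau2 = rng.wald(mu, lam**2)           # numpy's wald is the inverse Gaussian
        draws[it] = beta
    return draws
```

Posterior medians or means of `draws` give Bayesian Lasso point estimates; unlike the lasso posterior mode, they are not exactly zero.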
Bayesian lasso regression
TLDR
New aspects of the broader Bayesian treatment of lasso regression are introduced, and it is shown that the standard lasso prediction method does not necessarily agree with model-based Bayesian predictions.
A New Bayesian Lasso.
TLDR
This paper considers a fully Bayesian treatment that leads to a new Gibbs sampler with tractable full conditional posterior distributions, and shows that the new algorithm has good mixing properties and performs comparably to the existing Bayesian method in terms of both prediction accuracy and variable selection.
Priors on the Variance in Sparse Bayesian Learning; the demi-Bayesian Lasso
TLDR
This work outlines simple modifications of existing algorithms to solve this new variant, which essentially uses type-II maximum likelihood to fit the Bayesian Lasso model, and proposes an Elastic-net heuristic to help with modeling correlated inputs.
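For context, type-II maximum likelihood (evidence maximization) in a sparse Bayesian linear model follows the well-known Tipping-style updates sketched below. This is the generic technique the TLDR names, not the demi-Bayesian Lasso algorithm itself, whose exact updates the snippet does not reproduce; the function name and pruning threshold are illustrative.

```python
# Generic sparse Bayesian learning fit by type-II maximum likelihood.
import numpy as np

def sbl_type2(X, y, n_iter=100, prune=1e6):
    n, p = X.shape
    alpha = np.ones(p)                  # per-coefficient prior precisions
    sigma2 = np.var(y)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(X.T @ X / sigma2 + np.diag(alpha))
        mu = Sigma @ X.T @ y / sigma2   # posterior mean of the coefficients
        gamma = 1.0 - alpha * np.diag(Sigma)       # effective d.o.f. per coefficient
        alpha = gamma / np.maximum(mu**2, 1e-12)   # evidence (type-II ML) update
        sigma2 = np.sum((y - X @ mu) ** 2) / max(n - gamma.sum(), 1e-12)
    mu[alpha > prune] = 0.0             # diverging precision: coefficient pruned to zero
    return mu
```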
The Bayesian adaptive lasso regression.
Approximate Gibbs sampler for Bayesian Huberized lasso
TLDR
A new posterior computation algorithm for Bayesian Huberized lasso regression is proposed, based on an approximation of the full conditional distribution, which makes it possible to estimate the tuning parameter that controls the robustness of the pseudo-Huber loss function.
Sparsity via new Bayesian Lasso
TLDR
This paper proposes a scale mixture of normals with Rayleigh mixing densities on their variances (SMNR) to represent the double-exponential distribution, and presents a hierarchical model formulation with a Gibbs sampler under SMNR as an alternative Bayesian analysis of the classical lasso minimization problem.
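A quick Monte Carlo check of this representation, assuming the Rayleigh mixing acts on the standard deviation (equivalently, an exponential density on the variance, the Andrews-Mallows mixing that recovers the double exponential); the scale choice is illustrative.

```python
# A normal whose standard deviation is Rayleigh-distributed is marginally Laplace.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
s = 1.0                                    # Rayleigh scale parameter
sigma = rng.rayleigh(scale=s, size=200_000)
beta = rng.normal(0.0, sigma)              # N(0, sigma^2) given sigma
# sigma ~ Rayleigh(s) implies sigma^2 ~ Exponential(rate 1/(2 s^2)),
# so marginally beta ~ Laplace(0, s).
print(stats.kstest(beta, stats.laplace(scale=s).cdf).pvalue)  # large p-value expected
```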
Sparse modifying algorithm in Bayesian lasso
TLDR
In the present paper the authors propose an efficient algorithm which modifies the Bayesian lasso estimates so as to be sparse, and investigate the efficiency of the proposed algorithm.
High-Dimensional Bayesian Regularised Regression with the BayesReg Package
TLDR
This paper introduces bayesreg, a new toolbox for fitting Bayesian penalized regression models with continuous shrinkage prior densities; it features Bayesian linear regression with Gaussian or heavy-tailed error models and Bayesian logistic regression, with ridge, lasso, horseshoe, and horseshoe+ estimators.
Robust Bayesian Regularized Estimation Based on Regression Model
TLDR
A new robust coefficient estimation and variable selection method based on Bayesian adaptive Lasso regression, developed within the Bayesian hierarchical model framework, where the distribution is treated as a mixture of normal and gamma distributions and different penalization parameters are placed on different regression coefficients.
Sparse Bayesian linear regression using generalized normal priors
A sparse Bayesian linear regression model is proposed that generalizes the Bayesian Lasso to a class of Bayesian models with scale mixtures of normal distributions as priors for the regression coefficients.

References

SHOWING 1-10 OF 41 REFERENCES
Bayesian Variable Selection in Linear Regression
Abstract This article is concerned with the selection of subsets of predictor variables in a linear regression model for the prediction of a dependent variable. It is based on a Bayesian approach,
Outlier Models and Prior Distributions in Bayesian Linear Regression
SUMMARY Bayesian inference in regression models is considered using heavy-tailed error distributions to accommodate outliers. The particular class of distributions that can be constructed as
Penalized regression, standard errors, and Bayesian lassos
TLDR
The performance of the Bayesian lassos is compared to their frequentist counterparts using simulations, data sets that previous lasso papers have used, and a difficult modeling problem for predicting the collapse of governments around the world.
Regression Shrinkage and Selection via the Lasso
TLDR
A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant, is proposed.
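In practice the constrained problem in this TLDR is solved in its equivalent penalized (Lagrangian) form. A minimal sketch using scikit-learn, whose Lasso objective is (1/(2n)) * ||y - X beta||^2 + alpha * ||beta||_1; the data and the alpha value are illustrative.

```python
# Sparse estimation with the lasso in its penalized form.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)   # sparse ground truth
y = X @ beta_true + rng.standard_normal(100)

fit = Lasso(alpha=0.1).fit(X, y)
print(fit.coef_)   # most of the eight null coefficients are shrunk exactly to zero
```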
Efficient Empirical Bayes Variable Selection and Estimation in Linear Models
TLDR
Simulations and real examples show that the proposed method is very competitive in terms of variable selection, estimation accuracy, and computation speed compared with other variable selection and estimation methods.
Flexible empirical Bayes estimation for wavelets
Wavelet shrinkage estimation is an increasingly popular method for signal denoising and compression. Although Bayes estimators can provide excellent mean‐squared error (MSE) properties, the selection
APPROACHES FOR BAYESIAN VARIABLE SELECTION
This paper describes and compares various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS formulation
Variable selection via Gibbs sampling
Abstract A crucial problem in building a multiple regression model is the selection of predictors to include. The main thrust of this article is to propose and develop a procedure that uses
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
TLDR
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
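The penalty behind this oracle result is SCAD (smoothly clipped absolute deviation). A sketch of the penalty as defined in Fan and Li's article, with their suggested default a = 3.7; the function name is illustrative.

```python
# SCAD penalty: linear near zero, a quadratic blend, then constant,
# so large coefficients incur a fixed penalty and are not shrunk.
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    t = np.abs(theta)
    linear = lam * t                                           # |theta| <= lam
    quad = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))  # lam < |theta| <= a*lam
    const = (a + 1) * lam**2 / 2                               # |theta| > a*lam
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))
```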
Adaptive Sparseness for Supervised Learning
TLDR
A Bayesian approach to supervised learning which leads to sparse solutions, that is, in which irrelevant parameters are automatically set exactly to zero, and which involves no tuning or adjustment of sparseness-controlling hyperparameters.