Efficient Sampling for Gaussian Linear Regression With Arbitrary Priors

@article{Hahn2018EfficientSF,
  title={Efficient Sampling for Gaussian Linear Regression With Arbitrary Priors},
  author={P. Richard Hahn and Jingyu He and Hedibert Freitas Lopes},
  journal={Journal of Computational and Graphical Statistics},
  year={2018},
  volume={28},
  pages={142--154}
}
ABSTRACT: This article develops a slice sampler for Bayesian linear regression models with arbitrary priors. The new sampler has two advantages over current approaches. One, it is faster than many custom implementations that rely on auxiliary latent variables, if the number of regressors is large. Two, it can be used with any prior with a density function that can be evaluated up to a normalizing constant, making it ideal for investigating the properties of new shrinkage priors without having to…
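
The abstract's key requirement is only that the prior density be evaluable up to a normalizing constant. Below is a minimal sketch of one update consistent with that description, assuming the elliptical-slice-sampling construction in which the Gaussian likelihood (recentred at the least-squares estimate) supplies the ellipse and the arbitrary prior plays the role of the ESS "likelihood"; this is a sketch, not necessarily the authors' exact algorithm, and all names are illustrative:

```python
import numpy as np

def ess_update(beta, mu, chol_sigma, log_prior, rng):
    """One elliptical slice sampling update targeting
    p(beta) proportional to N(beta; mu, Sigma) * exp(log_prior(beta)),
    where chol_sigma is a lower Cholesky factor of Sigma."""
    nu = mu + chol_sigma @ rng.standard_normal(beta.size)  # point defining the ellipse
    log_y = log_prior(beta) + np.log(rng.uniform())        # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)                  # initial angle and bracket
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        prop = (beta - mu) * np.cos(theta) + (nu - mu) * np.sin(theta) + mu
        if log_prior(prop) > log_y:                        # point lies on the slice
            return prop
        if theta < 0.0:                                    # otherwise shrink the bracket
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```

For Gaussian regression one would take mu to be the least-squares estimate and Sigma = sigma^2 (X'X)^{-1}, with log_prior any log density known up to a constant (Laplace, horseshoe, and so on), which is what makes the approach prior-agnostic.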

Scalable MCMC for Bayes Shrinkage Priors

This paper proposes an MCMC algorithm for computation in high-dimensional models that combines blocked Gibbs, Metropolis-Hastings, and slice sampling, and shows the scalability of the algorithm in simulations with up to 20,000 predictors.

Geometric convergence of elliptical slice sampling

Under weak regularity assumptions on the posterior density, the corresponding Markov chain is shown to be geometrically ergodic, yielding qualitative convergence guarantees; experiments further indicate dimension-independent performance of elliptical slice sampling even in situations where the ergodicity result does not apply.

Testing Sparsity-Inducing Penalties

Many penalized maximum likelihood estimators correspond to posterior mode estimators under specific prior distributions. Appropriateness of a particular class of penalty functions can…

The reciprocal Bayesian LASSO

It is shown that the Bayesian formulation of the rLASSO problem outperforms its classical cousin in estimation, prediction, and variable selection across a wide range of scenarios while offering the advantage of posterior inference.

Prior-preconditioned conjugate gradient method for accelerated Gibbs sampling in ‘large n & large p’ Bayesian sparse regression

This article presents a novel algorithm, built on a theory of prior-preconditioning, to speed up posterior computation in sparse regression applications, in one case cutting the computation time from two weeks to less than a day; the method is applied to a clinically relevant large-scale observational study designed to assess the relative risk of adverse events from two alternative blood anti-coagulants.
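
The expensive step in such Gibbs samplers is the p-dimensional Gaussian draw for the regression coefficients. Here is a hedged sketch of the idea, assuming the standard reformulation of that draw as a linear solve and using the prior precision as the conjugate gradient preconditioner; the helper below is illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sample_coefficients(X, y, prior_prec, sigma2, rng):
    """Draw beta ~ N(A^{-1} c, A^{-1}) with A = X'X / sigma2 + diag(prior_prec)
    and c = X'y / sigma2, via a prior-preconditioned conjugate gradient solve."""
    n, p = X.shape
    c = X.T @ y / sigma2
    # z ~ N(0, A), assembled from two independent standard normal vectors
    z = X.T @ rng.standard_normal(n) / np.sqrt(sigma2) \
        + np.sqrt(prior_prec) * rng.standard_normal(p)
    A = LinearOperator((p, p), dtype=float,
                       matvec=lambda v: X.T @ (X @ v) / sigma2 + prior_prec * v)
    M = LinearOperator((p, p), dtype=float,
                       matvec=lambda v: v / prior_prec)  # preconditioner = prior variance
    beta, info = cg(A, c + z, M=M)                       # A^{-1}(c + z) has the target law
    return beta
```

Under strong shrinkage the prior precision dominates A in most coordinates, so preconditioning by it clusters the spectrum and CG converges in few matrix-vector products.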

Bayesian Matrix Completion Approach to Causal Inference with Panel Data

This study proposes a new Bayesian approach to inferring binary treatment effects by completing a matrix composed of realized and potential untreated outcomes via data augmentation, and develops a tailored prior that aids the identification of parameters.

Efficient Bayesian Inference for Nonlinear State Space Models With Univariate Autoregressive State Equation

A Gibbs sampler for general nonlinear state space models with a univariate autoregressive state equation is presented; a comparison with relevant benchmark models, such as DCC-GARCH or a Student’s t copula model, with respect to predictive accuracy shows the superior performance of the proposed approach.

Faster MCMC for Gaussian latent position network models

This article proposes an alternative Markov chain Monte Carlo strategy, defined as a combination of split Hamiltonian Monte Carlo and Firefly Monte Carlo, that leverages the posterior distribution’s functional form for more efficient posterior computation, and demonstrates that this strategy outperforms Metropolis-within-Gibbs and other algorithms on synthetic networks.

The Hastings algorithm at fifty

The majority of algorithms used in practice today involve the Hastings algorithm, which generalizes the Metropolis algorithm by allowing a much broader class of proposal distributions in place of symmetric ones.
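
Concretely, the Hastings correction multiplies the Metropolis ratio by q(x | x') / q(x' | x), which equals one for symmetric proposals. A generic sketch (names illustrative):

```python
import numpy as np

def mh_step(x, log_target, propose, log_q, rng):
    """One Metropolis-Hastings update with a possibly asymmetric proposal.
    log_q(a, b) is the log proposal density of a given current state b."""
    x_new = propose(x, rng)
    log_alpha = (log_target(x_new) - log_target(x)
                 + log_q(x, x_new) - log_q(x_new, x))   # Hastings correction
    if np.log(rng.uniform()) < log_alpha:
        return x_new                                    # accept the proposal
    return x                                            # reject: keep current state
```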

References

Showing 1–10 of 31 references

Bayesian Factor Model Shrinkage for Linear IV Regression With Many Instruments

A slice sampler is developed that leverages a decomposition of the likelihood function which serves as a Bayesian analogue to two-stage least squares, together with a new predictor-dependent shrinkage prior designed specifically for the many-instruments setting.

Generalized Beta Mixtures of Gaussians

A new class of normal scale mixtures is proposed through a novel generalized beta distribution that encompasses many interesting priors as special cases, and a class of variational Bayes approximations is developed that scales more efficiently to the types of truly massive data sets now encountered routinely.

Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction

We study the classic problem of choosing a prior distribution for a location parameter β = (β1, …, βp) as p grows large. First, we study the standard “global-local shrinkage” approach, based on…

Inference with normal-gamma prior distributions in regression problems

This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken…

Dirichlet–Laplace Priors for Optimal Shrinkage

This article proposes a new class of Dirichlet–Laplace priors, which possess optimal posterior concentration and lead to efficient posterior computation.
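
As a rough illustration of the hierarchy, here is a draw from a Dirichlet–Laplace-type prior under the commonly cited DL(a) parameterization, beta_j | phi, tau ~ DE(phi_j * tau) with phi ~ Dirichlet(a, …, a) and tau ~ Gamma(pa, rate 1/2); this parameterization is an assumption on my part, and the paper should be consulted for the exact specification:

```python
import numpy as np

def draw_dl_prior(p, a, rng):
    """One draw of beta in R^p from a Dirichlet-Laplace-type prior (sketch)."""
    phi = rng.dirichlet(np.full(p, a))            # local weights summing to one
    tau = rng.gamma(shape=p * a, scale=2.0)       # global scale; rate 1/2 means scale 2
    return rng.laplace(loc=0.0, scale=phi * tau)  # beta_j ~ double exponential(phi_j * tau)
```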

Elliptical slice sampling

A new Markov chain Monte Carlo algorithm for performing inference in models with multivariate Gaussian priors is presented, which has simple, generic code applicable to many models, and works well for a variety of Gaussian process based models.

The Bayesian Lasso

The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
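
The stated mode equivalence follows directly from the log posterior: with Gaussian errors and independent double-exponential priors pi(beta_j) proportional to exp(-lambda |beta_j|),

```latex
% Negative log posterior under independent Laplace priors:
-\log p(\beta \mid y)
  = \frac{1}{2\sigma^2}\,\lVert y - X\beta \rVert_2^2
  + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert + \text{const},
```

so the posterior mode minimizes the lasso objective with penalty parameter 2σ²λ.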

GENERALIZED DOUBLE PARETO SHRINKAGE.

As sparse estimation plays an important role in many problems, the properties of the maximum a posteriori estimator are investigated, connections with some well-established regularization procedures are revealed, and some asymptotic results are shown.

Bayesian lasso regression

New aspects of the broader Bayesian treatment of lasso regression are introduced, and it is shown that the standard lasso prediction method does not necessarily agree with model-based, Bayesian predictions.

Scalable MCMC for Bayes Shrinkage Priors

This paper proposes an MCMC algorithm for computation in high-dimensional models that combines blocked Gibbs, Metropolis-Hastings, and slice sampling, and shows the scalability of the algorithm in simulations with up to 20,000 predictors.