Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior

@article{Liu2018BayesianII,
  title={Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior},
  author={Chang Liu and Yue Yang and Howard D. Bondell and Ryan Martin},
  journal={Statistica Sinica},
  year={2018}
}
In the context of a high-dimensional linear regression model, we propose an empirical correlation-adaptive prior that uses information in the observed predictor matrix to adaptively address high collinearity, determining whether parameters associated with correlated predictors should be shrunk together or kept apart. Under suitable conditions, we prove that this empirical Bayes posterior concentrates around the true sparse parameter at the optimal rate asymptotically. A… 
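As a rough, assumption-laden sketch of the idea (not the paper's exact construction), the Python snippet below builds a Zellner-type prior covariance proportional to a power of the empirical Gram matrix, in the spirit of the adaptive powered correlation prior cited in the references; the hyperparameters g and lam are illustrative only. The sign of the induced prior correlation between the coefficients of two highly correlated predictors flips with the power, which is what "shrunk together or kept apart" refers to.

# Minimal sketch (assumed parameterization, not the paper's exact prior):
# a Zellner-type prior covariance proportional to a power of the empirical
# Gram matrix, illustrating how correlated coefficients can be shrunk
# together (positive induced prior correlation) or kept apart (negative).
import numpy as np

rng = np.random.default_rng(0)

# Two highly correlated predictors plus one independent predictor.
n = 200
z = rng.standard_normal(n)
X = np.column_stack([
    z + 0.1 * rng.standard_normal(n),
    z + 0.1 * rng.standard_normal(n),
    rng.standard_normal(n),
])
X = (X - X.mean(0)) / X.std(0)   # standardize columns
G = X.T @ X / n                  # empirical correlation (Gram) matrix

def sym_matrix_power(A, t):
    """Power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def prior_cov(G, g=10.0, lam=1.0):
    """Illustrative prior covariance g * G^{-lam}.

    lam = 1 resembles a Zellner g-prior (covariance proportional to G^{-1}),
    lam = 0 gives an independence (ridge-like) prior,
    lam = -1 induces positive prior correlation between coefficients of
    positively correlated predictors, shrinking them together.
    """
    return g * sym_matrix_power(G, -lam)

for lam in (1.0, 0.0, -1.0):
    S = prior_cov(G, lam=lam)
    r12 = S[0, 1] / np.sqrt(S[0, 0] * S[1, 1])
    print(f"lam = {lam:+.1f}: induced prior corr(beta_1, beta_2) = {r12:+.2f}")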


Empirical Priors for Prediction in Sparse High-dimensional Linear Regression

A Bernstein--von Mises theorem is established, ensuring that the derived empirical Bayes prediction intervals achieve the targeted frequentist coverage probability; the proposed method also shows strong finite-sample performance in terms of prediction accuracy, uncertainty quantification, and computation time compared with existing Bayesian methods.

Variational approximations of empirical Bayes posteriors in high-dimensional linear models

A variational approximation to the empirical Bayes posterior that is fast to compute and retains the optimal concentration rate properties of the original is developed.

Empirical Priors and Coverage of Posterior Credible Sets in a Sparse Normal Mean Model

Bayesian methods provide a natural means for uncertainty quantification, that is, credible sets can be easily obtained from the posterior distribution. But is this uncertainty quantification valid in

The piranha problem: Large effects swimming in a small pond

In some scientific fields, it is common to have certain variables of interest that are of particular importance and for which there are many studies indicating a relationship with different explanatory

Strategic Bayesian Asset Allocation

Bayesian regularization is shown to provide not only stock selection but also optimal sequential portfolio weights, with tailored MCMC algorithms developed to calculate the portfolio weights and perform the selection.

References


Empirical Priors for Prediction in Sparse High-dimensional Linear Regression

A Bernstein--von Mises theorem is established, ensuring that the derived empirical Bayes prediction intervals achieve the targeted frequentist coverage probability; the proposed method also shows strong finite-sample performance in terms of prediction accuracy, uncertainty quantification, and computation time compared with existing Bayesian methods.

Empirical Bayes posterior concentration in sparse high-dimensional linear models

A new empirical Bayes approach for inference in the normal linear model is proposed, which uses the data in the prior in two ways, for centering and for regularization, and is relevant for both estimation and model selection.
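A minimal sketch of what "using the data for centering" can look like, under the assumption of a least-squares-centered Gaussian prior for a candidate model S; this is an illustration, not necessarily the paper's exact construction:

# Rough sketch (assumed construction): a data-centered Gaussian prior for
# the coefficients of a candidate model S, centered at the least-squares
# estimate under S with an inflated g-prior-style covariance for regularization.
import numpy as np

def data_centered_prior(X, y, S, gamma=10.0):
    """Return (prior_mean, prior_cov) for beta_S given model S.

    S is an index array of active predictors; gamma controls how diffuse
    the prior is around the least-squares center.
    """
    XS = X[:, S]
    GS = XS.T @ XS
    beta_hat = np.linalg.solve(GS, XS.T @ y)   # least-squares center
    prior_cov = gamma * np.linalg.inv(GS)      # regularizing covariance
    return beta_hat, prior_cov

# Tiny usage example with synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.standard_normal(100)
mean_S, cov_S = data_centered_prior(X, y, S=np.array([0, 1, 2]))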

Consistent High-Dimensional Bayesian Variable Selection via Penalized Credible Regions

This work proposes a conjugate prior only on the full model parameters and uses sparse solutions within posterior credible regions to perform selection, showing that these sparse solutions can be computed via existing algorithms.

Bayesian variable selection using an adaptive powered correlation prior.

BAYESIAN LINEAR REGRESSION WITH SPARSE PRIORS

Under compatibility conditions on the design matrix, the posterior distribution is shown to contract at the optimal rate for recovery of the unknown sparse vector, and to give optimal prediction of the response vector.
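For context, the optimal contraction rate referred to here is usually the familiar near-minimax rate for s*-sparse vectors in p dimensions (stated as a standard benchmark, not quoted from the paper):

\[
\varepsilon_n^2 \;\asymp\; \frac{s^{*}\,\log(p/s^{*})}{n},
\]

meaning the posterior puts vanishing mass outside balls of this squared radius around the true coefficient vector.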

Bayesian variable selection with shrinking and diffusing priors

We consider a Bayesian approach to variable selection in the presence of high dimensional covariates based on a hierarchical model that places prior distributions on the regression coefficients as

Scalable Bayesian Variable Selection Using Nonlocal Prior Densities in Ultrahigh-dimensional Settings.

It is found that Bayesian variable selection procedures based on nonlocal priors are competitive with all other procedures in a range of simulation scenarios, and this favorable performance is explained through a theoretical examination of their consistency properties.

Data-driven priors and their posterior concentration rates

This paper develops a general strategy for constructing a data-driven or empirical prior and sufficient conditions for the corresponding posterior distribution to achieve a certain concentration rate and presents results on both adaptive and non-adaptive rates based on empirical priors.

Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector

A novel empirical Bayes model is proposed that, under mild conditions, admits a posterior distribution with desirable properties: it concentrates on balls centered at the true mean vector with squared radius proportional to the minimax rate, and its posterior mean is an asymptotically minimax estimator.
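For reference, the classical minimax risk for an s-sparse normal mean vector in R^n under squared-error loss (quoted here as background, not from the paper) is

\[
\inf_{\hat\theta}\;\sup_{\|\theta\|_0 \le s}\;\mathbb{E}\,\|\hat\theta-\theta\|_2^2 \;=\; 2\sigma^2 s \log(n/s)\,(1+o(1)),
\]

so "squared radius proportional to the minimax rate" refers to balls whose squared radius grows like s log(n/s).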

Calibration and empirical Bayes variable selection

For the problem of variable selection for the normal linear model, selection criteria such as AIC, Cp, BIC and RIC have fixed dimensionality penalties. Such criteria are shown to correspond to