# An empirical $G$-Wishart prior for sparse high-dimensional Gaussian graphical models

    @article{Liu2019AnE,
      title   = {An empirical $G$-Wishart prior for sparse high-dimensional Gaussian graphical models},
      author  = {Chang Liu and Ryan Martin},
      journal = {arXiv: Statistics Theory},
      year    = {2019}
    }

In Gaussian graphical models, the zero entries in the precision matrix determine the dependence structure, so estimating the sparse precision matrix and, thereby, learning the underlying structure is an important and challenging problem. We propose an empirical version of the $G$-Wishart prior for sparse precision matrices, where the prior mode is informed by the data in a suitable way. Paired with a prior on the graph structure, a marginal posterior distribution for the graph is obtained…
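The link between zeros in the precision matrix and the graph can be made concrete with a small numerical check. The sketch below (illustrative only; the matrix is made up and this is standard Gaussian graphical model theory, not the paper's method) builds a 3×3 precision matrix with one zero off-diagonal entry and verifies that the corresponding partial correlation vanishes, while the marginal covariance does not:

```python
import numpy as np

# Hypothetical 3x3 precision matrix Omega with Omega[0,2] = 0:
# in a Gaussian graphical model this zero encodes the conditional
# independence X0 _||_ X2 | X1, i.e. no edge (0,2) in the graph.
Omega = np.array([
    [2.0, 0.5, 0.0],
    [0.5, 2.0, 0.5],
    [0.0, 0.5, 2.0],
])

Sigma = np.linalg.inv(Omega)  # implied covariance matrix

def partial_corr(P, i, j):
    # Partial correlation of X_i and X_j given all other variables:
    # rho_ij = -P[i,j] / sqrt(P[i,i] * P[j,j]), with P the precision matrix
    return -P[i, j] / np.sqrt(P[i, i] * P[j, j])

print(partial_corr(Omega, 0, 2))  # 0.0: no edge, conditionally independent
print(partial_corr(Omega, 0, 1))  # nonzero: edge present
# The marginal covariance Sigma[0,2] is NOT zero: X0 and X2 are
# marginally dependent even though they are conditionally independent.
print(abs(Sigma[0, 2]) > 1e-8)    # True
```

This distinction between marginal and conditional independence is exactly why the sparsity pattern of the precision matrix, rather than the covariance matrix, defines the graph.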


#### 5 Citations

Empirical Priors for Prediction in Sparse High-dimensional Linear Regression

- Mathematics, Computer Science
- J. Mach. Learn. Res.
- 2020

A Bernstein–von Mises theorem is established which ensures that the derived empirical Bayes prediction intervals achieve the targeted frequentist coverage probability, and the proposed method shows strong finite-sample performance in terms of prediction accuracy, uncertainty quantification, and computation time compared to existing Bayesian methods.

Precision Matrix Estimation under the Horseshoe-like Prior-Penalty Dual

- Mathematics
- 2021

The problem of precision matrix estimation in a multivariate Gaussian model is fundamental to network estimation. Although there exist both Bayesian and frequentist approaches to this, it is…

Variational approximations of empirical Bayes posteriors in high-dimensional linear models

- Mathematics
- 2020

In high dimensions, the prior tails can have a significant effect on both posterior computation and asymptotic concentration rates. To achieve optimal rates while keeping the posterior computations…

The Beta-Mixture Shrinkage Prior for Sparse Covariances with Posterior Minimax Rates

- Mathematics
- 2021

Statistical inference for sparse covariance matrices is crucial to reveal dependence structure of large multivariate data sets, but lacks scalable and theoretically supported Bayesian methods. In…

Bayesian inference for high-dimensional decomposable graphs

- Mathematics
- 2020

In this paper, we consider high-dimensional Gaussian graphical models where the true underlying graph is decomposable. A hierarchical $G$-Wishart prior is proposed to conduct Bayesian inference for…

#### References

Showing 1–10 of 45 references

Bayesian structure learning in graphical models

- Mathematics, Computer Science
- J. Multivar. Anal.
- 2015

This paper considers the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, where the dimension p may be large, and proposes a fast computational method for approximating the posterior probabilities of various graphs using the Laplace approximation approach.
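The Laplace approximation used there replaces an intractable marginal likelihood integral with a Gaussian integral around the posterior mode. A minimal sketch on a toy conjugate model (chosen because the exact marginal is available in closed form; this is not the graphical-model setting of the cited paper): y | θ ~ N(θ, 1) with prior θ ~ N(0, 1), so exactly y ~ N(0, 2).

```python
import math

# Laplace approximation to a marginal likelihood on a toy model:
#   y | theta ~ N(theta, 1),  theta ~ N(0, 1)  =>  y ~ N(0, 2) exactly.

def log_joint(theta, y):
    # log p(y | theta) + log p(theta), constants included
    return (-0.5 * (y - theta) ** 2 - 0.5 * math.log(2 * math.pi)
            - 0.5 * theta ** 2 - 0.5 * math.log(2 * math.pi))

def laplace_log_marginal(y):
    theta_hat = y / 2.0  # posterior mode, known in closed form here
    H = 2.0              # -d^2/dtheta^2 log joint, constant for this model
    # Laplace: log p(y) ~= log joint at the mode + (1/2) log(2*pi / H)
    return log_joint(theta_hat, y) + 0.5 * math.log(2 * math.pi / H)

def exact_log_marginal(y):
    # log density of N(0, 2) at y
    return -0.25 * y ** 2 - 0.5 * math.log(2 * math.pi * 2.0)

y = 1.3
print(laplace_log_marginal(y))  # agrees with the exact value below:
print(exact_log_marginal(y))    # Laplace is exact when the joint is Gaussian
```

For Gaussian joints the approximation is exact; in non-Gaussian settings (such as posterior graph probabilities) it carries an error that shrinks as the sample size grows.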

Posterior convergence rates for estimating large precision matrices using graphical models

- Mathematics
- 2014

We consider Bayesian estimation of a $p\times p$ precision matrix, when $p$ can be much larger than the available sample size $n$. It is well known that consistent estimation in such ultra-high…

Empirical Priors for Prediction in Sparse High-dimensional Linear Regression

- Mathematics, Computer Science
- J. Mach. Learn. Res.
- 2020

A Bernstein–von Mises theorem is established which ensures that the derived empirical Bayes prediction intervals achieve the targeted frequentist coverage probability, and the proposed method shows strong finite-sample performance in terms of prediction accuracy, uncertainty quantification, and computation time compared to existing Bayesian methods.

A Monte Carlo method for computing the marginal likelihood in nondecomposable Gaussian graphical models

- Mathematics
- 2005

A centred Gaussian model that is Markov with respect to an undirected graph G is characterised by the parameter set of its precision matrices which is the cone M-super-p(G) of positive definite…

Data-driven priors and their posterior concentration rates

- Mathematics
- 2019

In high-dimensional problems, choosing a prior distribution such that the corresponding posterior has desirable practical and theoretical properties can be challenging. This begs the question: can…

Computational Aspects Related to Inference in Gaussian Graphical Models With the G-Wishart Prior

- Mathematics
- 2011

We describe a comprehensive framework for performing Bayesian inference for Gaussian graphical models based on the G-Wishart prior with a special focus on efficiently including nondecomposable graphs…

Laplace Approximation in High-Dimensional Bayesian Regression

- Mathematics
- 2015

We consider Bayesian variable selection in sparse high-dimensional regression, where the number of covariates p may be large relative to the sample size n, but at most a moderate number q of…

The Bayesian Covariance Lasso.

- Mathematics, Medicine
- Statistics and its interface
- 2013

A new method, called the Bayesian Covariance Lasso (BCLASSO), is proposed for the shrinkage estimation of a precision (covariance) matrix; it is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data.

High-dimensional graphs and variable selection with the Lasso

- Mathematics
- 2006

The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at…

A Constrained ℓ1 Minimization Approach to Sparse Precision Matrix Estimation

- Mathematics
- 2011

This article proposes a constrained ℓ1 minimization method for estimating a sparse inverse covariance matrix based on a sample of n iid p-variate random variables. The resulting estimator is shown to…