Corpus ID: 208909872

An empirical $G$-Wishart prior for sparse high-dimensional Gaussian graphical models

@article{Liu2019AnE,
  title={An empirical {$G$}-Wishart prior for sparse high-dimensional Gaussian graphical models},
  author={Chang Liu and Ryan Martin},
  journal={arXiv: Statistics Theory},
  year={2019}
}
In Gaussian graphical models, the zero entries in the precision matrix determine the dependence structure, so estimating that sparse precision matrix and, thereby, learning this underlying structure, is an important and challenging problem. We propose an empirical version of the $G$-Wishart prior for sparse precision matrices, where the prior mode is informed by the data in a suitable way. Paired with a prior on the graph structure, a marginal posterior distribution for the same is obtained …
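As a concrete illustration of the setting in the abstract's first sentence (zeros in the precision matrix encode conditional independencies, and the goal is to recover that sparsity pattern), here is a minimal Python sketch. It is not the paper's empirical $G$-Wishart procedure; it simply simulates from a known sparse precision matrix and recovers the zero pattern with scikit-learn's graphical lasso. The dimension, sample size, regularization strength, and thresholds are arbitrary choices for the example.

```python
# Minimal sketch: zeros in the precision matrix of a Gaussian graphical model
# encode conditional independencies; a sparse estimator can recover that pattern.
# This illustrates the problem setting only, not the paper's empirical
# G-Wishart prior. p, n, alpha, and the thresholds are arbitrary assumptions.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# True sparse precision matrix: tridiagonal, i.e. a chain graph.
p = 10
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Omega)          # implied covariance matrix

# Sample n observations from N(0, Sigma).
n = 500
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Sparse (frequentist) estimate of the precision matrix via graphical lasso.
fit = GraphicalLasso(alpha=0.05).fit(X)
Omega_hat = fit.precision_

# Compare estimated and true edge sets (off-diagonal nonzeros).
true_edges = np.abs(np.triu(Omega, k=1)) > 1e-8
est_edges = np.abs(np.triu(Omega_hat, k=1)) > 1e-3
print("true edges:", int(true_edges.sum()))
print("estimated edges:", int(est_edges.sum()))
print("correctly recovered:", int((true_edges & est_edges).sum()))
```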
Empirical Priors for Prediction in Sparse High-dimensional Linear Regression
A Bernstein--von Mises theorem is established, ensuring that the derived empirical Bayes prediction intervals achieve the targeted frequentist coverage probability, and the proposed method shows strong finite-sample performance in terms of prediction accuracy, uncertainty quantification, and computation time compared to existing Bayesian methods.
Precision Matrix Estimation under the Horseshoe-like Prior-Penalty Dual
The problem of precision matrix estimation in a multivariate Gaussian model is fundamental to network estimation. Although there exist both Bayesian and frequentist approaches to this, it is …
Variational approximations of empirical Bayes posteriors in high-dimensional linear models
In high dimensions, the prior tails can have a significant effect on both posterior computation and asymptotic concentration rates. To achieve optimal rates while keeping the posterior computations …
The Beta-Mixture Shrinkage Prior for Sparse Covariances with Posterior Minimax Rates
Statistical inference for sparse covariance matrices is crucial to reveal the dependence structure of large multivariate data sets, but lacks scalable and theoretically supported Bayesian methods. In …
Bayesian inference for high-dimensional decomposable graphs
In this paper, we consider high-dimensional Gaussian graphical models where the true underlying graph is decomposable. A hierarchical $G$-Wishart prior is proposed to conduct a Bayesian inference for …

References

Showing 1–10 of 45 references
Bayesian structure learning in graphical models
This paper considers the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, where the dimension p may be large, and proposes a fast computational method for approximating the posterior probabilities of various graphs using the Laplace approximation.
Posterior convergence rates for estimating large precision matrices using graphical models
We consider Bayesian estimation of a $p\times p$ precision matrix, when $p$ can be much larger than the available sample size $n$. It is well known that consistent estimation in such ultra-high …
Empirical Priors for Prediction in Sparse High-dimensional Linear Regression
A Bernstein--von Mises theorem is established, ensuring that the derived empirical Bayes prediction intervals achieve the targeted frequentist coverage probability, and the proposed method shows strong finite-sample performance in terms of prediction accuracy, uncertainty quantification, and computation time compared to existing Bayesian methods.
A Monte Carlo method for computing the marginal likelihood in nondecomposable Gaussian graphical models
A centred Gaussian model that is Markov with respect to an undirected graph G is characterised by the parameter set of its precision matrices, which is the cone $M^p(G)$ of positive definite …
Data-driven priors and their posterior concentration rates
In high-dimensional problems, choosing a prior distribution such that the corresponding posterior has desirable practical and theoretical properties can be challenging. This begs the question: can …
Computational Aspects Related to Inference in Gaussian Graphical Models With the G-Wishart Prior
We describe a comprehensive framework for performing Bayesian inference for Gaussian graphical models based on the G-Wishart prior, with a special focus on efficiently including nondecomposable graphs …
Laplace Approximation in High-Dimensional Bayesian Regression
We consider Bayesian variable selection in sparse high-dimensional regression, where the number of covariates p may be large relative to the sample size n, but at most a moderate number q of …
The Bayesian Covariance Lasso.
A new method, the Bayesian Covariance Lasso (BCLASSO), is proposed for shrinkage estimation of a precision (covariance) matrix; it is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data.
High-dimensional graphs and variable selection with the Lasso
The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at …
A Constrained ℓ1 Minimization Approach to Sparse Precision Matrix Estimation
This article proposes a constrained ℓ1 minimization method for estimating a sparse inverse covariance matrix based on a sample of n iid p-variate random variables. The resulting estimator is shown to …
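To make the constrained ℓ1 minimization idea in the last reference concrete, the sketch below re-implements a CLIME-style estimator column by column as a linear program using scipy's linprog. It is an illustrative reconstruction under assumed choices of p, n, and the tuning parameter lam, not the article's own code or recommended settings.

```python
# Hedged sketch of the constrained l1-minimization idea described above:
# estimate each column of the precision matrix by solving
#   min ||beta||_1  subject to  ||S beta - e_j||_inf <= lam,
# cast as a linear program. Illustrative only; p, n, and lam are arbitrary.
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """Solve min ||beta||_1 s.t. ||S beta - e_j||_inf <= lam via an LP."""
    p = S.shape[0]
    e_j = np.zeros(p)
    e_j[j] = 1.0
    # Write beta = u - v with u, v >= 0 and minimize 1'u + 1'v.
    c = np.ones(2 * p)
    A_ub = np.block([[S, -S], [-S, S]])
    b_ub = np.concatenate([lam + e_j, lam - e_j])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

rng = np.random.default_rng(1)
p, n, lam = 8, 400, 0.2
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # sparse truth
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)
S = np.cov(X, rowvar=False)

B = np.column_stack([clime_column(S, j, lam) for j in range(p)])
# Symmetrize by keeping, for each (i, j), the entry of smaller magnitude.
Omega_hat = np.where(np.abs(B) <= np.abs(B.T), B, B.T)
print(np.round(Omega_hat, 2))
```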