FLEXIBLE COVARIANCE ESTIMATION IN GRAPHICAL GAUSSIAN MODELS

@article{Rajaratnam2008FLEXIBLECE,
  title={FLEXIBLE COVARIANCE ESTIMATION IN GRAPHICAL GAUSSIAN MODELS},
  author={Bala Rajaratnam and H{\'e}l{\`e}ne Massam and Carlos M. Carvalho},
  journal={Annals of Statistics},
  year={2008},
  volume={36},
  pages={2818--2849}
}
In this paper, we propose a class of Bayes estimators for the covariance matrix of graphical Gaussian models Markov with respect to a decomposable graph G. Working with the W_{P_G} family defined by Letac and Massam [Ann. Statist. 35 (2007) 1278-1323], we derive closed-form expressions for Bayes estimators under the entropy and squared-error losses. The W_{P_G} family includes the classical inverse of the hyper-inverse Wishart but has many more shape parameters, thus allowing for flexibility in…
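When G is complete, the hyper-inverse Wishart reduces to the ordinary inverse Wishart, and closed-form Bayes estimators of the kind described above specialize to standard conjugate results. A minimal sketch of that complete-graph special case (plain inverse-Wishart conjugate updating, not the paper's W_{P_G} family; the prior parameters `nu` and `Psi` are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n observations from a p-variate Gaussian with known mean zero.
p, n = 3, 200
true_cov = np.array([[1.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
S = X.T @ X  # scatter matrix (mean known, so no centering)

# Inverse-Wishart prior IW(nu, Psi); the posterior is IW(nu + n, Psi + S).
nu, Psi = p + 2, np.eye(p)

# Bayes estimator under squared-error loss = posterior mean of Sigma.
post_mean = (Psi + S) / (nu + n - p - 1)

# Bayes estimator under Stein's entropy loss
#   L(Sigma_hat, Sigma) = tr(Sigma_hat Sigma^{-1}) - log det(Sigma_hat Sigma^{-1}) - p
# is the inverse of the posterior mean of the precision matrix Sigma^{-1}.
# Since Sigma^{-1} | X ~ Wishart(nu + n, (Psi + S)^{-1}), that inverse is:
entropy_est = (Psi + S) / (nu + n)

print(np.round(post_mean, 2))
print(np.round(entropy_est, 2))
```

Note the two losses give the same scale matrix Psi + S with different divisors: the entropy-loss estimator divides by nu + n and so shrinks slightly more than the posterior mean.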

Citations

Covariance Estimation in Decomposable Gaussian Graphical Models
TLDR
This work presents the derivation and analysis of the minimum-variance unbiased estimator (MVUE) in decomposable graphical models and proposes SURE (Stein's unbiased risk estimate) as a constructive mechanism for deriving new covariance estimators.
High dimensional Bayesian inference for Gaussian directed acyclic graph models
TLDR
This paper extends a family of conjugate priors for the Cholesky parametrization of the covariance matrix of Gaussian models Markov with respect to a decomposable graph to arbitrary DAGs, and derives prior distributions for the covariance and precision parameters of Gaussian DAG Markov models.
Bayesian structure learning in graphical models
Bayesian structural learning and estimation in Gaussian graphical models
TLDR
The mode oriented stochastic search algorithm for Gaussian graphical models is proposed, and a new Laplace approximation method to the normalizing constant of a G-Wishart distribution is developed.
Computational Aspects Related to Inference in Gaussian Graphical Models With the G-Wishart Prior
TLDR
A new method, the mode oriented stochastic search (MOSS), is proposed that extends these techniques and proves superior at quickly finding graphical models with high posterior probability and concludes with a real-world example from the recent covariance estimation literature.
Efficient Gaussian graphical model determination under G-Wishart prior distributions
TLDR
This paper develops the theory and computational details of a novel Markov chain Monte Carlo sampling scheme for Gaussian graphical model determination under G-Wishart prior distributions, generalizes the maximum clique block Gibbs sampler to a class of flexible block Gibbs samplers, and proves its convergence.
Bayesian estimation of a sparse precision matrix
TLDR
This paper proposes a fast computational method for approximating the posterior probabilities of various graphs using the Laplace approximation, expanding the posterior density around the posterior mode, which coincides with the graphical lasso under the authors' choice of prior distribution.
A Multivariate Graphical Stochastic Volatility Model
TLDR
A parsimonious multivariate stochastic volatility model that embeds GGM uncertainty in a larger hierarchical framework is developed, capable of adapting to the extreme swings in market volatility experienced in 2008 after the collapse of Lehman Brothers.
An accurate test for the equality of covariance matrices from decomposable graphical Gaussian models
This paper derives a saddlepoint-based approximation for the cumulative distribution function of the Bartlett-Box M-statistic that tests the equality of covariance matrices for several samples from decomposable graphical Gaussian models.
...

References

SHOWING 1-10 OF 45 REFERENCES
Regularized estimation of large covariance matrices
TLDR
If the population covariance is embeddable in that model and well-conditioned, then the banded approximations produce consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix.
Nonconjugate Bayesian Estimation of Covariance Matrices and its Use in Hierarchical Models
TLDR
This work proposes a set of hierarchical priors for the covariance matrix that produce posterior shrinkage toward a specified structure, and addresses the computational difficulties raised by incorporating these priors, and nonconjugate priors in general, into hierarchical models.
Simulation of hyper-inverse Wishart distributions in graphical models
We introduce and exemplify an efficient method for direct sampling from hyper-inverse Wishart distributions. The method relies very naturally on the use of standard junction-tree representation of
Covariance matrix selection and estimation via penalised normal likelihood
TLDR
A nonparametric method for identifying parsimony and for producing a statistically efficient estimator of a large covariance matrix and an algorithm is developed for computing the estimator and selecting the tuning parameter.
Archival Version including Appendices: Experiments in Stochastic Computation for High-Dimensional Graphical Models
We discuss the implementation, development and performance of methods of stochastic computation in Gaussian graphical models. We view these methods from the perspective of high-dimensional model
Wishart distributions for decomposable graphs
When considering a graphical Gaussian model N G Markov with respect to a decomposable graph G, the parameter space of interest for the precision parameter is the cone P G of positive definite
Shrinkage estimators for covariance matrices.
TLDR
Two general shrinkage approaches to estimating the covariance matrix and regression coefficients are considered: the first involves shrinking the eigenvalues of the unstructured ML or REML estimator, and the second involves shrinking an unstructured estimator toward a structured estimator.
Estimation of a Covariance Matrix Using the Reference Prior
Estimation of a covariance matrix Σ is a notoriously difficult problem; the standard unbiased estimator can be substantially suboptimal. We approach the problem from a noninformative prior Bayesian
A well-conditioned estimator for large-dimensional covariance matrices
High-dimensional graphs and variable selection with the Lasso
TLDR
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs and is hence equivalent to variable selection for Gaussian linear models.
...