Estimation When Both Covariance and Precision Matrices are Sparse
@article{MacNamara2021EstimationWB,
  title={Estimation When Both Covariance and Precision Matrices are Sparse},
  author={Shev MacNamara and Erik Schl{\"o}gl and Zdravko I. Botev},
  journal={2021 Winter Simulation Conference (WSC)},
  year={2021},
  pages={1-11}
}
We offer a method to estimate a covariance matrix in the special case that both the covariance matrix and the precision matrix are sparse, a constraint we call double sparsity. The estimation method is maximum likelihood, subject to the double sparsity constraint. In our method, only a particular class of sparsity patterns is allowed: both the matrix and its inverse must be subordinate to the same chordal graph. Compared to a naive enforcement of double sparsity, our chordal graph approach…
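As a minimal, hypothetical illustration of the double sparsity constraint (not the paper's estimator), a block-diagonal covariance matrix has a block-diagonal precision matrix, so both are subordinate to the same chordal graph of disjoint cliques. A NumPy sketch:

```python
import numpy as np

# Illustration only: a block-diagonal covariance and its inverse share the
# same sparsity pattern, and the underlying graph (disjoint cliques) is chordal.
block = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
sigma = np.kron(np.eye(3), block)    # 6x6 block-diagonal covariance
precision = np.linalg.inv(sigma)     # its inverse is block-diagonal too

pattern = lambda m, tol=1e-12: np.abs(m) > tol
assert np.array_equal(pattern(sigma), pattern(precision))  # double sparsity holds
```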
One Citation
Robust sparse precision matrix estimation for high-dimensional compositional data
- Computer Science, Statistics & Probability Letters
- 2022
References
Showing 1-10 of 25 references
A Local Inverse Formula and a Factorization
- Mathematics
- 2018
When a matrix has a banded inverse there is a remarkable formula that quickly computes that inverse, using only local information in the original matrix. This local inverse formula holds more…
Regularized estimation of large covariance matrices
- Mathematics, Computer Science
- 2008
If the population covariance is embeddable in that model and well-conditioned, then the banded approximations produce consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix.
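The banding idea referenced above is straightforward to sketch, assuming the variables come with a natural ordering; the bandwidth and data below are arbitrary illustration choices, not the cited tuning procedure:

```python
import numpy as np

def band_covariance(sample_cov, k):
    """Keep entries within bandwidth k of the diagonal; zero out the rest."""
    p = sample_cov.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, sample_cov, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # 200 observations of 10 ordered variables
S = np.cov(X, rowvar=False)               # sample covariance
S_banded = band_covariance(S, k=2)        # banded approximation
```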
A new approach to Cholesky-based covariance regularization in high dimensions
- Mathematics, Computer Science
- 2009
A new regression interpretation of the Cholesky factor of the covariance matrix is proposed, as opposed to the well-known regression interpretation of the Cholesky factor of the inverse covariance, leading to a new class of regularized covariance estimators suitable for high-dimensional problems.
High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence
- Computer Science, Mathematics
- 2008
The first result establishes consistency of the estimator in the elementwise maximum norm, which allows convergence rates in Frobenius and spectral norms to be derived, and shows good correspondence between the theoretical predictions and behavior in simulations.
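This ℓ1-penalized log-determinant program is commonly called the graphical lasso, and scikit-learn ships an implementation; a short sketch (the penalty alpha and the random data are arbitrary illustration choices):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))

# Fit by maximizing the penalized Gaussian log-likelihood
#   log det(Theta) - tr(S Theta) - alpha * ||Theta||_1 (off-diagonal penalty),
# which yields a sparse precision matrix estimate.
model = GraphicalLasso(alpha=0.1).fit(X)
sparse_precision = model.precision_
estimated_cov = model.covariance_
```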
Chordal Graphs and Semidefinite Optimization
- Computer Science, Found. Trends Optim.
- 2015
This survey covers the theory and applications of chordal graphs, with an emphasis on algorithms developed in the literature on sparse Cholesky factorization, and points out the connections with related topics outside semidefinite optimization, such as probabilistic networks, matrix completion problems, and partial separability in nonlinear optimization.
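Chordality (every cycle of length four or more has a chord) can be checked directly with networkx; a small sketch for concreteness:

```python
import networkx as nx

# A 4-cycle has no chord, so it is not chordal; adding one chord fixes that.
cycle = nx.cycle_graph(4)
print(nx.is_chordal(cycle))                      # False

chordal = cycle.copy()
chordal.add_edge(0, 2)                           # add a chord across the cycle
print(nx.is_chordal(chordal))                    # True
print(list(nx.chordal_graph_cliques(chordal)))   # maximal cliques of the chordal graph
```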
The Interplay of Ranks of Submatrices
- Mathematics, SIAM Rev.
- 2004
A banded invertible matrix T has a remarkable inverse. All "upper" and "lower" submatrices of T⁻¹ have low rank (depending on the bandwidth of T). The exact rank condition is known, and it allows…
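A quick numerical check of the rank statement, using the standard second-difference matrix as an arbitrary tridiagonal example (not taken from the reference):

```python
import numpy as np

# Tridiagonal (bandwidth-1) matrix: its inverse is dense, yet every submatrix
# of the inverse taken strictly from above (or below) the diagonal has rank <= 1.
n = 8
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # second-difference matrix
T_inv = np.linalg.inv(T)

upper_block = T_inv[:4, 4:]                  # an "upper" submatrix of the inverse
print(np.linalg.matrix_rank(upper_block))    # prints 1
```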
Honey, I Shrunk the Sample Covariance Matrix
- Computer Science
- 2003
The central message of this article is that no one should use the sample covariance matrix for portfolio optimization. It is subject to estimation error of the kind most likely to perturb a…
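A closely related shrinkage estimator is available off the shelf in scikit-learn; the sketch below uses the Ledoit-Wolf estimator with a scaled-identity target (the article advocates a constant-correlation target, so this illustrates the shrinkage idea rather than reproducing the article's estimator):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 30))     # few observations relative to the dimension

# Shrink the noisy sample covariance toward a structured target;
# the shrinkage intensity is estimated from the data.
lw = LedoitWolf().fit(X)
shrunk_cov = lw.covariance_
print(lw.shrinkage_)                  # estimated shrinkage intensity in [0, 1]
```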