Estimation When Both Covariance and Precision Matrices are Sparse

@inproceedings{MacNamara2021EstimationWB,
  title={Estimation When Both Covariance and Precision Matrices are Sparse},
  author={Shev MacNamara and Erik Schl{\"o}gl and Zdravko I. Botev},
  booktitle={2021 Winter Simulation Conference (WSC)},
  year={2021},
  pages={1--11}
}
We offer a method to estimate a covariance matrix in the special case that both the covariance matrix and the precision matrix are sparse, a constraint we call double sparsity. The estimation method is maximum likelihood, subject to the double sparsity constraint. In our method, only a particular class of sparsity patterns is allowed: both the matrix and its inverse must be subordinate to the same chordal graph. Compared to a naive enforcement of double sparsity, our chordal graph approach…
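To make the double-sparsity constraint concrete, here is a minimal numpy sketch. It is purely illustrative and is not the paper's estimation method: the helper is_subordinate, the chain-graph (tridiagonal) pattern, and the tolerance are assumptions chosen for this example. It checks whether a matrix and its inverse are both subordinate to the same banded pattern, the chain graph being the simplest chordal graph.

```python
import numpy as np

def is_subordinate(M, pattern, tol=1e-10):
    # True if every entry of M outside the allowed pattern is numerically zero,
    # i.e. the sparsity pattern of M is contained in `pattern`.
    return bool(np.all(np.abs(M[~pattern]) <= tol))

n = 5
# Banded (tridiagonal) pattern: the chain graph, the simplest chordal graph.
pattern = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1

# A tridiagonal precision matrix subordinate to the chain graph.
K = (np.diag(2.0 * np.ones(n))
     + np.diag(-0.9 * np.ones(n - 1), k=1)
     + np.diag(-0.9 * np.ones(n - 1), k=-1))
Sigma = np.linalg.inv(K)

print(is_subordinate(K, pattern))      # True: the precision matrix is banded
print(is_subordinate(Sigma, pattern))  # False: its inverse (the covariance) is dense
```

The covariance fails the check here, which is the typical situation: a matrix subordinate to a chordal graph generally has a dense inverse, so sparsity patterns that can satisfy both constraints at once are special.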