High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence

@article{Ravikumar2008HighdimensionalCE,
  title={High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence},
  author={Pradeep Ravikumar and Martin J. Wainwright and Garvesh Raskutti and Bin Yu},
  journal={Electronic Journal of Statistics},
  year={2011},
  volume={5},
  pages={935--980}
}
Given i.i.d. observations of a random vector X ∈ ℝ^p, we study the problem of estimating both its covariance matrix Σ∗ and its inverse covariance or concentration matrix Θ∗ = (Σ∗)⁻¹. We estimate Θ∗ by minimizing an ℓ1-penalized log-determinant Bregman divergence; in the multivariate Gaussian case, this approach corresponds to ℓ1-penalized maximum likelihood, and the structure of Θ∗ is specified by the graph of an associated Gaussian Markov random field. We analyze the performance of…
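
Written out explicitly (a reconstruction from the abstract's description, following the standard ℓ1-penalized log-determinant formulation), the estimator is the convex program

    Θ̂ = argmin over Θ ≻ 0 of  trace(Σ̂ Θ) − log det Θ + λ_n ‖Θ‖_{1,off},

where Σ̂ is the sample covariance, λ_n is the regularization weight, and ‖Θ‖_{1,off} is the ℓ1 norm of the off-diagonal entries of Θ; in the multivariate Gaussian case this is the ℓ1-penalized maximum-likelihood (graphical lasso) problem. As a minimal illustrative sketch (not code from the paper), an estimator of this form can be fit with scikit-learn's GraphicalLasso; the synthetic data and the choice alpha=0.1 for λ_n below are placeholders:

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))        # n = 200 synthetic i.i.d. samples of X in R^10

    model = GraphicalLasso(alpha=0.1).fit(X)  # alpha plays the role of the penalty weight lambda_n
    Theta_hat = model.precision_              # estimated concentration (inverse covariance) matrix
    Sigma_hat = model.covariance_             # corresponding covariance estimate

The sparsity pattern of Theta_hat gives the estimated edge set of the associated Gaussian Markov random field.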
