Nonasymptotic support recovery for high‐dimensional sparse covariance matrices

@inproceedings{Kashlak2020NonasymptoticSR,
  title={Nonasymptotic support recovery for high‐dimensional sparse covariance matrices},
  author={Adam B. Kashlak and Linglong Kong},
  year={2020}
}
Correspondence: Adam B. Kashlak, Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta T6G 2G1, Canada. Email: kashlak@ualberta.ca

For high-dimensional data, the standard empirical estimator for the covariance matrix is very poor, and thus many methods have been proposed to more accurately estimate the covariance structure of high-dimensional data. In this article, we consider estimation under the assumption of sparsity but regularize with respect to the individual…
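A minimal numpy sketch, not from the paper, illustrating the opening claim: when the number of variables p exceeds the sample size n, the empirical covariance matrix is rank-deficient and hence singular. The dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                    # fewer observations than variables
X = rng.standard_normal((n, p))   # true covariance is the identity

S = np.cov(X, rowvar=False)       # p x p sample covariance
print(np.linalg.matrix_rank(S))   # at most n - 1 = 49, far below p = 200
```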
2 Citations


Non-asymptotic error controlled sparse high dimensional precision matrix estimation
This work proposes a novel methodology for estimating high-dimensional precision matrices while controlling the false positive rate, i.e., the percentage of matrix entries incorrectly chosen to be non-zero, with finite-sample guarantees.
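That cited work controls the false positive rate with finite-sample guarantees; as a familiar point of comparison only, the sketch below estimates a sparse precision matrix with the graphical lasso from scikit-learn. The regularization strength alpha is an arbitrary illustrative choice, and this is not the cited method.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))          # 200 samples, 10 variables

model = GraphicalLasso(alpha=0.1).fit(X)    # larger alpha -> sparser estimate
support = np.abs(model.precision_) > 1e-8   # recovered support of the precision matrix
print(support.sum())
```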
Loss Functions, Axioms, and Peer Review
This paper presents a framework inspired by empirical risk minimization (ERM) for learning a community's aggregate mapping in peer review and characterizes $p=q=1$ as the only choice of the loss-function hyperparameters that satisfies three natural axiomatic properties.

References

Showing 1–10 of 33 references
Covariance Estimation: The GLM and Regularization Perspectives
Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance…
Regularized estimation of large covariance matrices
This paper considers estimating a covariance matrix of p variables from n observations by either banding the sample covariance matrix or estimating a banded version of the inverse of the covariance.
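A minimal sketch, not from the reference, of the banding idea it describes: zero out every sample-covariance entry more than k positions off the diagonal. The bandwidth k is a tuning parameter; k = 2 is an arbitrary illustration.

```python
import numpy as np

def band_covariance(S, k):
    """Keep entries S[i, j] with |i - j| <= k; zero out the rest."""
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= k, S, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
S_banded = band_covariance(np.cov(X, rowvar=False), k=2)
```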
Sparse estimation of a covariance matrix.
The proposed penalized maximum likelihood problem is not convex, so a majorize-minimize approach is used in which convex approximations to the original nonconvex problem are iteratively solved; the method can also solve a previously studied special case in which a desired sparsity pattern is prespecified.
A well-conditioned estimator for large-dimensional covariance matrices
Many applied problems require a covariance matrix estimator that is not only invertible, but also well-conditioned (that is, inverting it does not amplify estimation error). For large-dimensional…
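scikit-learn ships a Ledoit-Wolf shrinkage estimator in the spirit of this reference: a convex combination of the sample covariance and a scaled identity, with the shrinkage intensity chosen analytically. A minimal usage sketch:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))     # p > n: the sample covariance is singular

lw = LedoitWolf().fit(X)
print(lw.shrinkage_)                   # estimated shrinkage intensity in [0, 1]
print(np.linalg.cond(lw.covariance_))  # finite condition number despite p > n
```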
Adaptive Thresholding for Sparse Covariance Matrix Estimation
It is shown that the estimators adaptively achieve the optimal rate of convergence over a large class of sparse covariance matrices under the spectral norm, in contrast to the commonly used universal thresholding estimators, which are shown to be suboptimal over the same parameter spaces.
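A hedged sketch of entry-adaptive thresholding in the spirit of this reference: each entry of the sample covariance receives its own threshold, scaled by an estimate of that entry's sampling variability. The constant delta is a tuning parameter; delta = 2 is an arbitrary illustrative choice.

```python
import numpy as np

def adaptive_threshold(X, delta=2.0):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                             # sample covariance
    # theta[i, j] estimates the variance of the (i, j) sample-covariance entry
    prods = Xc[:, :, None] * Xc[:, None, :]       # n x p x p entrywise products
    theta = ((prods - S) ** 2).mean(axis=0)
    lam = delta * np.sqrt(theta * np.log(p) / n)  # entrywise thresholds
    return np.where(np.abs(S) >= lam, S, 0.0)

rng = np.random.default_rng(0)
Sigma_hat = adaptive_threshold(rng.standard_normal((100, 30)))
```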
Nonconjugate Bayesian Estimation of Covariance Matrices and its Use in Hierarchical Models
The problem of estimating a covariance matrix in small samples has been considered by several authors following early work by Stein. This problem can be especially important in hierarchical…
Covariance regularization by thresholding
This paper considers regularizing a covariance matrix of $p$ variables estimated from $n$ observations, by hard thresholding. We show that the thresholded estimate is consistent in the operator norm…
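A minimal sketch, not from the reference, of the universal hard thresholding it studies: every off-diagonal entry of the sample covariance whose magnitude falls below a single threshold t is set to zero. A threshold of order sqrt(log p / n) is the classical choice; the leading constant of 1 here is arbitrary.

```python
import numpy as np

def hard_threshold(S, t):
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))  # leave the diagonal intact
    return T

rng = np.random.default_rng(0)
n, p = 100, 50
S = np.cov(rng.standard_normal((n, p)), rowvar=False)
S_thr = hard_threshold(S, t=np.sqrt(np.log(p) / n))
```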
Positive-Definite ℓ1-Penalized Estimation of Large Covariance Matrices
The thresholding covariance estimator has nice asymptotic properties for estimating sparse large covariance matrices, but it often has negative eigenvalues when used in real data analysis. To fix…
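This reference fixes the indefiniteness through an ℓ1-penalized formulation; a far simpler repair, sketched below only for contrast, clips negative eigenvalues at a small floor. This restores positive definiteness but is not the cited method.

```python
import numpy as np

def clip_to_pd(S, floor=1e-6):
    """Symmetrize S, then raise any eigenvalue below `floor` up to it."""
    vals, vecs = np.linalg.eigh((S + S.T) / 2)
    return vecs @ np.diag(np.maximum(vals, floor)) @ vecs.T
```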
Operator norm consistent estimation of large-dimensional sparse covariance matrices
Estimating covariance matrices is a problem of fundamental importance in multivariate statistics. In practice it is increasingly frequent to work with data matrices X of dimension n × p, where p and…
Shrinkage estimators for covariance matrices.
Two general shrinkage approaches to estimating the covariance matrix and regression coefficients are considered: the first involves shrinking the eigenvalues of the unstructured ML or REML estimator, and the second involves shrinking an unstructured estimator toward a structured estimator.
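A hedged sketch of the second approach described above: pull an unstructured estimator toward a structured target (here, its own diagonal) by a convex combination. The weight w would normally be chosen by cross-validation or an analytic rule; 0.3 is arbitrary.

```python
import numpy as np

def shrink_toward_target(S, w=0.3):
    target = np.diag(np.diag(S))   # structured target: the diagonal of S
    return (1 - w) * S + w * target

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 60))
S_shrunk = shrink_toward_target(np.cov(X, rowvar=False))
```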