Compressed Covariance Estimation with Automated Dimension Learning

@article{Sabnis2018CompressedCE,
  title={Compressed Covariance Estimation with Automated Dimension Learning},
  author={Gautam S. Sabnis and Debdeep Pati and Anirban Bhattacharya},
  journal={Sankhya A},
  year={2018}
}
We propose a method for estimating a covariance matrix that can be represented as a sum of a low-rank matrix and a diagonal matrix. The proposed method compresses high-dimensional data, computes the sample covariance in the compressed space, and lifts it back to the ambient space via a decompression operation. A salient feature of our approach relative to existing literature on combining sparsity and low-rank structures in covariance matrix estimation is that we do not require the low-rank… 
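As a rough illustration of the compress–estimate–decompress pipeline described in the abstract, the sketch below uses a Gaussian random compression matrix and a pseudoinverse-based lifting; both are assumptions made for exposition, and the paper's treatment of the diagonal component and its automated choice of the compressed dimension are not reproduced here.

import numpy as np

def compressed_covariance_sketch(X, k, rng=None):
    # X: (n, p) data matrix; k: compressed dimension with k << p.
    # Phi and the pseudoinverse lifting below are illustrative assumptions,
    # not the paper's exact compression/decompression operators.
    rng = np.random.default_rng(rng)
    n, p = X.shape
    Phi = rng.normal(size=(k, p)) / np.sqrt(k)    # random compression map R^p -> R^k
    Z = X @ Phi.T                                 # compressed observations, shape (n, k)
    S_k = np.cov(Z, rowvar=False)                 # sample covariance in the compressed space
    Phi_pinv = np.linalg.pinv(Phi)                # (p, k) pseudoinverse used for decompression
    L = Phi_pinv @ S_k @ Phi_pinv.T               # lifted low-rank component in the ambient space
    # Crude diagonal fill-in so the output matches the low-rank-plus-diagonal form (assumption):
    D = np.diag(np.maximum(np.var(X, axis=0) - np.diag(L), 0.0))
    return L + D

In practice one would return the compressed covariance S_k together with the lifting map rather than forming the p-by-p matrix explicitly, keeping storage at O(pk); the explicit reconstruction above is only for clarity.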

References

SHOWING 1-10 OF 72 REFERENCES
Sketching for simultaneously sparse and low-rank covariance matrices
  • S. Bahmani, J. Romberg
  • Computer Science, Mathematics
    2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
  • 2015
TLDR
This work shows that if the sketching vectors aℓ have a special structure, a straightforward two-stage algorithm can exploit that structure and take direct advantage of the low-rank structure of Σ by manipulating only matrices that are far smaller than the original covariance matrix.
Covalsa: Covariance estimation from compressive measurements using alternating minimization
TLDR
A class of convex formulations and respective solutions to the high-dimensional covariance matrix estimation problem under compressive measurements is proposed, imposing either Toeplitz, sparseness, null-pattern, low-rank, or low permuted-rank structure on the solution, in addition to positive semi-definiteness.
Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation
TLDR
Minimax rates of convergence for estimating several classes of structured covariance and precision matrices, including bandable, Toeplitz, and sparse covariance matrices as well as sparse precision matrices, are given under the spectral norm loss.
Optimal estimation and rank detection for sparse spiked covariance matrices
TLDR
The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those for estimating other structured covariance matrices such as bandable or sparse covariance matrices.
Bayesian Compressed Regression
TLDR
This work randomly compresses the predictors prior to analysis to dramatically reduce storage and computational bottlenecks, speeding up computation by many orders of magnitude while also bypassing robustness issues due to convergence and mixing problems with MCMC.
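The compression step summarized in this entry can be illustrated with a short sketch; the projection distribution and the ridge fit below are placeholders chosen for illustration, and the Bayesian averaging over many random projections used in the cited work is omitted.

import numpy as np

def compressed_regression_sketch(X, y, m, rng=None, ridge=1e-3):
    # Project the (n, p) predictor matrix down to m << p dimensions with a random
    # matrix, then fit a penalized regression in the compressed space.
    rng = np.random.default_rng(rng)
    n, p = X.shape
    Phi = rng.normal(size=(m, p)) / np.sqrt(m)       # random compression of the predictors (assumed form)
    Z = X @ Phi.T                                    # compressed design, shape (n, m)
    beta = np.linalg.solve(Z.T @ Z + ridge * np.eye(m), Z.T @ y)
    return lambda Xnew: (Xnew @ Phi.T) @ beta        # predict from newly compressed predictors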
Exact and Stable Covariance Estimation From Quadratic Sampling via Convex Programming
TLDR
This paper explores a quadratic (or rank-one) measurement model which imposes minimal memory requirements and low computational complexity during the sampling process, and is shown to be optimal in preserving various low-dimensional covariance structures.
Sparse Principal Component Analysis and Iterative Thresholding
TLDR
Under a spiked covariance model, a new iterative thresholding approach is proposed for estimating principal subspaces in the setting where the leading eigenvectors are sparse, and it is found that the new approach recovers the principal subspace and leading eigenvectors consistently, and even optimally, in a range of high-dimensional sparse settings.
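A generic version of the iterative-thresholding idea, alternating a power step with soft-thresholding of the iterate, is sketched below; the initialization, threshold schedule, and multi-vector version analyzed in the cited paper are not reproduced.

import numpy as np

def sparse_leading_eigvec_sketch(S, lam, n_iter=200):
    # S: (p, p) sample covariance; lam: soft-threshold level (assumed fixed here).
    p = S.shape[0]
    v = np.ones(p) / np.sqrt(p)                            # simple initialization (assumption)
    for _ in range(n_iter):
        w = S @ v                                          # power step
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # soft-threshold small coordinates
        nrm = np.linalg.norm(w)
        if nrm == 0.0:                                     # threshold removed everything; lam too large
            return v
        v = w / nrm                                        # renormalize
    return v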
Adaptive Thresholding for Sparse Covariance Matrix Estimation
TLDR
It is shown that the estimators adaptively achieve the optimal rate of convergence over a large class of sparse covariance matrices under the spectral norm, in contrast to the commonly used universal thresholding estimators, which are shown to be suboptimal over the same parameter spaces.
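The contrast between universal and adaptive (entry-dependent) thresholds can be made concrete with the sketch below, in which each off-diagonal entry of the sample covariance is thresholded at a level proportional to an estimate of its own standard error; the constant delta and the soft-thresholding rule are illustrative choices, not the exact estimator studied in the cited paper.

import numpy as np

def adaptive_threshold_cov_sketch(X, delta=2.0):
    # X: (n, p) data matrix. Each entry of the sample covariance S gets its own
    # threshold lam_ij proportional to sqrt(theta_ij * log(p) / n), where theta_ij
    # estimates the variance of that sample-covariance entry.
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    theta = ((Xc[:, :, None] * Xc[:, None, :] - S) ** 2).mean(axis=0)  # memory-heavy; fine for illustration
    lam = delta * np.sqrt(theta * np.log(p) / n)
    S_hat = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)  # entry-wise soft threshold
    np.fill_diagonal(S_hat, np.diag(S))                    # leave the diagonal untouched
    return S_hat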
First-Order Methods for Sparse Covariance Selection
TLDR
This work first formulates a convex relaxation of this combinatorial problem, then details two efficient first-order algorithms with low memory requirements to solve large-scale, dense problem instances.
A Random Matrix-Theoretic Approach to Handling Singular Covariance Estimates
TLDR
A radically new approach is presented to deal with the case where N < M, in which the sample covariance estimate is singular (noninvertible) and therefore fundamentally bad; it is based on the idea of dimensionality reduction through an ensemble of isotropically random unitary matrices.