Testing homogeneity of high-dimensional covariance matrices

@article{Zheng2019TestingHO,
  title={Testing homogeneity of high-dimensional covariance matrices},
  author={Shu-rong Zheng and Ruitao Lin and Jianhua Guo and Guosheng Yin},
  journal={Statistica Sinica},
  year={2019}
}
Testing the homogeneity of multiple high-dimensional covariance matrices is becoming increasingly critical in multivariate statistical analysis owing to the emergence of big data. Many existing homogeneity tests for high-dimensional covariance matrices focus on two populations and often target specific situations, for example either sparse alternatives or dense alternatives, so the available methods are not suitable for general cases with multiple groups. To accommodate various…
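
The truncated abstract contrasts tests aimed at sparse alternatives with tests aimed at dense alternatives across multiple groups. For orientation only, below is a minimal sketch (not the authors' method) of how a naive multi-group homogeneity check could be organized: a Frobenius-norm distance of each group's sample covariance to the pooled one, calibrated by permuting group labels. Both of those choices, and the function names, are assumptions of this sketch; `samples` is a hypothetical list of n_g × p data matrices.

```python
import numpy as np

def frobenius_homogeneity_stat(samples):
    """Sum over groups of the squared Frobenius distance between each
    group's sample covariance and the pooled sample covariance."""
    covs = [np.cov(X, rowvar=False) for X in samples]
    weights = [X.shape[0] - 1 for X in samples]
    pooled = sum(w * S for w, S in zip(weights, covs)) / sum(weights)
    return sum(np.sum((S - pooled) ** 2) for S in covs)

def permutation_pvalue(samples, n_perm=500, seed=0):
    """Calibrate the statistic by permuting group labels, which assumes
    full exchangeability of the pooled observations under the null."""
    rng = np.random.default_rng(seed)
    sizes = [X.shape[0] for X in samples]
    pooled_rows = np.vstack(samples)
    observed = frobenius_homogeneity_stat(samples)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled_rows)               # shuffle rows
        groups = np.split(perm, np.cumsum(sizes)[:-1])    # reassign labels
        exceed += frobenius_homogeneity_stat(groups) >= observed
    return observed, (exceed + 1) / (n_perm + 1)
```

With two groups this reduces to an ordinary two-sample comparison; the permutation step is what makes the sketch distribution-free, at the cost of the exchangeability assumption noted in the comment.
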

Citations of this paper

Use of Random Integration to Test Equality of High Dimensional Covariance Matrices
This work makes novel use of random integration to test the equality of high-dimensional covariance matrices without assuming parametric distributions for the two underlying populations, even when the dimension is much larger than the sample size.
Two-sample tests for high-dimensional covariance matrices using both difference and ratio
By borrowing strength from both the difference and the ratio between two sample covariance matrices, this work proposes three tests for the equality of two high-dimensional population covariance matrices.
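
As an illustration of the difference/ratio distinction (not the three tests proposed in that paper), the sketch below computes the raw quantities each type of test builds on; the ridge parameter is a hypothetical device to keep the second sample covariance invertible when p is comparable to or larger than the sample size.

```python
import numpy as np

def difference_and_ratio_summaries(X1, X2, ridge=1e-3):
    """Raw ingredients of difference- and ratio-based comparisons of two
    sample covariance matrices."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    p = S1.shape[0]
    ratio = np.linalg.solve(S2 + ridge * np.eye(p), S1)   # regularized S2^{-1} S1
    ratio_eigs = np.clip(np.linalg.eigvals(ratio).real, 1e-12, None)
    return {
        "diff_frobenius": np.linalg.norm(S1 - S2, "fro") ** 2,  # dense-type signal
        "diff_max": np.abs(S1 - S2).max(),                      # sparse-type signal
        "ratio_trace": ratio_eigs.sum(),
        "ratio_logdet": np.log(ratio_eigs).sum(),
    }
```
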
Testing the equality of multiple high-dimensional covariance matrices
Bayesian Optimal Two-sample Tests in High-dimension
This work develops Bayesian two-sample tests employing a divide-and-conquer idea; the tests are especially powerful when the difference between the two populations is sparse but large, and they allow scalable computation even in high dimensions.

References

Showing 1–10 of 25 references
Two-Sample Covariance Matrix Testing and Support Recovery in High-Dimensional and Sparse Settings
A new test of the null hypothesis that two covariance matrices are equal is proposed and investigated; it is shown to enjoy certain optimality and to be especially powerful against sparse alternatives, and applications to gene selection are discussed.
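
A hedged sketch of the kind of max-type statistic that is powerful against sparse alternatives: each entry of the difference between the two sample covariances is standardized by an estimate of its variance, and the maximum is taken. Calibration (against an extreme-value limit or by resampling) is omitted, and the function names are mine.

```python
import numpy as np

def max_type_statistic(X1, X2):
    """Largest squared, entrywise-standardized difference between two sample
    covariance matrices; large values point to a sparse set of differing
    entries. Builds an n x p x p array, so it is only meant for modest p."""
    def cov_and_entry_variances(X):
        n = X.shape[0]
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / n
        prods = np.einsum("ki,kj->kij", Xc, Xc)   # centered products per sample
        return S, prods.var(axis=0), n
    S1, theta1, n1 = cov_and_entry_variances(X1)
    S2, theta2, n2 = cov_and_entry_variances(X2)
    standardized = (S1 - S2) ** 2 / (theta1 / n1 + theta2 / n2)
    return standardized.max()
```
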
Global Testing and Large-Scale Multiple Testing for High-Dimensional Covariance Structures
This review provides a selective survey of recent developments in hypothesis testing for high-dimensional covariance structures, including global testing of the overall pattern of the covariance structure and simultaneous testing of a large collection of hypotheses on local covariance structures with false discovery proportion and false discovery rate control.
Testing High-Dimensional Covariance Matrices Under the Elliptical Distribution and Beyond
We develop tests for high-dimensional covariance matrices under a generalized elliptical model. Our tests are based on a central limit theorem (CLT) for linear spectral statistics of the sample covariance matrix.
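
For concreteness, a linear spectral statistic is simply a sum of a fixed function over the sample covariance eigenvalues; a minimal sketch, with f = log as an arbitrary example of the test function:

```python
import numpy as np

def linear_spectral_statistic(X, f=np.log):
    """Linear spectral statistic sum_i f(lambda_i) over the eigenvalues of
    the sample covariance matrix; CLT-based covariance tests compare such
    statistics against their random-matrix limits under the null."""
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return np.sum(f(np.clip(eigvals, 1e-12, None)))
```
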
Testing the equality of several covariance matrices with fewer observations than the dimension
Corrections to LRT on large-dimensional covariance matrix by RMT
This paper explains the failure of two likelihood ratio procedures for testing covariance matrices of Gaussian populations when the dimension p is large relative to the sample size, and proposes corrections based on random matrix theory.
Regularized estimation of large covariance matrices
If the population covariance matrix is well-conditioned and well approximated by a banded matrix, then banding the sample covariance produces consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix.
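
A minimal sketch of the banding operator this line of work studies; choosing the bandwidth k (by resampling or cross-validation) is the part the sketch leaves out.

```python
import numpy as np

def band_covariance(S, k):
    """Banding operator: keep entries of S within k positions of the
    diagonal and zero out the rest."""
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= k, S, 0.0)
```
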
A test for the equality of covariance matrices when the dimension is large relative to the sample sizes
A new approach to Cholesky-based covariance regularization in high dimensions
A new regression interpretation of the Cholesky factor of the covariance matrix is proposed, as opposed to the well-known regression interpretation of the Cholesky factor of the inverse covariance matrix; it leads to a new class of regularized covariance estimators suitable for high-dimensional problems.
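
To make the contrast concrete, the sketch below implements the well-known regression interpretation that the summary refers to, i.e. the one for the Cholesky factor of the inverse covariance (regress each coordinate on its predecessors), not the new interpretation for the covariance itself proposed in the cited paper; the lasso penalty and its level are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def modified_cholesky_precision(X, alpha=0.1):
    """Regression interpretation of the Cholesky factor of the *inverse*
    covariance: regress each coordinate on its predecessors, store negated
    coefficients in a unit lower-triangular T and residual variances in a
    diagonal D, and use Sigma^{-1} ~ T' D^{-1} T."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()
    for j in range(1, p):
        fit = Lasso(alpha=alpha).fit(Xc[:, :j], Xc[:, j])
        T[j, :j] = -fit.coef_
        d[j] = (Xc[:, j] - fit.predict(Xc[:, :j])).var()
    return T.T @ np.diag(1.0 / d) @ T
```
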
On Consistency and Sparsity for Principal Components Analysis in High Dimensions
I. Johnstone and A. Lu, Journal of the American Statistical Association, 2009
A simple algorithm is provided for selecting a subset of coordinates with the largest sample variances, and it is shown that if PCA is done on the selected subset, then consistency is recovered, even if p(n) ≫ n.
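
A minimal sketch of that coordinate-selection idea, using a plain top-k rule in place of a variance threshold (a simplification made for this sketch, not the paper's rule); the function and argument names are mine.

```python
import numpy as np

def variance_thresholded_pca(X, n_keep, n_components=1):
    """Keep the n_keep coordinates with the largest sample variances, run
    PCA on that subset only, and embed the loadings back into R^p."""
    n, p = X.shape
    keep = np.argsort(X.var(axis=0))[::-1][:n_keep]
    eigvals, eigvecs = np.linalg.eigh(np.cov(X[:, keep], rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    loadings = np.zeros((p, n_components))
    loadings[keep] = eigvecs[:, order]
    return loadings, eigvals[order]
```
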