Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles

@article{Adamczak2009QuantitativeEO,
  title={Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles},
  author={Radosław Adamczak and Alexander E. Litvak and Alain Pajor and Nicole Tomczak-Jaegermann},
  journal={Journal of the American Mathematical Society},
  year={2010},
  volume={23},
  pages={535--561}
}
Let K be an isotropic convex body in Rn. Given ε > 0, how many independent points Xi uniformly distributed on K are needed for the empirical covariance matrix to approximate the identity up to ε with overwhelming probability? Our paper answers this question from [12]. More precisely, let X ∈ Rn be a centered random vector with a log-concave distribution and with the identity as covariance matrix. An example of such a vector X is a random point in an isotropic convex body. We show that for any ε…
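The sample-complexity question posed above can be illustrated numerically. The sketch below is not from the paper; the choice of distribution (uniform on a cube, a simple isotropic log-concave example after rescaling) and all function names are illustrative. It draws N independent points and measures the operator-norm distance between the empirical covariance matrix and the identity:

```python
import numpy as np

def empirical_covariance_error(n, N, rng):
    """Sample N points from an isotropic log-concave distribution
    (uniform on a cube, rescaled to unit covariance) and return the
    operator-norm distance between the empirical covariance matrix
    and the identity."""
    # Uniform on [-1, 1]^n has covariance (1/3) * I; rescaling by
    # sqrt(3) puts the vector in isotropic position (unit covariance).
    X = np.sqrt(3.0) * rng.uniform(-1.0, 1.0, size=(N, n))
    emp_cov = X.T @ X / N  # empirical covariance (the mean is zero)
    # ord=2 on a symmetric matrix is the operator (spectral) norm.
    return np.linalg.norm(emp_cov - np.eye(n), ord=2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    for N in (2 * n, 10 * n, 50 * n):
        err = empirical_covariance_error(n, N, rng)
        print(f"N = {N:5d}: ||Sigma_N - I||_op = {err:.3f}")
```

Running this for a fixed dimension shows the approximation error shrinking as N grows past a constant multiple of n, which is the regime the paper's question concerns.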
On the convergence of the extremal eigenvalues of empirical covariance matrices with dependence
Consider a sample of a centered random vector with unit covariance matrix. We show that under certain regularity assumptions, and up to a natural scaling, the smallest and the largest eigenvalues of
Invertibility via distance for noncentered random matrices with continuous distributions
The method is principally different from a standard approach involving a decomposition of the unit sphere and coverings, as well as an approach of Sankar-Spielman-Teng for non-centered Gaussian matrices.
Covariance estimation for distributions with 2+ε moments
We study the minimal sample size N=N(n) that suffices to estimate the covariance matrix of an n-dimensional distribution by the sample covariance matrix in the operator norm, with an arbitrary fixed
How Close is the Sample Covariance Matrix to the Actual Covariance Matrix?
Given a probability distribution in ℝn with general (nonwhite) covariance, a classical estimator of the covariance matrix is the sample covariance matrix obtained from a sample of N independent
Approximating the covariance ellipsoid
We explore ways in which the covariance ellipsoid B = {v ∈ Rn : E⟨X, v⟩² ≤ 1} of a centred random vector X in Rn can be approximated by a simple set. The data one is given for constructing the
Tail estimates for norms of sums of log‐concave random vectors
We establish new tail estimates for order statistics and for the Euclidean norms of projections of an isotropic log‐concave random vector. More generally, we prove tail estimates for the norms of
An efficiency upper bound for inverse covariance estimation
We derive a quantitative upper bound for the efficiency of estimating entries in the inverse covariance matrix of a high dimensional distribution. We show that in order to approximate an off-diagonal

References

Showing 1–10 of 41 references.
Restricted Isometry Property of Matrices with Independent Columns and Neighborly Polytopes by Random Sampling
This paper considers compressed sensing matrices and neighborliness of a centrally symmetric convex polytope generated by vectors ±X1,…,±XN∈ℝn, (N≥n). We introduce a class of random sampling matrices
On the Limiting Empirical Measure of the sum of rank one matrices with log-concave distribution
We consider $n\times n$ real symmetric and hermitian random matrices $H_{n,m}$ equals the sum of a non-random matrix $H_{n}^{(0)}$ matrix and the sum of $m$ rank-one matrices determined by $m$ i.i.d.
On weakly bounded empirical processes
Let F be a class of functions on a probability space (Ω, μ) and let X1,...,Xk be independent random variables distributed according to μ. We establish an upper bound that holds with high probability
Sampling convex bodies: a random matrix approach
We prove the following result: for any ε > 0, only C(ε)n sample points are enough to obtain a (1 + ε)-approximation of the inertia ellipsoid of an unconditional convex body in Rn. Moreover, for any p >
Random walks and an O*(n5) volume algorithm for convex bodies
This algorithm introduces three new ideas: the use of the isotropic position (or at least an approximation of it) for rounding; the separation of global obstructions and local obstructions for fast mixing; and a stepwise interlacing of rounding and sampling.
Random Points in Isotropic Unconditional Convex Bodies
The paper considers three questions about independent random points uniformly distributed in isotropic symmetric convex bodies K, T1,…,Ts. (a) Let ɛ ∈ (0,1) and let x1,…, xN be chosen from K. Is it
Convex set functions in d-space
Given subsets A and B of Euclidean d-space Rd and θ ≥ 0, we set A + B = {x + y : x ∈ A, y ∈ B} and θA = {θx : x ∈ A}. Further, given a convex subset Ω of Rd we shall say that a set function F : 2^Ω
Weak Convergence and Empirical Processes: With Applications to Statistics
This chapter discusses weak convergence, almost uniform convergence, and convergence in probability, focusing on the part of the Donsker property concerned with uniformity and metrization.
Probability in Banach Spaces: Isoperimetry and Processes
Notation.- 0. Isoperimetric Background and Generalities.- 1. Isoperimetric Inequalities and the Concentration of Measure Phenomenon.- 2. Generalities on Banach Space Valued Random Variables and