Sample covariance matrices of heavy-tailed distributions

@article{Tikhomirov2016SampleCM,
  title={Sample covariance matrices of heavy-tailed distributions},
  author={Konstantin E. Tikhomirov},
  journal={arXiv: Probability},
  year={2016}
}
Let $p>2$, $B\geq 1$, $N\geq n$ and let $X$ be a centered $n$-dimensional random vector with the identity covariance matrix such that $\sup\limits_{a\in S^{n-1}}{\mathrm E}|\langle X,a\rangle|^p\leq B$. Further, let $X_1,X_2,\dots,X_N$ be independent copies of $X$, and $\Sigma_N:=\frac{1}{N}\sum_{i=1}^N X_i {X_i}^T$ be the sample covariance matrix. We prove that $$K^{-1}\|\Sigma_N-I_n\|_{2\to 2}\leq\frac{1}{N}\max\limits_{i\leq N}\|X_i\|^2 +\Bigl(\frac{n}{N}\Bigr)^{1-2/p}\log^4\frac{N}{n}+\Bigl… 
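The objects in the theorem can be illustrated with a minimal NumPy sketch. The heavy-tailed distribution (Student-$t$ entries rescaled to unit variance) and the sizes $n$, $N$, $p$ below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, p = 20, 2000, 3  # dimension, sample size, moment exponent (illustrative)

# Heavy-tailed isotropic vectors: i.i.d. Student-t entries with p + 1 degrees
# of freedom (so E|entry|^p < inf), rescaled to unit variance so Cov(X) = I_n.
df = p + 1
X = rng.standard_t(df, size=(N, n)) / np.sqrt(df / (df - 2))

# Sample covariance matrix Sigma_N = (1/N) sum_i X_i X_i^T
Sigma_N = X.T @ X / N

# Operator-norm deviation from the identity: the left-hand side of the bound
deviation = np.linalg.norm(Sigma_N - np.eye(n), ord=2)

# The two leading terms on the right-hand side of the bound
term_norms = np.max(np.sum(X**2, axis=1)) / N          # (1/N) max_i ||X_i||^2
term_rate = (n / N) ** (1 - 2 / p) * np.log(N / n) ** 4

print(deviation, term_norms, term_rate)
```

The constant $K$ is not computed here; the sketch only shows how the deviation and the two dominant terms of the bound are formed from a sample.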
Approximating the covariance ellipsoid
TLDR
It is shown that if the slabs are replaced by randomly generated ellipsoids defined using X, the same degree of approximation is true when $N \geq c_1d\eta^{-2}\log(2/\eta)$.
The smallest singular value of a shifted d-regular random square matrix
We derive a lower bound on the smallest singular value of a random d-regular matrix, that is, the adjacency matrix of a random d-regular directed graph. Specifically, let $C_1 < d < c\,n/\log^2 n$ …
An upper bound on the smallest singular value of a square random matrix
  • K. Tatarko
  • Computer Science, Mathematics
  • J. Complexity
  • 2018
TLDR
The upper bound for the smallest singular value $s_n(A)$ is of order $n^{-\frac12}$ with probability close to one, under the additional assumption on the entries of $A$ that $\mathbb{E}a^4_{ij} < \infty$.
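A quick numerical sanity check of the $n^{-1/2}$ order, using Gaussian entries (which satisfy the fourth-moment assumption); the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

for n in (100, 400):
    # Square matrix with i.i.d. standard normal entries, so E a_ij^4 < inf
    A = rng.standard_normal((n, n))
    # Smallest singular value of A
    s_min = np.linalg.svd(A, compute_uv=False)[-1]
    # s_min should be of order n^{-1/2}: the rescaled value stays O(1) as n grows
    print(n, s_min, s_min * np.sqrt(n))
```

The rescaled quantity $\sqrt{n}\,s_n(A)$ fluctuates but remains of constant order, consistent with the stated rate.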
Polynomial Threshold Functions, Hyperplane Arrangements, and Random Tensors
TLDR
The problem of counting low-degree polynomial threshold functions is settled for all degrees, showing that $\log_2 T(n,d) \approx n \binom{n}{\le d}$.
Extending the scope of the small-ball method
The small-ball method was introduced as a way of obtaining a high probability, isomorphic lower bound on the quadratic empirical process, under weak assumptions on the indexing class. The key …
Random embeddings with an almost Gaussian distortion
Let $X$ be a symmetric, isotropic random vector in $\mathbb{R}^n$ and let $X_1,\dots,X_n$ be independent copies of $X$. We show that under mild assumptions on $\|X\|_2$ (a suitable thin-shell bound) and on the tail decay of the …
Regression adjustment in randomized experiments with a diverging number of covariates
TLDR
The results are appealing because they use Neyman's randomization model without imposing any parametric assumptions, and the consistency and asymptotic normality of the estimator hold even if the linear model is misspecified; the analysis, however, requires novel analytic tools for sampling without replacement.
Berry-Esseen Bounds for Projection Parameters and Partial Correlations with Increasing Dimension
The linear regression model can be used even when the true regression function is not linear. The resulting estimated linear function is the best linear approximation to the regression function and …
On Monte-Carlo methods in convex stochastic optimization
We develop a novel procedure for estimating the optimizer of general convex stochastic optimization problems of the form $\min_{x\in\mathcal{X}} \mathbb{E}[F(x,\xi)]$, when the given data is a finite independent sample …
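The plain sample-average-approximation baseline that such procedures are compared against can be sketched on a toy instance. The objective $F(x,\xi) = (x-\xi)^2$ and the exponential data below are assumed examples, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance (assumed): F(x, xi) = (x - xi)^2, so the population problem
# min_x E[F(x, xi)] is solved by x* = E[xi].
xi = rng.exponential(scale=2.0, size=5000)  # sample with E[xi] = 2.0

def saa_optimizer(sample):
    """Sample-average approximation: minimize the empirical mean of F.

    For the quadratic F above, the empirical objective is minimized
    exactly at the sample mean."""
    return sample.mean()

x_hat = saa_optimizer(xi)
print(x_hat)  # close to the population optimizer x* = 2.0
```

With heavier-tailed data, the sample mean (and SAA more generally) can be a poor estimator; the abstract's procedure targets exactly such regimes.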

References

Showing 1–10 of 36 references
Sharp bounds on the rate of convergence of the empirical covariance matrix
Let $X_1,\dots,X_N\in\mathbb{R}^n$ be independent centered random vectors with log-concave distribution and with the identity as covariance matrix. We show that with overwhelming probability at least $1 - 3\dots$
Universality of covariance matrices
In this paper we prove the universality of covariance matrices of the form $H_{N\times N}={X}^{\dagger}X$ where $X$ is an ${M\times N}$ rectangular matrix with independent real valued entries
Bounding the smallest singular value of a random matrix without concentration
Given $X$ a random vector in $\mathbb{R}^n$, set $X_1,\dots,X_N$ to be independent copies of $X$ and let $\Gamma=\frac{1}{\sqrt{N}}\sum_{i=1}^N \langle X_i,\cdot\rangle e_i$ be the matrix whose rows are …
On the interval of fluctuation of the singular values of random matrices
TLDR
It is proved that with high probability $A/\sqrt{n}$ has the Restricted Isometry Property (RIP), provided that the Euclidean norms $\|X_i\|$ are concentrated around $\sqrt{n}$.
The limit of the smallest singular value of random matrices with i.i.d. entries
Let $\{a_{ij}\}$ $(1\le i,j<\infty)$ be i.i.d. real-valued random variables with zero mean and unit variance, and let an integer sequence $(N_m)_{m=1}^\infty$ satisfy $m/N_m\longrightarrow z$ for some …
Sharp lower bounds on the least singular value of a random matrix without the fourth moment condition
We obtain non-asymptotic lower bounds on the least singular value of ${\mathbf X}_{pn}^\top/\sqrt{n}$, where ${\mathbf X}_{pn}$ is a $p\times n$ random matrix whose columns are independent copies of …
On the Increase of Dispersion of Sums of Independent Random Variables
Let $\xi_1, \xi_2, \dots, \xi_n$ be independent random variables, and set $Q_k\{l\} = \sup_x \mathbf{P}\{x \le \xi_k \le x + l\}$, $Q(L) = \sup_x \mathbf{P}\{\dots\}$ …
The smallest singular value of random rectangular matrices with no moment assumptions on entries
Let $\delta > 1$ and $\beta > 0$ be some real numbers. We prove that there are positive $u$, $v$, $N_0$ depending only on $\beta$ and $\delta$ with the following property: for any $N, n$ such that $N \ge \max(N_0, \delta n)$, any $N \times n$ random matrix …
How Close is the Sample Covariance Matrix to the Actual Covariance Matrix?
Given a probability distribution in $\mathbb{R}^n$ with general (non-white) covariance, a classical estimator of the covariance matrix is the sample covariance matrix obtained from a sample of $N$ independent …
Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
Let $K$ be an isotropic convex body in $\mathbb{R}^n$. Given $\varepsilon > 0$, how many independent points $X_i$ uniformly distributed on $K$ are needed for the empirical covariance matrix to approximate the identity up to $\varepsilon$ with …