
Estimating the eigenvalues of a population covariance matrix from a sample covariance matrix is a problem of fundamental importance in multivariate statistics; the eigenvalues of covariance matrices play a key role in many widely used techniques, in particular in Principal Component Analysis (PCA). In many modern data analysis problems, statisticians are faced…

Estimating covariance matrices is a problem of fundamental importance in multivariate statistics. In practice it is increasingly frequent to work with data matrices X of dimension n × p, where p and n are both large. Results from random matrix theory show very clearly that in this setting, standard estimators like the sample covariance matrix perform in…
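The phenomenon this abstract alludes to is easy to see numerically. The sketch below (an illustration under my own assumptions, not code from the paper) draws data with identity population covariance and shows that the sample eigenvalues spread out over the Marchenko–Pastur support instead of clustering at 1:

```python
import numpy as np

# Illustrative sketch: even when the population covariance is the identity,
# the eigenvalues of the sample covariance matrix spread out when p is
# comparable to n, instead of concentrating near 1.
rng = np.random.default_rng(0)
n, p = 500, 250                      # p/n = 0.5: the high-dimensional regime
X = rng.standard_normal((n, p))      # rows i.i.d. N(0, I_p)
S = X.T @ X / n                      # sample covariance matrix
eigs = np.linalg.eigvalsh(S)

# Random matrix theory predicts the eigenvalues roughly fill the
# Marchenko-Pastur support [(1 - sqrt(p/n))^2, (1 + sqrt(p/n))^2].
gamma = p / n
support = ((1 - gamma ** 0.5) ** 2, (1 + gamma ** 0.5) ** 2)
```

With p/n = 0.5 the support is roughly [0.09, 2.91], so the sample spectrum badly distorts the true (all-ones) spectrum even though each individual entry of S is a fine estimate.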

On the largest eigenvalue of Wishart matrices with identity covariance when n, p and p/n → ∞. Let X be an n × p matrix and l1 the largest eigenvalue of the covariance matrix X*X. The "null case" where X_ij ∼ N(0, 1) is of particular interest for principal component analysis. For this model, when n, p → ∞ and n/p → γ ∈ ℝ*₊, it was shown in…

We place ourselves in the setting of high-dimensional statistical inference, where the number of variables p in a dataset of interest is of the same order of magnitude as the number of observations n. More formally we study the asymptotic properties of correlation and covariance matrices under the setting that p/n → ρ ∈ (0, ∞), for general population…

The problem of understanding the limiting behavior of the largest eigenvalue of sample covariance matrices computed from data matrices for which both dimensions are large has recently attracted a lot of attention. In this paper we consider the following type of complex sample covariance matrices. Let X be an n × p matrix, and let its rows be i.i.d. N_C(0,…

We consider the asymptotic fluctuation behavior of the largest eigenvalue of certain sample covariance matrices in the asymptotic regime where both dimensions of the corresponding data matrix go to infinity. More precisely, let X be an n × p matrix, and let its rows be i.i.d. complex normal vectors with mean 0 and covariance Σ_p. We show that for a large…

It has been recently shown that if X is an n × N matrix whose entries are i.i.d. standard complex Gaussian and l1 is the largest eigenvalue of X*X, there exist sequences m_{n,N} and s_{n,N} such that (l1 − m_{n,N})/s_{n,N} converges in distribution to W2, the Tracy–Widom law appearing in the study of the Gaussian unitary ensemble. This probability law has a density…
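A numerical sketch of this setup is below. The particular centering and scaling sequences used here are the standard (√n + √N)-type formulas from this literature; treat them as an assumption of the sketch rather than a quote from the paper, whose exact constants may differ:

```python
import numpy as np

# Assumed centering and scaling sequences for the complex Gaussian "null"
# case (standard (sqrt(n) + sqrt(N))-type formulas; an illustration only).
def center_scale(n, N):
    m = (np.sqrt(n) + np.sqrt(N)) ** 2
    s = (np.sqrt(n) + np.sqrt(N)) * (1 / np.sqrt(n) + 1 / np.sqrt(N)) ** (1 / 3)
    return m, s

# One Monte Carlo draw: the rescaled largest eigenvalue should be O(1).
rng = np.random.default_rng(1)
n, N = 200, 200
X = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
l1 = float(np.linalg.eigvalsh(X.conj().T @ X).max())  # largest eigenvalue of X*X
m, s = center_scale(n, N)
t = (l1 - m) / s    # approximately Tracy-Widom (W2) distributed
```

Note the scale separation: l1 itself is of order (√n + √N)² ≈ 800 here, while its fluctuations live on the much smaller scale s ≈ 15, which is what makes the centering and scaling necessary.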

We study regression M-estimates in the setting where p, the number of covariates, and n, the number of observations, are both large, but p ≤ n. We find an exact stochastic representation for the distribution of β̂ = argmin_{β ∈ ℝ^p} Σ_{i=1}^{n} ρ(Y_i − X_i′β) at fixed p and n under various assumptions on the objective function ρ and our statistical model. A…
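Concretely, an M-estimate of this form can be computed by direct minimization. The sketch below uses the Huber ρ as one common choice (the paper treats general ρ; the specific loss, data model, and optimizer here are my own illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Huber loss: quadratic near 0, linear in the tails (illustrative choice of rho).
def huber(r, k=1.345):
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r ** 2, k * a - 0.5 * k ** 2)

# Synthetic data in the regime studied: p comparable to n, but p < n.
rng = np.random.default_rng(2)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta_true = np.ones(p)
Y = X @ beta_true + rng.standard_normal(n)

# beta_hat = argmin_beta sum_i rho(Y_i - X_i' beta)
obj = lambda b: float(huber(Y - X @ b).sum())
beta_hat = minimize(obj, x0=np.zeros(p), method="BFGS").x
```

The point of the paper's regime is that with p of the same order as n, classical fixed-p asymptotics for β̂ no longer apply, which motivates the exact stochastic representation.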

We study the properties of solutions of quadratic programs with linear equality constraints whose parameters are estimated from data in the high-dimensional setting where p, the number of variables in the problem, is of the same order of magnitude as n, the number of observations used to estimate the parameters. The Markowitz problem in Finance is a subcase…
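For orientation, the simplest instance of such a program is the minimum-variance Markowitz problem: minimize w′Σw subject to 1′w = 1, which has the closed form w = Σ⁻¹1 / (1′Σ⁻¹1). The sketch below solves it with a known Σ; the paper's concern is what changes when Σ must itself be estimated (the Σ here is a synthetic stand-in):

```python
import numpy as np

# Minimum-variance portfolio: minimize w' Sigma w subject to 1'w = 1.
# Lagrangian calculus gives w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
rng = np.random.default_rng(3)
p = 5
A = rng.standard_normal((p, p))
Sigma = A @ A.T + np.eye(p)      # a positive-definite stand-in "covariance"
ones = np.ones(p)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w                    # optimal weights, summing to 1
```

Any other feasible weight vector (e.g. equal weights 1/p) attains a variance w′Σw at least as large, which is a quick sanity check on the closed form.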

We place ourselves in the setting of high-dimensional statistical inference, where the number of variables p in a dataset of interest is of the same order of magnitude as the number of observations n. We consider the spectrum of certain kernel random matrices, in particular n × n matrices whose (i, j)-th entry is f(X_i · X_j / p) or f(‖X_i − X_j‖² / p), where…
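The matrices described above can be sketched as follows; tanh is just one illustrative choice of smooth kernel function f, not one singled out by the paper:

```python
import numpy as np

# Inner-product kernel random matrix: M_ij = f(X_i . X_j / p),
# with n and p of the same order, as in the high-dimensional setting above.
rng = np.random.default_rng(4)
n, p = 100, 100
X = rng.standard_normal((n, p))
M = np.tanh(X @ X.T / p)         # n x n symmetric kernel matrix
eigs = np.linalg.eigvalsh(M)     # its spectrum is the object of study
```

Note the 1/p scaling inside f: it keeps the off-diagonal arguments X_i · X_j / p of order p^{-1/2} and the diagonal arguments ‖X_i‖²/p near 1, which is what makes a nontrivial limiting spectrum possible.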