Corpus ID: 224803633

Concentration of solutions to random equations with concentration of measure hypotheses

@article{Louart2020ConcentrationOS,
  title={Concentration of solutions to random equations with concentration of measure hypotheses},
  author={Cosme Louart and Romain Couillet},
  journal={arXiv: Probability},
  year={2020}
}
We propose here to study the concentration of random objects that are implicitly formulated as fixed points of equations Y = f(X), where f is a random mapping. Starting from a hypothesis drawn from concentration of measure theory, we are able to express precisely the concentration of such solutions, under a contractivity hypothesis on f. This statement has important implications for random matrix theory and is at the basis of the study of some optimization procedures like the…
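
To make the setting concrete, here is a minimal numerical sketch (a hypothetical toy map, not the construction from the paper): when f is an eps-Lipschitz contraction, Picard iteration converges to the fixed point, and a Monte Carlo probe shows the solutions concentrating across random draws of the inputs.

import numpy as np

rng = np.random.default_rng(0)
d, eps = 50, 0.3  # dimension; eps < 1 makes the toy map a contraction

def solve_fixed_point(A, b, n_iter=200):
    # Picard iteration y <- f(y) = eps * A @ tanh(y) + b.
    # tanh is 1-Lipschitz, so f is eps-Lipschitz whenever ||A||_2 <= 1,
    # and Banach's fixed-point theorem guarantees convergence.
    y = np.zeros(d)
    for _ in range(n_iter):
        y = eps * A @ np.tanh(y) + b
    return y

# Monte Carlo probe: solutions for i.i.d. random inputs cluster tightly.
sols = []
for _ in range(100):
    A = rng.standard_normal((d, d)) / np.sqrt(d)
    A /= max(1.0, np.linalg.norm(A, 2))   # enforce ||A||_2 <= 1
    b = rng.standard_normal(d) / np.sqrt(d)
    sols.append(solve_fixed_point(A, b))
print(np.std([np.linalg.norm(s) for s in sols]))  # small spread = concentration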

Tyler's and Maronna's M-estimators: Non-Asymptotic Concentration Results

This work derives tight non-asymptotic concentration bounds for Tyler's and Maronna's M-estimators around a suitably scaled version of the data sample covariance matrix.
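
As background, Tyler's M-estimator is itself defined as the fixed point of an explicit iteration. A minimal sketch in its standard form follows (the trace normalization and stopping rule are illustrative choices, not taken from this work):

import numpy as np

def tyler_m_estimator(X, n_iter=100, tol=1e-8):
    # X: (n, p) data matrix with rows x_i, assumed centered; requires n > p.
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # weights w_i = p / (x_i^T sigma^{-1} x_i)
        w = p / np.einsum('ij,jk,ik->i', X, inv, X)
        new = (X * w[:, None]).T @ X / n   # (1/n) sum_i w_i x_i x_i^T
        new *= p / np.trace(new)           # fix the scale ambiguity: tr = p
        if np.linalg.norm(new - sigma, 'fro') < tol:
            return new
        sigma = new
    return sigma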

The Unexpected Deterministic and Universal Behavior of Large Softmax Classifiers

It is found that, when the Softmax classifier is trained on data satisfying loose statistical modeling assumptions, its weights become deterministic and depend solely on the data's statistical means and covariances, thereby disrupting the intuition that non-linearities inherently extract advanced statistical features from the data.

Deciphering Lasso-based Classification Through a Large Dimensional Analysis of the Iterative Soft-Thresholding Algorithm

A theoretical analysis of a Lasso-based classification algorithm is provided, based on an original analysis of the Iterative Soft-Thresholding Algorithm (ISTA), which may be of independent interest beyond the particular problem studied here and may be adapted to similar iterative schemes.
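
For reference, ISTA itself takes only a few lines. A minimal sketch for the Lasso objective (1/2)||y - Xb||^2 + lam*||b||_1 (the standard algorithm, not the specific variant analyzed in the paper) is:

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1, applied elementwise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    # minimizes 0.5 * ||y - X b||^2 + lam * ||b||_1
    L = np.linalg.norm(X, ord=2) ** 2   # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta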

Quantitative deterministic equivalent of sample covariance matrices with a general dependence structure

A new bound on the convergence in Kolmogorov distance of the empirical spectral distributions of these general models is obtained, and this framework is applied to the problem of regularization of Random Features models in Machine Learning without a Gaussian hypothesis.
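
For orientation, the deterministic equivalent of the resolvent of a sample covariance matrix classically takes the following fixed-point form (standard random-matrix notation; the paper's general dependence setting may differ):

\[
\bar Q(z) = \left( \frac{\Sigma}{1 + \delta(z)} - z I_p \right)^{-1},
\qquad
\delta(z) = \frac{1}{n} \operatorname{tr}\big( \Sigma \bar Q(z) \big),
\]

so that \( \mathbb{E}\big[ (S - z I_p)^{-1} \big] \approx \bar Q(z) \) for \( S = \frac{1}{n} X X^\top \) with column covariance \( \Sigma \).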

References

Showing 1-10 of 19 references

Concentration of Measure and Large Random Matrices with an application to Sample Covariance Matrices

An original framework for random matrix analysis is provided, based on revisiting concentration of measure theory for random vectors; it finds a large range of applications in statistical learning and beyond, starting with the capacity to easily analyze the performance of artificial neural networks and random feature maps.
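
The central notion in this framework is Lipschitz concentration; in its standard form (constants may differ from the paper's), a random vector Z in R^p is q-exponentially concentrated with observable diameter sigma when

\[
\mathbb{P}\big( |f(Z) - m_{f(Z)}| \ge t \big) \le C\, e^{-(t/\sigma)^q}
\quad \text{for every 1-Lipschitz } f : \mathbb{R}^p \to \mathbb{R},
\]

where \( m_{f(Z)} \) denotes a median of \( f(Z) \).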

On robust regression with high-dimensional predictors

A nonlinear system of two deterministic equations is discovered that characterizes the estimator; the system depends on the loss ρ through its proximal mapping, as well as on various aspects of the statistical model underlying this study.
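
For the reader's convenience, the proximal mapping through which the system depends on ρ is the standard one:

\[
\operatorname{prox}_{c\rho}(x) = \arg\min_{z \in \mathbb{R}} \left\{ c\,\rho(z) + \tfrac{1}{2}(x - z)^2 \right\}, \qquad c > 0.
\]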

Concentration of measure and isoperimetric inequalities in product spaces

The concentration of measure phenomenon in product spaces roughly states that, if a set A in a product Ω^N of probability spaces has measure at least one half, "most" of the points of Ω^N are "close" to A.
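
A representative statement is Talagrand's convex distance inequality: for a product probability measure on Ω^N and any measurable set A,

\[
\mathbb{P}(A)\; \mathbb{P}\big( \{ x : d_c(x, A) \ge t \} \big) \le e^{-t^2/4},
\]

where d_c denotes the convex distance; in particular, when P(A) ≥ 1/2, all but an exponentially small fraction of points lie within convex distance t of A.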

A Concentration of Measure and Random Matrix Approach to Large Dimensional Robust Statistics

This article exploits a dedicated semi-metric, along with concentration of measure arguments, to prove the existence and uniqueness of the robust estimator as well as to evaluate its limiting spectral distribution.

A Large Scale Analysis of Logistic Regression: Asymptotic Performance and New Insights

This paper considers the "hard" classification problem of separating high-dimensional Gaussian vectors, where the data dimension p and the sample size n are both large, and evaluates the asymptotic distribution of the logistic regression classifier, consequently providing the associated classification performance.

Topics in Random Matrix Theory

The field of random matrix theory has seen an explosion of activity in recent years, with connections to many areas of mathematics and physics. However, this makes the current state of the field almost too large to survey in a single book.

Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures

This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called concentrated random vectors.

The concentration of measure phenomenon

Table of contents: Concentration functions and inequalities; Isoperimetric and functional examples; Concentration and geometry; Concentration in product spaces; Entropy and concentration; Transportation cost inequalities.

Large Sample Covariance Matrices Without Independence Structures in Columns

The limiting spectral distribution of large sample covariance matrices is derived under dependence conditions. As applications, we obtain the limiting spectral distributions of Spearman's rank correlation matrices.

Möbius Functions - Rota's Formula

  • 2006