Corpus ID: 211677952

On Minimax Exponents of Sparse Testing

@article{Mukherjee2020OnME,
  title={On Minimax Exponents of Sparse Testing},
  author={Rajarshi Mukherjee and Subhabrata Sen},
  journal={arXiv: Statistics Theory},
  year={2020}
}
We consider exact asymptotics of the minimax risk for global testing against sparse alternatives in the context of high-dimensional linear regression. Our results characterize the leading-order behavior of this minimax risk in several regimes, uncovering new phase transitions in its behavior. This complements a vast literature characterizing asymptotic consistency in this problem and provides a useful benchmark against which the performance of specific tests may be compared. Finally, we…
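
For orientation, the standard formulation of this testing problem (our notation and normalization, which may differ from the paper's) is as follows. Given $y = X\beta + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2 I_n)$, one tests

$H_0 : \beta = 0$ versus $H_1 : \beta \in \{ b : \|b\|_0 \le s,\ \|b\|_2 \ge \rho \}$,

and the minimax testing risk at separation $\rho$ is

$\mathcal{R}(\rho) = \inf_{\psi} \big\{ \mathbb{P}_0(\psi = 1) + \sup_{\beta \in H_1} \mathbb{P}_{\beta}(\psi = 0) \big\},$

with the infimum over all tests $\psi$. Consistency results ask when $\mathcal{R}(\rho) \to 0$; the exact asymptotics above concern the leading-order behavior of $\mathcal{R}(\rho)$ itself.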

High dimensional asymptotics of likelihood ratio tests in the Gaussian sequence model under convex constraints

Under the null hypothesis, normal approximation holds for the log-likelihood ratio statistic for a general pair $(\mu_0, K)$, in the high-dimensional regime where the estimation error of the associated least squares estimator diverges in an appropriate sense.

Estimation of the ℓ2-norm and testing in sparse linear regression with unknown variance

We consider the related problems of estimating the $\ell_2$-norm and the squared $\ell_2$-norm in sparse linear regression with unknown variance, as well as the problem of testing the hypothesis that the regression vector is zero.

Consistency of invariance-based randomization tests

  • Edgar Dobriban
  • Computer Science, Mathematics
    The Annals of Statistics
  • 2022
A general framework and a set of results on the consistency of invariance-based randomization tests in signal-plus-noise models are developed; compared with minimax lower bounds, it is found, perhaps surprisingly, that in some cases randomization tests detect signals at the minimax optimal rate.
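
As a concrete illustration of the objects involved, here is a minimal sketch of an invariance-based randomization test for a signal-plus-noise model $y = \mu + \xi$ with sign-symmetric noise; the statistic and all names below are illustrative choices, not the paper's construction.

import numpy as np

def sign_flip_pvalue(y, n_draws=999, seed=0):
    # Randomization test of H0: mu = 0 for y = mu + noise, exactly valid
    # whenever the noise is symmetric under coordinatewise sign flips.
    rng = np.random.default_rng(seed)
    stat = np.sum(y) ** 2  # a simple linear statistic; other choices work
    null = np.array([
        np.sum(rng.choice([-1.0, 1.0], size=y.size) * y) ** 2
        for _ in range(n_draws)
    ])
    # The +1 in numerator and denominator keeps the test exactly valid.
    return (1 + np.sum(null >= stat)) / (1 + n_draws)

Under the null, sign symmetry makes this p-value valid by construction; consistency results of the kind summarized above ask how large the signal must be before the p-value concentrates near zero.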

Near-Optimal Procedures for Model Discrimination with Non-Disclosure Properties

This work provides matching upper and lower bounds on the sample complexity, given by $\min\{1/\Delta^2, \sqrt{r}/\Delta\}$ up to a constant factor; here $\Delta$ is a measure of separation between $P_0$ and $P_1$ and $r$ is the rank of the design covariance matrix.

Near-Optimal Model Discrimination with Non-Disclosure

This work provides matching upper and lower bounds on the sample complexity of the general parametric setup in the asymptotic regime and, for generalized linear models in the small-sample regime under weak moment assumptions, derives sample complexity bounds of a similar form, even under misspecification.

References

Showing 1-10 of 79 references

Variable selection with Hamming loss

Non-asymptotic bounds for the minimax risk of variable selection under expected Hamming loss in the Gaussian mean model in $\mathbb{R}^d$ are derived, and data-driven selectors that provide almost full and exact recovery adaptively to the parameters of the classes are proposed.
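
In symbols, the quantity being bounded is (one standard formalization; the paper's exact parameter classes may differ)

$\inf_{\hat\eta} \sup_{\theta \in \Theta_d(s, a)} \mathbb{E}_\theta \sum_{j=1}^{d} \big| \hat\eta_j - \mathbf{1}\{\theta_j \neq 0\} \big|,$

the minimax expected number of misclassified coordinates, where $\Theta_d(s, a)$ collects $s$-sparse vectors in $\mathbb{R}^d$ whose nonzero entries exceed $a$ in magnitude and the infimum runs over all selectors $\hat\eta \in \{0, 1\}^d$.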

Optimal detection of sparse principal components in high dimension

The minimax optimal test is based on a sparse eigenvalue statistic, and a computationally efficient alternative test using convex relaxations is described, which is proved to detect sparse principal components at near optimal detection levels and performs well on simulated datasets.
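
The sparse eigenvalue statistic referred to here is, in its usual form (notation ours),

$\hat\lambda^{(k)}_{\max} = \max_{|S| \le k} \lambda_{\max}\big( \hat\Sigma_{S,S} \big),$

the largest eigenvalue over all principal submatrices of the sample covariance $\hat\Sigma$ of size at most $k$; computing it exactly is combinatorial, which is what motivates the convex relaxation mentioned in the summary.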

Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism

Testing for the significance of a subset of regression coefficients in a linear model, a staple of statistical analysis, goes back at least to the work of Fisher, who introduced the analysis of variance.
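
A minimal sketch of the higher-criticism statistic in its usual Donoho-Jin form (one common variant; papers differ in the range over which the maximum is taken):

import numpy as np
from scipy.stats import norm

def higher_criticism(z, alpha0=0.5):
    # Compare the empirical distribution of p-values against the uniform,
    # standardize the discrepancy pointwise, and take the maximum.
    n = z.size
    p = np.sort(2.0 * norm.sf(np.abs(z)))  # sorted two-sided p-values
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    return np.max(hc[: int(alpha0 * n)])   # maximize over the smallest p-values

Large values indicate a departure from the global null; calibration is typically asymptotic or by simulation.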

A Comparison of the Lasso and Marginal Regression

This paper compares the conditions under which the lasso and marginal regression guarantee exact recovery with high probability in the fixed design, noise free, random coefficients case, and derives rates of convergence for both procedures.

Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso)

  • M. Wainwright
  • Computer Science
    IEEE Transactions on Information Theory
  • 2009
This work analyzes the behavior of $\ell_1$-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern of a vector $\beta^*$ based on observations contaminated by noise, and establishes precise conditions on the problem dimension $p$, the number $k$ of nonzero elements in $\beta^*$, and the number of observations $n$.
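
The flavor of the threshold result can be reproduced in a small simulation; the constants and the regularization level below are ad hoc illustrative choices, not those of the paper.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, k, sigma = 200, 5, 0.5
n = int(4 * k * np.log(p - k))  # sample size scaled as in the sharp threshold
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 1.0
y = X @ beta + sigma * rng.standard_normal(n)

# Regularization of order sigma * sqrt(log p / n), as the theory suggests.
lam = 2.0 * sigma * np.sqrt(np.log(p) / n)
fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
support = np.flatnonzero(np.abs(fit.coef_) > 1e-8)
print(support)  # expect [0 1 2 3 4] when n clears the threshold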

Optimal Variable Selection and Adaptive Noisy Compressed Sensing

This work develops an adaptive version of the proposed selection procedure and a robust variant of the method to handle datasets with outliers and heavy-tailed distributions of observations; the resulting procedure is near optimal, adaptive to all parameters of the problem, and robust.

Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators

It is proved that the thresholded Lasso and Dantzig estimators with a proper choice of the threshold simultaneously enjoy a sign concentration property, provided that the non-zero components of the target vector are not too small.

The Sparse Poisson Means Model

The problem of detecting a sparse Poisson mixture is considered and a form of higher criticism achieves the detection boundary in the whole sparse regime when the Poisson means are smaller than logarithmic in the sample size.

Adaptive estimation of the sparsity in the Gaussian vector model

This work derives a new way of assessing the optimality of a sparsity estimator and exhibits such an optimal procedure for the Gaussian vector model with mean value $\theta$.

Nearly unbiased variable selection under minimax concave penalty

It is proved that at a universal penalty level, the MC+ has high probability of matching the signs of the unknowns, and thus correct selection, without assuming the strong irrepresentable condition required by the LASSO.
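
For reference, the MC+ penalty applied to each coordinate is (standard definition; notation ours)

$\rho(t; \lambda, \gamma) = \lambda \int_0^{|t|} \Big( 1 - \frac{x}{\gamma \lambda} \Big)_{+} \, dx = \begin{cases} \lambda |t| - t^2/(2\gamma), & |t| \le \gamma \lambda, \\ \gamma \lambda^2 / 2, & |t| > \gamma \lambda, \end{cases}$

so it matches the Lasso penalty near zero but flattens out, imposing no shrinkage on large coefficients; this is the mechanism behind the near-unbiasedness claimed above.
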
...