Minimax and Adaptive Inference in Nonparametric Function Estimation

  • T. Tony Cai
  • Published 1 February 2012
  • Mathematics
  • Statistical Science
Since Stein's 1956 seminal paper, shrinkage has played a fundamental role in both parametric and nonparametric inference. This article discusses minimaxity and adaptive minimaxity in nonparametric function estimation. Three interrelated problems are considered: function estimation under global integrated squared error, estimation under pointwise squared error, and nonparametric confidence intervals. Shrinkage is pivotal in the development of both the minimax theory and the adaptation theory…
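As a hedged illustration of the shrinkage idea the abstract refers to, the following is a minimal positive-part James–Stein estimator for a Gaussian mean vector. The function name, the zero shrinkage target, and the noise setup are illustrative choices, not details from the paper:

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein shrinkage of y ~ N(theta, sigma2 * I_d),
    shrinking toward the origin."""
    y = np.asarray(y, dtype=float)
    d = len(y)
    if d < 3:
        return y.copy()  # shrinkage dominates the MLE only for d >= 3
    norm2 = np.sum(y ** 2)
    # positive-part shrinkage factor: never shrink past zero
    factor = max(0.0, 1.0 - (d - 2) * sigma2 / norm2)
    return factor * y

rng = np.random.default_rng(0)
theta = np.zeros(50)              # true mean (here: the zero vector)
y = theta + rng.standard_normal(50)
est = james_stein(y)
# for a mean near zero, shrinkage sharply reduces squared error
print(np.sum((est - theta) ** 2) < np.sum((y - theta) ** 2))
```

With a true mean at the origin, the shrinkage factor is close to zero and nearly all of the noise is removed; for means far from the shrinkage target the factor approaches one and the estimator reverts to the raw observations.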

Paper Mentions

Adaptive Inference in Multivariate Nonparametric Regression Models Under Monotonicity
We consider the problem of adaptive inference on a regression function at a point under a multivariate nonparametric regression setting. The regression function belongs to a Hölder class and is
On shrinking minimax convergence in nonparametric statistics
‘ … if we are prepared to assume that the unknown density has k derivatives, then … the optimal mean integrated squared error is of order n^(−2k/(2k+1)) … ’ The citation is from Silverman [(1986),
From multiple Gaussian sequences to functional data and beyond: a Stein estimation approach
It is shown that the simultaneous recovery is adaptive to an oracle strategy, which also enjoys a robustness guarantee in a minimax sense; the model projection is extended to general bases under mild conditions on the correlation structure, with potential applications to other statistical problems.
Improved minimax estimation of a multivariate normal mean under heteroscedasticity
Consider the problem of estimating a multivariate normal mean with a known variance matrix, which is not necessarily proportional to the identity matrix. The coordinates are shrunk directly in
Minimax Estimation of Discrete Distributions Under $\ell _{1}$ Loss
This work provides tight upper and lower bounds on the maximum risk of the empirical distribution, and on the minimax risk in regimes where the support size S may grow with the number of observations n, and shows that a hard-thresholding estimator oblivious to the unknown upper bound H is essentially minimax.
Minimax estimation of discrete distributions
This work provides non-asymptotic upper and lower bounds on the maximum risk of the empirical distribution, and the minimax risk in regimes where the alphabet size S may grow with the number of observations n, and a hard-thresholding estimator, whose threshold does not depend on the unknown upper bound H, is asymptotically minimax.
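The hard-thresholding idea in the two entries above can be sketched as follows. This is an illustrative plug-in: the threshold constant `c` and the exact `log(n)/n` threshold level are assumptions for the sketch, not the tuning from either paper; only the key property is preserved, namely that the threshold depends on n alone and not on any upper bound H:

```python
import numpy as np

def threshold_estimate(counts, c=1.0):
    """Hard-threshold the empirical distribution, then renormalize.

    The threshold depends only on the sample size n, not on an
    upper bound H on the true probabilities (c is an illustrative
    constant, not the tuning from the cited papers).
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p_hat = counts / n                 # empirical distribution
    t = c * np.log(n) / n              # threshold free of H
    p = np.where(p_hat >= t, p_hat, 0.0)  # zero out small frequencies
    total = p.sum()
    # renormalize to a probability vector; fall back to p_hat if all zeroed
    return p / total if total > 0 else p_hat

est = threshold_estimate([50, 49, 1])
print(est)  # the rare symbol's mass is zeroed and redistributed
```

Zeroing cells with very small empirical frequency trades a little bias on rare symbols for a large variance reduction when the support size is comparable to n.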
Minimax Estimation of Functionals of Discrete Distributions
The minimax rate-optimal mutual information estimator yielded by the framework leads to significant performance boosts over the Chow–Liu algorithm in learning graphical models, demonstrating the practical advantages of the schemes for the estimation of entropy and mutual information.
Minimax estimation of large precision matrices with bandable Cholesky factor
The last decade has witnessed significant methodological and theoretical advances in estimating large precision matrices. In particular, there are scientific applications such as longitudinal data,
Nonparametric principal subspace regression
In scientific applications, multivariate observations often come in tandem with temporal or spatial covariates, with which the underlying signals vary smoothly. The standard approaches such as
Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
For the important classical problem of inference on a sparse high-dimensional normal mean vector, we propose a novel empirical Bayes model that admits a posterior distribution with desirable
Optimal adaptive estimation of a quadratic functional
Minimax mean-squared error estimates of quadratic functionals of smooth functions have been constructed for a variety of smoothness classes. In contrast to many nonparametric function estimation
Superefficiency in Nonparametric Function Estimation
Fixed parameter asymptotic statements are often used in the context of nonparametric curve estimation problems (e.g., nonparametric density or regression estimation). In this context several forms of
General empirical Bayes wavelet methods and exactly adaptive minimax estimation
In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of true
Sharp adaptive estimation by a blockwise method
We consider a blockwise James–Stein estimator for nonparametric function estimation in suitable wavelet or Fourier bases. The estimator can be readily explained and implemented. We show that the
Renormalization and White Noise Approximation for Nonparametric Functional Estimation Problems
where W_t is Brownian motion. In Section 2, we show how invariance ideas can often clarify rates of convergence results as n → ∞ for a variety of parameter spaces F. Extensions and generalisations
Sharp adaptation for inverse problems with random noise
Abstract. We consider a heteroscedastic sequence space setup with polynomially increasing variances of observations that allows us to treat a number of inverse problems, in particular multivariate ones.
Statistical Estimation and Optimal Recovery
New formulas are given for the minimax linear risk in estimating a linear functional of an unknown object from indirect data contaminated with random Gaussian noise. The formulas cover a variety of
Trade-offs between global and local risks in nonparametric function estimation
The problem of loss adaptation is investigated: given a fixed parameter, the goal is to construct an estimator that adapts to the loss function in the sense that the estimator is optimal both
Modulation Estimators and Confidence Sets
An unknown signal plus white noise is observed at n discretetime points. Within a large convex class of linear estimators of the signal, we choose the one which minimizes estimated quadratic risk. By
Adaptive estimation of linear functionals under different performance measures
z_i being independent and identically distributed standard normal random variables and M a finite or countably infinite index set. In particular, for these models minimax theory for mean squared