Semiparametric efficient empirical higher order influence function estimators

Rajarshi Mukherjee, Whitney Newey, and James M. Robins (arXiv: Statistics Theory)
Robins et al. (2008, 2016b) applied the theory of higher order influence functions (HOIFs) to derive an estimator of the mean of an outcome Y in a missing data model with Y missing at random conditional on a vector X of continuous covariates; their estimator, in contrast to previous estimators, is semiparametric efficient under minimal conditions. However, the Robins et al. (2008, 2016b) estimator depends on a non-parametric estimate of the density of X. In this paper, we introduce a new HOIF… 
On efficiency of the plug-in principle for estimating smooth integrated functionals of a nonincreasing density
We consider the problem of estimating smooth integrated functionals of a monotone nonincreasing density $f$ on $[0,\infty)$ using the nonparametric maximum likelihood-based plug-in estimator. We find
Balancing Out Regression Error: Efficient Treatment Effect Estimation without Smooth Propensities
There has been a recent surge of interest in doubly robust approaches to treatment effect estimation in observational studies, driven by a realization that they can be combined with modern machine
Cross-fitting and fast remainder rates for semiparametric estimation
There are many interesting and widely used estimators of a functional with finite semi-parametric variance bound that depend on nonparametric estimators of nuisance functions. We use cross-fitting to
Augmented minimax linear estimation
Many statistical estimands can be expressed as continuous linear functionals of a conditional expectation function. This includes the average treatment effect under unconfoundedness and generalizations
On Nearly Assumption-Free Tests of Nominal Confidence Interval Coverage for Causal Parameters Estimated by Machine Learning
For many causal effect parameters of interest, doubly robust machine learning (DRML) estimators $\widehat\psi_1$ are the state-of-the-art, incorporating the good prediction performance of machine learning; the
On assumption-free tests and confidence intervals for causal effects estimated by machine learning
Tests that have the power to detect whether the bias of $\widehat\psi_1$ is of the same or even larger order than its standard error of order $n^{-1/2}$ can provide a lower confidence limit on the degree of undercoverage of the interval and, strikingly, are valid under essentially no assumptions.
Estimation of Smooth Functionals of Location Parameter in Gaussian and Poincaré Random Shift Models
Let $E$ be a separable Banach space and let $f: E \mapsto \mathbb{R}$ be a smooth functional. We discuss a problem of estimation of $f(\theta)$ based on an observation $X = \theta + \xi$, where $\theta \in E$ is an
Optimal rates of entropy estimation over Lipschitz balls
This work constructs entropy estimators that attain the minimax rate of convergence, shown optimal by matching lower bounds obtained via a novel application of the Hardy-Littlewood maximal inequality.
The Balancing Act in Causal Inference
The idea of covariate balance is at the core of causal inference. Inverse propensity weights play a central role because they are the unique set of weights that balance the covariate distributions of
On Estimation of $L_r$-Norms in Gaussian White Noise Models
We provide a complete picture of asymptotically minimax estimation of $L_r$-norms (for any $r\ge 1$) of the mean in Gaussian white noise model over Nikolskii-Besov spaces. In this regard, we


Higher order influence functions and minimax estimation of nonlinear functionals
We present a theory of point and interval estimation for nonlinear functionals in parametric, semi-, and non-parametric models based on higher order influence functions (Robins (2004), Section 9; Li
Minimax estimation of a functional on a structured high-dimensional model
We introduce a new method of estimation of parameters in semi-parametric and nonparametric models. The method is based on estimating equations that are U-statistics in the observations. The
Toward a curse of dimensionality appropriate (CODA) asymptotic theory for semi-parametric models.
It is shown that any statistician who ignores the randomization probabilities is unable to construct nominal 95 per cent confidence intervals for the true treatment effect, and a curse of dimensionality appropriate (CODA) asymptotic theory for inference in non- and semi-parametric models is proposed.
Some new asymptotic theory for least squares series: Pointwise and uniform results
In this work we consider series estimators for the conditional mean in light of three new ingredients: (i) sharp LLNs for matrices derived from the non-commutative Khinchin inequalities, (ii) bounds
Asymptotic Normality of Quadratic Estimators.
It is shown that these estimators are asymptotically normal even when the rate is slower than the square root of the number of observations, which is illustrated by estimation of the integral of the square of a density or regression function.
Oracle inequalities for multi-fold cross validation
We consider choosing an estimator or model from a given class by cross validation consisting of holding a non-negligible fraction of the observations out as a test set. We derive bounds that show that
Convergence rates and asymptotic normality for series estimators
This paper gives general conditions for convergence rates and asymptotic normality of series estimators of conditional expectations, and specializes these conditions to polynomial regression
Random Vectors in the Isotropic Position
Let $y$ be a random vector in $\mathbb{R}^n$ satisfying $\mathbb{E}\, y \otimes y = \mathrm{id}$. Let $M$ be a natural number and let $y_1, \ldots, y_M$ be independent copies of $y$. We study the question of approximation of the identity operator by
Asymptotic normality of quadratic (2016)
Semi-parametric Models. Statistics in medicine (2008)