Treatment Effect Risk: Bounds and Inference

@inproceedings{Kallus2022TreatmentER,
  title={Treatment Effect Risk: Bounds and Inference},
  author={Nathan Kallus},
  booktitle={2022 ACM Conference on Fairness, Accountability, and Transparency},
  year={2022}
}
  • Published 15 January 2022
  • Economics
Since the average treatment effect (ATE) measures the change in social welfare, even if it is positive there remains a risk of a negative effect on, say, some 10% of the population. Assessing such risk is difficult, however, because any one individual treatment effect (ITE) is never observed, so the 10% worst-affected cannot be identified, while distributional treatment effects only compare the first deciles within each treatment group, which does not correspond to any 10% subpopulation. In this paper we…
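The abstract's central point — a positive ATE can coexist with substantial risk in the tail of the ITE distribution — is easy to see in a toy simulation. Everything below (the normal ITE distribution, its mean and scale) is an illustrative assumption, not taken from the paper; in real data the ITEs would of course be unobservable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical ITE distribution: positive on average, harmful left tail
ite = rng.normal(loc=1.0, scale=2.0, size=n)

ate = ite.mean()                    # average treatment effect (positive)
frac_harmed = (ite < 0).mean()      # share of the population harmed

# CVaR at level 0.10: the mean effect over the worst-affected 10%
alpha = 0.10
worst = np.sort(ite)[: int(alpha * n)]
cvar_10 = worst.mean()

print(f"ATE = {ate:.2f}")                  # positive
print(f"fraction harmed = {frac_harmed:.2f}")
print(f"CVaR_0.10 of ITE = {cvar_10:.2f}")  # strongly negative
```

Here the ATE is about +1, yet roughly three in ten individuals are harmed and the worst-affected 10% lose heavily on average — exactly the kind of gap between the mean and the tail that the paper's bounds on treatment effect risk are designed to quantify.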


What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment
The fundamental problem of causal inference – that we never observe counterfactuals – prevents us from identifying how many might be negatively affected by a proposed intervention.

References

SHOWING 1-10 OF 52 REFERENCES
Program evaluation and causal inference with high-dimensional data
This paper shows that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters, and provides results on honest inference for (function-valued) parameters within a general framework where any high-quality, modern machine learning method can be used to learn the nonparametric/high-dimensional components of the model.
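As a hedged illustration of what such an orthogonal/doubly robust moment condition looks like, here is a minimal AIPW (augmented inverse-propensity-weighting) sketch for the ATE in a randomized setting with a known propensity score. The data-generating process and the OLS nuisance models are assumptions made for the example; the cited work uses far more general ML nuisances with cross-fitting.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
X = rng.normal(size=(n, 2))
e = 0.5                                  # known propensity (randomized design)
T = rng.binomial(1, e, size=n)
Y = 1.0 * T + X @ np.array([0.5, -0.3]) + rng.normal(size=n)  # true ATE = 1

def ols_predict(Xa, ya, Xnew):
    """Fit OLS on one arm and predict on new covariates (nuisance model)."""
    A = np.column_stack([np.ones(len(Xa)), Xa])
    beta, *_ = np.linalg.lstsq(A, ya, rcond=None)
    return np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta

mu1 = ols_predict(X[T == 1], Y[T == 1], X)
mu0 = ols_predict(X[T == 0], Y[T == 0], X)

# Doubly robust (AIPW) score: consistent if either the outcome model
# or the propensity model is correctly specified
psi = mu1 - mu0 + T * (Y - mu1) / e - (1 - T) * (Y - mu0) / (1 - e)
ate_hat = psi.mean()
se = psi.std(ddof=1) / np.sqrt(n)
```

The orthogonality of the score `psi` is what makes plug-in estimation of the nuisances first-order harmless, which is the "key ingredient" the summary above refers to.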
Randomization inference for treatment effect variation
A model-free approach to testing for heterogeneity beyond a given model, which can be useful for assessing the sufficiency of a given scientific theory, is proposed and applied to the National Head Start impact study, finding that there is indeed significant unexplained treatment effect variation.
Assessing Disparate Impacts of Personalized Interventions: Identifiability and Bounds
It is shown how to nonetheless point-identify these quantities under the additional assumption of monotone treatment response, which may be reasonable in many applications, and a sensitivity analysis for this assumption is provided by means of sharp partial-identification bounds under violations of monotonicity of varying strengths.
Optimal doubly robust estimation of heterogeneous causal effects
A two-stage doubly robust CATE estimator is studied and a generic model-free error bound is given and it is shown that this estimator can be oracle efficient under even weaker conditions, if used with a specialized form of sample splitting and careful choices of tuning parameters.
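A stripped-down sketch of such a two-stage doubly robust CATE estimator: stage one fits nuisance outcome models, stage two regresses the doubly robust pseudo-outcome on covariates. The simulation, the known propensity, and the OLS stages are assumptions for illustration; the actual estimator uses cross-fitting/sample splitting and flexible second-stage regressions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80_000
x = rng.normal(size=n)
e = 0.5                                   # known propensity
T = rng.binomial(1, e, size=n)
Y = (1.0 + 0.5 * x) * T + x + rng.normal(size=n)  # true CATE: 1 + 0.5x

def ols_fit_predict(xa, ya, xnew):
    A = np.column_stack([np.ones(len(xa)), xa])
    beta, *_ = np.linalg.lstsq(A, ya, rcond=None)
    return np.column_stack([np.ones(len(xnew)), xnew]) @ beta

# Stage 1: nuisance outcome regressions, fit per arm
mu1 = ols_fit_predict(x[T == 1], Y[T == 1], x)
mu0 = ols_fit_predict(x[T == 0], Y[T == 0], x)

# Stage 2: regress the doubly robust pseudo-outcome on the covariate
psi = mu1 - mu0 + T * (Y - mu1) / e - (1 - T) * (Y - mu0) / (1 - e)
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, psi, rcond=None)
# coef should recover the CATE coefficients (intercept ~1.0, slope ~0.5)
```

The pseudo-outcome `psi` has conditional mean equal to the CATE, so any regression method in stage two targets the right function — that is the sense in which the estimator can inherit oracle rates.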
Decomposing Treatment Effect Variation
ABSTRACT Understanding and characterizing treatment effect variation in randomized experiments has become essential for going beyond the “black box” of the average treatment effect. Nonetheless,
Minimax-Optimal Policy Learning Under Unobserved Confounding
It is demonstrated that hidden confounding can hinder existing policy-learning approaches and lead to unwarranted harm, whereas the proposed robust approach guarantees safety and focuses on well-evidenced improvement, a necessity for making personalized treatment policies learned from observational data reliable in practice.
Estimating treatment effect heterogeneity in randomized program evaluation
This paper proposes a method that adapts the Support Vector Machine classifier by placing separate sparsity constraints over the pre-treatment parameters and causal heterogeneity parameters of interest, and selects the most effective voter mobilization strategies from a large number of alternative strategies.
Recursive partitioning for heterogeneous causal effects
This paper provides a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects, and proposes an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation.
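The "honest" idea — one sample chooses the partition, a held-out sample estimates the subgroup effects — can be sketched with a single covariate threshold as a stand-in for a full causal tree. The data-generating process and the threshold search are illustrative assumptions, not the cited algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40_000
x = rng.normal(size=n)
T = rng.binomial(1, 0.5, size=n)
Y = 2.0 * (x > 0) * T + rng.normal(size=n)   # effect exists only for x > 0

half = n // 2
tr, est = np.arange(half), np.arange(half, n)  # training / estimation halves

def effect(idx):
    """Difference in arm means: a simple subgroup treatment effect estimate."""
    return Y[idx][T[idx] == 1].mean() - Y[idx][T[idx] == 0].mean()

# Adaptive step on the training half: pick the split maximizing the
# estimated effect gap between the two child subgroups
grid = np.linspace(-1.0, 1.0, 21)
gaps = [effect(tr[x[tr] > c]) - effect(tr[x[tr] <= c]) for c in grid]
c_star = grid[int(np.argmax(gaps))]

# Honest step: re-estimate the subgroup effects on the held-out half,
# so the adaptive search does not bias the reported estimates
tau_hi = effect(est[x[est] > c_star])
tau_lo = effect(est[x[est] <= c_star])
```

Because `tau_hi` and `tau_lo` come from data never used to choose the split, they avoid the winner's-curse bias that reusing the training half would introduce — the core of the honesty guarantee.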
Who should be Treated? Empirical Welfare Maximization Methods for Treatment Choice
It is shown that when the propensity score is known, the average social welfare attained by EWM rules converges at least at n^(-1/2) rate to the maximum obtainable welfare uniformly over a minimally constrained class of data distributions, and this uniform convergence rate is minimax optimal.
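A toy version of the EWM recipe with a known propensity score: over a class of threshold rules, pick the one maximizing an inverse-propensity-weighted estimate of average welfare. The simulation and the flat 0.2 treatment cost are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60_000
x = rng.normal(size=n)
e = 0.5                                  # known propensity (experimental data)
T = rng.binomial(1, e, size=n)
# Treatment helps only when x > 0.5 and always carries a small cost
Y = (x > 0.5) * T * 1.0 - 0.2 * T + rng.normal(size=n)

def empirical_welfare(c):
    """IPW estimate of mean outcome under the rule d(x) = 1{x > c}."""
    d = (x > c).astype(float)
    w = Y * (T * d / e + (1 - T) * (1 - d) / (1 - e))
    return w.mean()

# Empirical welfare maximization over the threshold-rule class
grid = np.linspace(-2.0, 2.0, 41)
c_star = grid[int(np.argmax([empirical_welfare(c) for c in grid]))]
```

The learned threshold lands near the point where the treatment's benefit starts to outweigh its cost, and treating everyone or no one yields visibly lower estimated welfare — the uniform convergence of this empirical welfare criterion over the rule class is what the cited n^(-1/2) rate result formalizes.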
Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with Unmeasured Confounding
Double validity is an entirely new property for partial identification: DVDS estimators still provide valid, though not sharp, bounds even when most nuisance parameters are misspecified; moreover, even in cases when DVDS point estimates fail to be asymptotically normal, standard Wald confidence intervals may remain valid.
...