• Corpus ID: 238856989

Treatment Effect Detection with Controlled FDR under Dependence for Large-Scale Experiments

@inproceedings{Bao2021TreatmentED,
  title={Treatment Effect Detection with Controlled FDR under Dependence for Large-Scale Experiments},
  author={Yihan Bao and Shichao Han and Yong Wang},
  year={2021}
}
  • Yihan Bao, Shichao Han, Yong Wang
  • Published 14 October 2021
  • Mathematics
Online controlled experiments (also known as A/B testing) have been viewed as the gold standard by large data-driven companies for the past few decades. The most common A/B testing framework adopted by many companies uses the "average treatment effect" (ATE) as its test statistic. However, it remains difficult for companies to improve the power of detecting ATE while controlling the "false discovery rate" (FDR) at a predetermined level. One of the most popular FDR-control algorithms is the Benjamini-Hochberg procedure… 
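For context, the following is a minimal sketch of the standard Benjamini-Hochberg step-up procedure that the abstract refers to; the function name and the simulated data are illustrative and not taken from the paper.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Standard Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of rejected hypotheses; the FDR is controlled
    at level alpha when the p-values are independent (or positively
    regression dependent).
    """
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)                      # indices sorting p ascending
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds             # p_(i) <= i * alpha / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()         # largest i with p_(i) below its threshold
        reject[order[: k + 1]] = True          # reject the k+1 smallest p-values
    return reject

# Toy example: 950 true nulls and 50 hypotheses with a real effect.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=950), rng.beta(0.1, 5.0, size=50)])
print(benjamini_hochberg(p, alpha=0.1).sum(), "discoveries")
```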


References

SHOWING 1-10 OF 24 REFERENCES
Conditional calibration for false discovery rate control under dependence
We introduce a new class of methods for finite-sample false discovery rate (FDR) control in multiple testing problems with dependent test statistics where the dependence is fully or partially known.
False Discovery Rate Controlled Heterogeneous Treatment Effect Detection for Online Controlled Experiments
This paper proposes statistical methods that can systematically and accurately identify the heterogeneous treatment effect (HTE) of any user cohort of interest, and determine which factors contribute to the heterogeneity of the treatment effect in an A/B test.
Multiple testing with the structure‐adaptive Benjamini–Hochberg algorithm
  • Ang Li, R. Barber
  • Mathematics
    Journal of the Royal Statistical Society: Series B (Statistical Methodology)
  • 2018
In multiple testing problems, where a large number of hypotheses are tested simultaneously, false discovery rate (FDR) control can be achieved with the well-known Benjamini-Hochberg procedure.
A practical guide to methods controlling false discoveries in computational biology
This work investigated the accuracy, applicability, and ease of use of two classic and six modern FDR-controlling methods by performing a systematic benchmark comparison using simulation studies as well as six case studies in computational biology.
THE CONTROL OF THE FALSE DISCOVERY RATE IN MULTIPLE TESTING UNDER DEPENDENCY
Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR-controlling procedure for independent test statistics.
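For reference, the step-up rule discussed above and its standard correction for arbitrary dependence (the Benjamini-Yekutieli adjustment) can be written as follows, where $p_{(1)} \le \dots \le p_{(m)}$ are the ordered p-values and $q$ is the target FDR level:

```latex
% Benjamini-Hochberg step-up rule at level q
k = \max\bigl\{\, i : p_{(i)} \le \tfrac{i}{m}\, q \,\bigr\},
\qquad \text{reject } H_{(1)}, \dots, H_{(k)}.

% Under arbitrary dependence, the same rule run at the smaller level q / c(m),
% with c(m) the harmonic sum below, still controls the FDR at level q.
c(m) = \sum_{i=1}^{m} \frac{1}{i}
```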
Controlling the false discovery rate via knockoffs
In many fields of science, we observe a response variable together with a large number of potential explanatory variables, and would like to be able to discover which variables are truly associated with the response.
Covariate powered cross-weighted multiple testing with false discovery rate control
Consider a large-scale multiple testing setup where we observe pairs $((P_i, X_i))_{1\leq i \leq m}$ of p-values $P_i$ and covariates $X_i$, such that $P_i \perp X_i$ under the null hypothesis.
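As a sketch of the general covariate-weighting idea (this is the classical weighted Benjamini-Hochberg rule, not necessarily the exact cross-weighting scheme of the cited paper): each hypothesis receives a nonnegative weight $w_i = w(X_i)$ with $\frac{1}{m}\sum_{i=1}^{m} w_i = 1$, and the step-up rule is applied to the reweighted p-values:

```latex
% weighted BH: order the ratios p_i / w_i and apply the usual step-up rule
k = \max\Bigl\{\, i : \bigl(p/w\bigr)_{(i)} \le \tfrac{i}{m}\,\alpha \,\Bigr\},
\qquad \text{reject the hypotheses with the } k \text{ smallest } p_i / w_i.
```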
Controlling the false discovery rate: a practical and powerful approach to multiple testing
The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented.
The positive false discovery rate: a Bayesian interpretation and the q-value
Multiple hypothesis testing is concerned with controlling the rate of false positives when testing several hypotheses simultaneously. One multiple hypothesis testing error measure is the false discovery rate (FDR).
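The positive false discovery rate (pFDR) and the q-value named in this title have standard definitions; with $V(t)$ the number of false positives and $R(t)$ the total number of rejections at a p-value threshold $t$:

```latex
\mathrm{pFDR}(t) = \mathbb{E}\!\left[ \frac{V(t)}{R(t)} \,\middle|\, R(t) > 0 \right],
\qquad
q(p_i) = \inf_{t \ge p_i} \mathrm{pFDR}(t).
```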
Statistical inference in two-stage online controlled experiments with treatment selection and validation
This paper proposes a general methodology for combining first-stage screening data with validation-stage data for more sensitive hypothesis testing and more accurate point estimation of the treatment effect.