Treatment Effect Detection with Controlled FDR under Dependence for Large-Scale Experiments
@inproceedings{Bao2021TreatmentED,
  title  = {Treatment Effect Detection with Controlled FDR under Dependence for Large-Scale Experiments},
  author = {Yihan Bao and Shi-Feng Han and Yong Wang},
  year   = {2021}
}
Online controlled experiments (also known as A/B testing) have been viewed as a gold standard by large data-driven companies over the past few decades. The most common A/B testing framework adopted by many companies uses the "average treatment effect" (ATE) as its test statistic. However, it remains difficult for companies to improve the power of detecting ATE while controlling the "false discovery rate" (FDR) at a predetermined level. One of the most popular FDR-control algorithms is Benjamini…
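As context for the truncated abstract, the following is a minimal sketch of the classic Benjamini-Hochberg (BH) step-up procedure applied to hypothetical per-metric p-values from an A/B test; the function name and data are illustrative, and this is the generic procedure rather than the dependence-aware method proposed in the paper.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Classic Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of rejected hypotheses; the FDR is controlled
    at level `alpha` under independence (or positive dependence) of the
    p-values.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                              # sort p-values ascending
    ranked = p[order]
    # Find the largest rank k with p_(k) <= alpha * k / m
    below = ranked <= alpha * (np.arange(1, m + 1) / m)
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])               # 0-indexed largest passing rank
        rejected[order[: k + 1]] = True                # reject all hypotheses up to rank k
    return rejected

# Hypothetical per-metric p-values from a large A/B test
p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.60, 0.74, 0.92]
print(benjamini_hochberg(p_vals, alpha=0.05))
```

Under arbitrary dependence, the classical guarantee requires the more conservative Benjamini-Yekutieli correction (replacing alpha with alpha divided by the harmonic sum 1 + 1/2 + ... + 1/m), which is part of what motivates the dependence-aware procedures cited below.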
References
Showing 1-10 of 24 references
Conditional calibration for false discovery rate control under dependence
- The Annals of Statistics, 2022
Introduces a new class of methods for finite-sample false discovery rate (FDR) control in multiple testing problems with dependent test statistics, where the dependence is fully or partially known, including a dependence-adjusted Benjamini-Hochberg procedure that adaptively thresholds the q-value for each hypothesis.
False Discovery Rate Controlled Heterogeneous Treatment Effect Detection for Online Controlled Experiments
- KDD, 2018
This paper proposes statistical methods that can systematically and accurately identify the heterogeneous treatment effect (HTE) of any user cohort of interest and determine which factors contribute to the heterogeneity of the treatment effect in an A/B test.
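As a rough illustration of this setting (not the reference's actual method), per-cohort treatment effects can be tested separately and the resulting p-values fed into an FDR-controlling procedure such as the BH sketch above; the cohort names, simulated data, and use of a Welch t-test here are all hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical (control, treatment) metric observations per user cohort
cohorts = {
    "new_users":   (rng.normal(0.00, 1, 5000), rng.normal(0.08, 1, 5000)),
    "power_users": (rng.normal(0.00, 1, 5000), rng.normal(0.00, 1, 5000)),
    "mobile":      (rng.normal(0.00, 1, 5000), rng.normal(0.05, 1, 5000)),
}

p_values = {}
for name, (control, treatment) in cohorts.items():
    # Difference-in-means (Welch) test for the cohort-level treatment effect
    _, p = stats.ttest_ind(treatment, control, equal_var=False)
    p_values[name] = p

# Feed these p-values into an FDR-controlling procedure (e.g. the
# benjamini_hochberg sketch above) to decide which cohorts show a
# genuine heterogeneous treatment effect.
print(p_values)
```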
A practical guide to methods controlling false discoveries in computational biology
- bioRxiv, 2018
This work investigates the accuracy, applicability, and ease of use of two classic and six modern FDR-controlling methods by performing a systematic benchmark comparison using simulation studies as well as six case studies in computational biology.
Multiple testing with the structure‐adaptive Benjamini–Hochberg algorithm
- Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2018
The main theoretical result proves that the SABHA method controls the FDR at a level that is at most slightly higher than the target FDR level, as long as the adaptive weights are constrained sufficiently so as not to overfit too much to the data.
Controlling the false discovery rate via knockoffs
- 2015
This paper introduces the knockoff filter, a new variable selection procedure that controls the FDR in the statistical linear model whenever there are at least as many observations as variables; empirical results show that the method has far more power than existing selection rules when the proportion of null variables is high.
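A minimal sketch of the knockoff selection step described here, assuming the knockoff statistics W (one per variable, sign-symmetric under the null) have already been computed; the construction of the knockoff variables and of W itself is omitted, and the example values are made up.

```python
import numpy as np

def knockoff_threshold(W, q=0.10, plus=True):
    """Data-dependent threshold for knockoff statistics W.

    W[j] > 0 suggests variable j is a signal; under the null the sign of
    W[j] is symmetric. Returns (threshold, boolean selection mask).
    """
    W = np.asarray(W, dtype=float)
    candidates = np.sort(np.abs(W[W != 0]))        # candidate thresholds
    offset = 1.0 if plus else 0.0
    for t in candidates:
        # Estimated false discovery proportion at threshold t
        ratio = (offset + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if ratio <= q:
            return t, W >= t
    return np.inf, np.zeros_like(W, dtype=bool)    # nothing selected

# Hypothetical knockoff statistics for 10 variables
W = np.array([4.2, 3.1, 2.5, 1.9, 0.4, -0.3, -0.5, 0.2, -0.1, 2.8])
print(knockoff_threshold(W, q=0.2))
```

Setting `plus=True` corresponds to the knockoff+ variant, which gives exact finite-sample FDR control; the unadjusted version controls a modified FDR.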
Covariate powered cross-weighted multiple testing with false discovery rate control
- 2017
The proposed method's asymptotic characteristics are viewed through the lens of the conditional two-groups model, while favorable finite-sample properties are achieved by cross-weighting, a novel data-splitting approach that enables learning the weight-covariate function without overfitting.
Controlling the false discovery rate: a practical and powerful approach to multiple testing
- 1995
The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to…
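For reference, the step-up rule introduced in this paper is usually stated as follows (standard notation for m hypotheses with ordered p-values and m_0 true nulls; not quoted from the paper):

```latex
% Benjamini--Hochberg step-up rule at target level \alpha
k^{*} = \max\left\{ k : p_{(k)} \le \frac{k}{m}\,\alpha \right\},
\qquad \text{reject } H_{(1)}, \dots, H_{(k^{*})}.
% Under independence of the null p-values,
\mathrm{FDR} = \mathbb{E}\!\left[\frac{V}{\max(R,1)}\right]
  \le \frac{m_0}{m}\,\alpha \le \alpha.
```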
The positive false discovery rate: a Bayesian interpretation and the q-value
- 2003
This work introduces a modified version of the FDR called the “positive false discovery rate” (pFDR), which can be written as a Bayesian posterior probability and can be connected to classification theory.
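In the usual two-groups formulation (stated here from general knowledge of this line of work, not quoted from the reference), the pFDR of a rejection region Γ and the q-value of an observed statistic t are:

```latex
% pFDR and its Bayesian interpretation under the two-groups mixture model
\mathrm{pFDR}(\Gamma)
  = \mathbb{E}\!\left[\left.\frac{V}{R}\,\right|\, R > 0\right]
  = \Pr(H = 0 \mid T \in \Gamma),
\qquad
\text{q-value}(t) = \inf_{\Gamma \,:\, t \in \Gamma} \mathrm{pFDR}(\Gamma).
```

The second equality is the Bayesian posterior reading that holds for i.i.d. statistics drawn from the two-groups mixture.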
Statistical inference in two-stage online controlled experiments with treatment selection and validation
- WWW, 2014
This paper proposes a general methodology for combining first-stage screening data with validation-stage data for more sensitive hypothesis testing and more accurate point estimation of the treatment effect.
Whiteout: when do fixed-X knockoffs fail?
- 2021
This work recasts the fixed-X knockoff filter for the Gaussian linear model as a conditional post-selection inference method and obtains the first negative results that universally upper-bound the power of all fixed-X knockoff methods, without regard to choices made by the analyst.