Corpus ID: 254854684

Model-robust and efficient covariate adjustment for cluster-randomized experiments

Bingkai Wang, Chan Park, Dylan S. Small, Fan Li
Cluster-randomized experiments are increasingly used to evaluate interventions in routine practice conditions, and researchers often adopt model-based methods with covariate adjustment in the statistical analyses. However, the validity of model-based covariate adjustment is unclear when the working models are misspecified, leading to ambiguity of estimands and risk of bias. In this article, we first adapt two conventional model-based methods, generalized estimating equations and linear mixed… 


Model‐assisted analyses of cluster‐randomized experiments

  • Fangzhou Su, P. Ding
  • Economics, Computer Science
    Journal of the Royal Statistical Society: Series B (Statistical Methodology)
  • 2021
The asymptotic analysis reveals the efficiency‐robustness trade‐off by comparing the properties of various estimators using data at different levels with and without covariate adjustment and highlights the critical role of covariates in improving estimation efficiency.
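The efficiency gain from covariate adjustment described in this abstract can be illustrated with a toy Monte Carlo simulation (a hypothetical data-generating process, not taken from the paper): compare the unadjusted difference-in-means estimator with an ANCOVA-style regression-adjusted estimator on cluster-level means.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n_clusters=100, true_effect=1.0):
    """Simulate one cluster-randomized trial summarized at the cluster level."""
    # Cluster-level covariate that is strongly predictive of the outcome
    x = rng.normal(size=n_clusters)
    # Treatment randomized at the cluster level
    a = rng.binomial(1, 0.5, size=n_clusters)
    y = true_effect * a + 2.0 * x + rng.normal(size=n_clusters)
    # Unadjusted estimator: difference in treated vs. control means
    unadj = y[a == 1].mean() - y[a == 0].mean()
    # Adjusted estimator: OLS of y on treatment and the centered covariate
    X = np.column_stack([np.ones(n_clusters), a, x - x.mean()])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return unadj, beta[1]

est = np.array([one_trial() for _ in range(2000)])
print("SD unadjusted:", est[:, 0].std())
print("SD adjusted:  ", est[:, 1].std())
```

Both estimators are unbiased for the cluster-level effect under randomization, but the adjusted estimator's Monte Carlo standard deviation is markedly smaller when the covariate explains outcome variance — the efficiency side of the trade-off the abstract refers to.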

Clarifying selection bias in cluster randomized trials

There is a need and opportunity to improve the analysis of cluster randomized trials that are subject to post-randomization selection bias.

Augmented generalized estimating equations for improving efficiency and validity of estimation in cluster randomized trials by leveraging cluster‐level and individual‐level covariates

This work modifies one of these approaches, augmentation of standard estimators, for use within cluster randomized trials in which treatments are assigned to groups of individuals, thereby inducing correlation, and demonstrates the potential for imbalance correction and efficiency improvement.

CRTgeeDR: an R Package for Doubly Robust Generalized Estimating Equations Estimations in Cluster Randomized Trials with Missing Data

An R package, CRTgeeDR, for estimating parameters in marginal regression in cluster randomized trials (CRTs) is presented, and the gains associated with use of the doubly robust (DR) estimator for analyzing a binary outcome with logit regression are demonstrated.

Inference for Cluster Randomized Experiments with Non-ignorable Cluster Sizes

This paper provides methods for inference in an asymptotic framework where the number of clusters tends to infinity and treatment is assigned using simple random sampling. The framework permits the experimenter to sample only a subset of the units within each cluster rather than the entire cluster, and the paper demonstrates the implications of such sampling for some commonly used estimators.

Design-Based Ratio Estimators and Central Limit Theorems for Clustered, Blocked RCTs

This article develops design-based ratio estimators for clustered, blocked randomized controlled trials (RCTs), with an application to a federally funded, school-based RCT testing the…

A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure

To flexibly and efficiently estimate the effect of a cluster-level exposure, two targeted maximum likelihood estimators (TMLEs) are presented, and it is suggested that estimation under the sub-model can result in bias and misleading inference in an observational setting.

tmle: An R Package for Targeted Maximum Likelihood Estimation

tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates. Supported parameters include the additive treatment effect, relative risk, odds ratio, and the controlled direct effect.

Approximate inference in generalized linear mixed models

Statistical approaches to overdispersion, correlated errors, shrinkage estimation, and smoothing of regression relationships may be encompassed within the framework of the generalized linear mixed model.

Double/Debiased Machine Learning for Treatment and Structural Parameters

This work revisits the classic semiparametric problem of inference on a low-dimensional parameter θ_0 in the presence of high-dimensional nuisance parameters η_0 and proves that DML delivers point estimators that concentrate in a N^(-1/2)-neighborhood of the true parameter values and are approximately unbiased and normally distributed, which allows construction of valid confidence statements.
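The core DML mechanics — cross-fitting the nuisance functions, then solving a Neyman-orthogonal score — can be sketched for the partially linear model y = θd + g(x) + ε using the partialling-out estimator. This is a minimal illustration with a made-up data-generating process and off-the-shelf random forests as the nuisance learners, not the paper's full framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)

# Partially linear model: y = theta*d + g(x) + eps, d = m(x) + v
n, theta = 2000, 0.5
x = rng.normal(size=(n, 5))
d = np.sin(x[:, 0]) + rng.normal(scale=0.5, size=n)
y = theta * d + np.cos(x[:, 1]) + x[:, 2] + rng.normal(scale=0.5, size=n)

# Cross-fit the nuisance functions m(x) = E[d|x] and l(x) = E[y|x]:
# each fold's residuals come from models trained on the other fold.
res_d = np.zeros(n)
res_y = np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(x):
    m_hat = RandomForestRegressor(n_estimators=100, random_state=0).fit(x[train], d[train])
    l_hat = RandomForestRegressor(n_estimators=100, random_state=0).fit(x[train], y[train])
    res_d[test] = d[test] - m_hat.predict(x[test])
    res_y[test] = y[test] - l_hat.predict(x[test])

# Neyman-orthogonal (partialling-out) estimate: residual-on-residual regression
theta_hat = (res_d @ res_y) / (res_d @ res_d)
print("theta_hat:", theta_hat)
```

Cross-fitting is what removes the own-observation overfitting bias, and the orthogonal score makes the estimate of θ first-order insensitive to estimation error in the two machine-learned nuisances.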