How Much Should We Trust Differences-in-Differences Estimates?

Marianne Bertrand, Esther Duflo, and Sendhil Mullainathan. Experimental & Empirical Studies eJournal.
Most Difference-in-Differences (DD) papers rely on many years of data and focus on serially correlated outcomes. Yet almost all of these papers ignore the bias in estimated standard errors that serial correlation introduces. This is especially troubling because the independent variable of interest in DD estimation (e.g., the passage of a law) is itself highly serially correlated, which exacerbates the bias in standard errors. To illustrate the severity of this issue, we randomly generate…
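The abstract's point can be illustrated with a minimal Monte Carlo sketch (not the paper's own code; the state/year counts, AR(1) coefficient, and placebo-law timing below are illustrative assumptions): generate state panels with serially correlated errors, assign a "law" that has no true effect, and count how often a conventional i.i.d. t-test rejects at the nominal 5% level.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rejection_rate(n_sims=200, n_states=50, n_years=20, rho=0.8):
    """Share of placebo 'laws' declared significant by a naive i.i.d. t-test."""
    rejections = 0
    for _ in range(n_sims):
        # AR(1) errors within each state: serially correlated outcomes, no true effect
        e = rng.standard_normal((n_states, n_years))
        y = np.empty_like(e)
        y[:, 0] = e[:, 0]
        for t in range(1, n_years):
            y[:, t] = rho * y[:, t - 1] + e[:, t]
        # placebo "law": a random half of the states treated from year 10 on
        treated = rng.permutation(n_states) < n_states // 2
        x = np.zeros((n_states, n_years))
        x[treated, 10:] = 1.0
        # OLS of y on [intercept, law dummy] with the classical i.i.d. standard error
        X = np.column_stack([np.ones(y.size), x.ravel()])
        yv = y.ravel()
        beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
        resid = yv - X @ beta
        s2 = resid @ resid / (len(yv) - 2)
        var_beta = s2 * np.linalg.inv(X.T @ X)[1, 1]
        if abs(beta[1] / np.sqrt(var_beta)) > 1.96:
            rejections += 1
    return rejections / n_sims

rate = simulate_rejection_rate()
print(f"false-positive rate at nominal 5%: {rate:.2f}")
```

Because both the outcome and the law dummy are serially correlated, the false-positive rate comes out far above the nominal 5%, which is the bias the paper documents.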

How Much Should We Trust Staggered Difference-In-Differences Estimates?

Difference-in-differences analysis with staggered treatment timing is frequently used to assess the impact of policy changes on corporate outcomes in academic research. However, recent advances in

Robust Standard Error Estimation in Fixed-Effects Panel Models

The paper focuses on standard error estimation in fixed-effects (FE) models when there is serial correlation in the error process. Applied researchers have often ignored the problem, probably because major statistical

Trusting Difference-in-Differences Estimates More: An Approximate Permutation Test

In economics and business, policy researchers often use observational data and difference-in-differences estimation to test the effectiveness of a policy change. In this context, the model

Estimating causal effects: considering three alternatives to difference-in-differences estimation

While DiD produces unbiased estimates when the parallel trends assumption holds, the alternative approaches provide less biased estimates of treatment effects when it is violated, and the lagged-dependent-variable (LDV) approach produces the most efficient and least biased estimates.


Recognizing that cross-sectional data are often insufficient to address the identification problems associated with estimating the effect of government taxation or spending, economists engaged in

Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences

When treatment and comparison groups differed in pre-intervention levels or trends, the results supported DID specifications that include matching, for more accurate point estimates, and that use clustered standard errors or permutation tests, for better inference.
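A permutation test of the kind this entry recommends can be sketched as follows (a toy illustration, not the paper's procedure; the panel dimensions, treatment year, and effect size are assumptions): compute the DiD estimate on the actual treatment assignment, then compare it against the distribution of estimates obtained by randomly reassigning which states are "treated".

```python
import numpy as np

rng = np.random.default_rng(1)

# toy panel: 20 states x 10 years, states 0-9 treated from year 5 on
n_states, n_years, t0 = 20, 10, 5
y = rng.standard_normal((n_states, n_years))
treated = np.arange(n_states) < 10
y[treated, t0:] += 0.3  # assumed true effect, for illustration

def did_estimate(y, treated, t0):
    """Simple difference-in-differences of group means, pre vs. post."""
    pre = y[:, :t0].mean(axis=1)
    post = y[:, t0:].mean(axis=1)
    return (post[treated] - pre[treated]).mean() - (post[~treated] - pre[~treated]).mean()

obs = did_estimate(y, treated, t0)
# permutation distribution: reshuffle the treated label across states
perm = np.array([did_estimate(y, rng.permutation(treated), t0) for _ in range(999)])
p_value = (np.sum(np.abs(perm) >= abs(obs)) + 1) / (len(perm) + 1)
print(f"DiD estimate {obs:.3f}, permutation p-value {p_value:.3f}")
```

Because inference comes from reshuffling whole states, the test respects within-state dependence without requiring a model of the error process, which is why permutation tests are attractive when clustered asymptotics are unreliable.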

Measurement Errors in Investment Equations

This work uses Monte Carlo simulations and real data to assess the performance of alternative methods that deal with measurement error in investment equations, and provides guidance for dealing with the problem of measurement error under circumstances empirical researchers are likely to find in practice.

Should We Adjust for the Test for Pre-trends in Difference-in-Difference Designs?

The common practice in difference-in-difference (DiD) designs is to check for parallel trends prior to treatment assignment, yet typical estimation and inference does not account for the fact that

A Negative Correlation Strategy for Bracketing in Difference-in-Differences with Application to the Effect of Voter Identification Laws on Voter Turnout

The method of difference-in-differences (DID) is widely used to study the causal effect of policy interventions in observational studies. DID exploits a before and after comparison of the treated and

Inference in Differences-in-Differences: How Much Should We Trust in Independent Clusters?

The conditions under which ignoring spatial correlation is problematic for inference in differences-in-differences models are analyzed, providing a better understanding of when spatial correlation is most problematic and guidelines on how to minimize the resulting inference problems.



Problems with Instrumental Variables Estimation when the Correlation between the Instruments and the Endogenous Explanatory Variable is Weak

We draw attention to two problems associated with the use of instrumental variables (IV), the importance of which for empirical work has not been fully appreciated. First, the use of

Unnatural Experiments? Estimating the Incidence of Endogenous Policies

The US federal system provides great potential for estimating the effects of policy on behavior. There are numerous empirical studies that exploit variation in policies over space and time. In

Cluster-Sample Methods in Applied Econometrics

Inference methods that recognize the clustering of individual observations have been available for more than 25 years. Brent Moulton (1990) caught the attention of economists when he demonstrated the

Estimating Autocorrelations in Fixed-Effects Models

Nickell's method of correcting for the inconsistency of autocorrelation estimators is extended by generalizing to higher than first-order autocorrelations and to error processes other than first-order autoregressions.

An Illustration of a Pitfall in Estimating the Effects of Aggregate Variables on Micro Units

Many economic researchers have attempted to measure the effect of aggregate market or public policy variables on micro units by merging aggregate data with micro observations by industry, occupation,

What We Know and Do Not Know About the Natural Rate of Unemployment

Over the past three decades, a large amount of research has attempted to identify the determinants of the natural rate of unemployment. It is this body of work we assess in this paper. We reach two

Hodges-Lehmann Point Estimates of Treatment Effect in Observational Studies

A Hodges-Lehmann point estimate of an additive treatment effect is a robust estimate derived from the randomization distribution of a rank test. This article shows how to carry out a

Observational studies and nonrandomized experiments

P. Rosenbaum, in Design and analysis of experiments, 1996.