On The So-Called “Huber Sandwich Estimator” and “Robust Standard Errors”

@article{Freedman2006OnTS,
  title={On The So-Called ``Huber Sandwich Estimator'' and ``Robust Standard Errors''},
  author={David Freedman},
  journal={The American Statistician},
  year={2006},
  volume={60},
  pages={299--302}
}
  • D. Freedman
  • Published 1 November 2006
  • Mathematics
  • The American Statistician
The “Huber Sandwich Estimator” can be used to estimate the variance of the MLE when the underlying model is incorrect. If the model is nearly correct, so are the usual standard errors, and robustification is unlikely to help much. On the other hand, if the model is seriously in error, the sandwich may help on the variance side, but the parameters being estimated by the MLE are likely to be meaningless—except perhaps as descriptive statistics. 
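The abstract's contrast can be made concrete with a minimal numerical sketch (not code from the paper; the simulated data and variable names are illustrative). For OLS, the classical covariance is s²(X'X)⁻¹, while the Huber–White sandwich is (X'X)⁻¹ X'diag(êᵢ²)X (X'X)⁻¹; when the error variance is not constant, the two diverge:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression with heteroskedastic errors (illustrative only).
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])                       # design matrix with intercept
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + np.abs(x))   # error sd grows with |x|

# OLS fit.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Classical ("naive") covariance: s^2 (X'X)^{-1}.
sigma2 = resid @ resid / (n - X.shape[1])
cov_naive = sigma2 * XtX_inv

# Huber-White sandwich (HC0): (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}.
meat = X.T @ (resid[:, None] ** 2 * X)
cov_sandwich = XtX_inv @ meat @ XtX_inv

se_naive = np.sqrt(np.diag(cov_naive))
se_sandwich = np.sqrt(np.diag(cov_sandwich))
print(se_naive, se_sandwich)
```

With variance increasing in |x|, the sandwich standard error for the slope exceeds the classical one; when the homoskedastic model is nearly correct, the two estimators roughly agree, which is the abstract's point about robustification helping little in that case.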
Model-Robust Regression and a Bayesian ‘Sandwich’ Estimator
TLDR
The derivation provides a compelling Bayesian justification for using the Huber–White sandwich estimator, and it also clarifies what is being estimated when the data-generating mechanism is not linear.
RISK OF BAYESIAN INFERENCE IN MISSPECIFIED MODELS, AND THE SANDWICH COVARIANCE MATRIX
It is well known that, in misspecified parametric models, the maximum likelihood estimator (MLE) is consistent for the pseudo-true value and has an asymptotically normal sampling distribution with
Bayesian Heteroskedasticity-Robust Standard Errors
Use of heteroskedasticity-robust standard errors has become common in frequentist regressions. I offer here a Bayesian analog. The Bayesian version is derived by first focusing on the likelihood
Misspecification of the covariance structure in generalized linear mixed models
TLDR
It is shown that the differences or the ratios between the naive and sandwich standard deviations of the fixed effects estimators provide convenient means of assessing the fit of the model, as both are consistent when the covariance structure is correctly specified, but only the latter is when that structure is misspecified.
Models as Approximations - A Conspiracy of Random Regressors and Model Deviations Against Classical Inference in Regression
Abstract. More than thirty years ago Halbert White inaugurated a “model-robust” form of statistical inference based on the “sandwich estimator” of standard error. It is asymptotically correct even
The Conspiracy of Random Predictors and Model Violations against Classical Inference in Regression
…fixed, White permits models to be “misspecified” and predictors to be random. Careful reading of his theory shows that it is a synergistic effect, a “conspiracy”, of nonlinearity and randomness of the
Models as Approximations — A Conspiracy of Random Regressors and Model Misspecification Against Classical Inference in Regression
Abstract. More than thirty years ago Halbert White inaugurated a “model-robust” form of statistical inference based on the “sandwich estimator” of standard error. This estimator is known to be
Models as Approximations: How Random Predictors and Model Violations Invalidate Classical Inference in Regression
We review and interpret the early insights of Halbert White who over thirty years ago inaugurated a form of statistical inference for regression models that is asymptotically correct even under
A Note on "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It"
King and Roberts (2015, KR) claim that a disagreement between robust and classical standard errors exposes model misspecification. We emphasize that KR's claim only generally applies to parametric
Misspecified Discrete Choice Models and Huber-White Standard Errors
I analyze properties of misspecified discrete choice models and the efficacy of Huber-White (sometimes called ‘robust’) standard errors. The Huber-White correction provides asymptotically correct

References

Showing 1–10 of 24 references
Estimation, inference, and specification analysis
  • H. White
  • Computer Science, Mathematics
  • 1993
TLDR
The underlying motivation for maximum-likelihood estimation is explored, the interpretation of the MLE for misspecified probability models is treated, and the conditions under which parameters of interest can be consistently estimated despite misspecification are given.
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity
This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal
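The “direct test” in White's title can be sketched for a one-regressor model (a minimal illustration, assuming NumPy and SciPy; not code from the paper): regress squared OLS residuals on the regressors and their squares, and compare n·R² from that auxiliary regression against a chi-square distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data with strongly heteroskedastic errors (illustrative only).
n = 400
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + x**2)

# OLS residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ beta) ** 2

# Auxiliary regression of squared residuals on regressors and squares.
Z = np.column_stack([np.ones(n), x, x**2])
g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
fitted = Z @ g
r2 = 1 - ((e2 - fitted) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()

lm = n * r2                                # White's LM statistic
p = stats.chi2.sf(lm, df=Z.shape[1] - 1)   # chi-square, dof = auxiliary regressors minus intercept
print(lm, p)
```

Under the null of homoskedasticity the statistic is asymptotically chi-square; with the strong heteroskedasticity simulated here the test rejects decisively.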
The behavior of maximum likelihood estimates under nonstandard conditions
This paper proves consistency and asymptotic normality of maximum likelihood (ML) estimators under weaker conditions than usual. In particular, (i) it is not assumed that the true distribution
Elements of large-sample theory
This introductory book on the most useful parts of large-sample theory is designed to be accessible to scientists outside statistics and certainly to master’s-level statistics students who ignore
Statistical Assumptions as Empirical Commitments
Researchers who study punishment and social control, like those who study other social phenomena, typically seek to generalize their findings from the data they have to some larger context: in
Government Partisanship, Labor Organization, and Macroeconomic Performance: A Corrigendum
Alvarez, Garrett and Lange (1991) used cross-national panel data on the Organisation for Economic Co-operation and Development nations to show that countries with left governments and
Linear statistical inference and its applications
Algebra of Vectors and Matrices. Probability Theory, Tools and Techniques. Continuous Probability Models. The Theory of Least Squares and Analysis of Variance. Criteria and Methods of Estimation.
The Effect of Information on Voter Turnout: Evidence from a Natural Experiment
Do better informed people vote more? Recent theories of voter turnout emphasize a positive effect of being informed on the propensity to vote, but the possibility of endogenous information
Statistical Models: Theory and Practice
TLDR
An applied treatment of regression and related statistical models that emphasizes the assumptions behind them, illustrated with case studies drawn from observational studies and experiments.
Theory of point estimation
TLDR
A graduate-level account of point estimation, covering unbiased estimation, equivariance, Bayesian and minimax approaches, and large-sample optimality.