Corpus ID: 251928949

Confounder Selection: Objectives and Approaches

@inproceedings{Guo2022ConfounderSO,
  title={Confounder Selection: Objectives and Approaches},
  author={F. Richard Guo and Anton Rask Lundborg and Qingyuan Zhao},
  year={2022}
}
Confounder selection is perhaps the most important step in the design of observational studies. A number of criteria, often with different objectives and approaches, have been proposed, and their validity and practical value have been debated in the literature. Here, we provide a unified review of these criteria and the assumptions behind them. We list several objectives that confounder selection methods aim to achieve and discuss the amount of structural knowledge required by different… 


References

Showing 1–10 of 65 references

Invited commentary: variable selection versus shrinkage in the control of multiple confounders.

It appears that statistical confounder selection may be an unnecessary complication in most regression analyses of effects, because theory and simulation evidence have found no selection method to be uniformly superior to adjusting for all well-measured confounders.

Principles of confounder selection

This paper puts forward a practical approach to confounder selection decisions under the somewhat less stringent assumption that, for each covariate, it is known whether it is a cause of the exposure and whether it is a cause of the outcome.
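As an illustration only, the selection logic described in this summary can be sketched in a few lines of Python; the Covariate flags, the example covariate names, and the exclusion of known instruments are assumptions made here for the sketch, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Covariate:
    name: str
    causes_exposure: bool           # judged to be a cause of the exposure
    causes_outcome: bool            # judged to be a cause of the outcome
    known_instrument: bool = False  # believed to affect the exposure only

def disjunctive_cause_selection(covariates):
    """Sketch of a disjunctive-cause-style rule: keep any covariate judged to be
    a cause of the exposure or of the outcome (or both), and drop covariates
    believed to be pure instruments."""
    return [c.name for c in covariates
            if (c.causes_exposure or c.causes_outcome) and not c.known_instrument]

covs = [
    Covariate("age", causes_exposure=True, causes_outcome=True),
    Covariate("clinic_distance", causes_exposure=True, causes_outcome=False,
              known_instrument=True),
    Covariate("baseline_severity", causes_exposure=False, causes_outcome=True),
    Covariate("eye_color", causes_exposure=False, causes_outcome=False),
]
print(disjunctive_cause_selection(covs))  # ['age', 'baseline_severity']
```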

Methodological Challenges in Causal Research on Racial and Ethnic Patterns of Cognitive Trajectories: Measurement, Selection, and Bias

A number of common biases that can obscure causal relationships, including confounding, measurement ceilings/floors, baseline adjustment bias, practice or retest effects, differential measurement error, conditioning on common effects in direct and indirect effects decompositions, are summarized.

On model selection and model misspecification in causal inference

It is demonstrated that certain strategies for inferring causal effects have the desirable features of producing (approximately) valid confidence intervals, even when the confounder-selection process is ignored, and of being robust against certain forms of misspecification of the association of confounders with both exposure and outcome.

Outcome modelling strategies in epidemiology: traditional methods and basic alternatives

This work reviews several traditional modelling strategies, including stepwise regression and the ‘change-in-estimate’ (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure, and provides some basic alternatives and refinements that do not require special macros or programming.
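A minimal sketch of the change-in-estimate (CIE) idea named above, assuming a linear outcome regression, a 10% relative-change threshold, and simulated variable names; none of these specifics are taken from the review itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def change_in_estimate(data, outcome, exposure, candidates, threshold=0.10):
    """Keep a candidate covariate if dropping it from the full outcome model
    moves the exposure coefficient by more than `threshold` (relative change)."""
    full_X = sm.add_constant(data[[exposure] + candidates])
    beta_full = sm.OLS(data[outcome], full_X).fit().params[exposure]

    kept = []
    for c in candidates:
        reduced = [v for v in candidates if v != c]
        X = sm.add_constant(data[[exposure] + reduced])
        beta_reduced = sm.OLS(data[outcome], X).fit().params[exposure]
        if abs(beta_reduced - beta_full) > threshold * abs(beta_full):
            kept.append(c)
    return kept

# Toy data: C1 confounds the A -> Y relation, C2 is pure noise.
rng = np.random.default_rng(0)
n = 2000
c1, c2 = rng.normal(size=n), rng.normal(size=n)
a = 0.8 * c1 + rng.normal(size=n)
y = 1.0 * a + 1.5 * c1 + rng.normal(size=n)
df = pd.DataFrame({"Y": y, "A": a, "C1": c1, "C2": c2})

print(change_in_estimate(df, "Y", "A", ["C1", "C2"]))  # expected: ['C1']
```

This is only one variant of CIE; other formulations add candidates one at a time or compare against a minimal model rather than the full one.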

The effect of misclassification in the presence of covariates.

The effects of misclassification on analyses involving a discrete covariate are examined, and it is shown that biased and unbiased misclassification will tend to distort the degree of heterogeneity in the measure of association being considered.

Confounder selection strategies targeting stable treatment effect estimators

The ability of the proposed confounder selection strategy to correctly select confounders, and to ensure valid inference of the treatment effect following data-driven covariate selection, is assessed empirically and compared with existing methods using simulation studies.

A New Criterion for Confounder Selection

If any subset of the observed covariates suffices to control for confounding, then the set of covariates chosen by the criterion will also suffice, and it is shown that other criteria for confounding control do not have this property.
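The property described in this summary can be written more formally; the notation below (treatment $A$, potential outcome $Y(a)$, observed covariates $C$, selected set $C^{*}$) is introduced here only to restate the sentence above.

```latex
% If some subset S of the observed covariates C suffices to control confounding,
% then so does the set C* chosen by the criterion.
\[
  \exists\, S \subseteq C :\; Y(a) \perp\!\!\!\perp A \mid S
  \quad\Longrightarrow\quad
  Y(a) \perp\!\!\!\perp A \mid C^{*},
\]
where $C^{*} \subseteq C$ denotes the subset selected by the proposed criterion.
```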

Discussion of “Data‐driven confounder selection via Markov and Bayesian networks” by Häggström

An alternative way to represent counterfactual causal models with graphs is presented, different approaches to selecting adjustment sets are discussed, and some comments are made on the specific data example.

Effects of adjusting for instrumental variables on bias and precision of effect estimates.

The results indicate that effect estimates that are conditional on a perfect IV or near-IV may have larger bias and variance than the unconditional estimate; however, in most scenarios considered, the increases in error due to conditioning were small compared with the total estimation error.
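As a rough illustration of the direction of this result, and not the authors' simulation design, the toy linear model below (one unmeasured confounder U, one perfect instrument Z, all coefficients set to 1 by assumption) shows how conditioning on the instrument can amplify bias and inflate variance.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def one_replication(n=500, tau=1.0):
    z = rng.normal(size=n)             # instrument: affects the exposure only
    u = rng.normal(size=n)             # unmeasured confounder
    a = z + u + rng.normal(size=n)     # exposure
    y = tau * a + u + rng.normal(size=n)

    unadjusted = sm.OLS(y, sm.add_constant(a)).fit().params[1]
    adjusted = sm.OLS(y, sm.add_constant(np.column_stack([a, z]))).fit().params[1]
    return unadjusted, adjusted

est = np.array([one_replication() for _ in range(2000)])
for label, col in [("unadjusted", est[:, 0]), ("adjusted for Z", est[:, 1])]:
    print(f"{label}: bias = {col.mean() - 1.0:+.3f}, sd = {col.std():.3f}")
# In this toy setup the Z-adjusted estimate shows both larger bias and larger sd.
```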
...