Corpus ID: 227127115

Balance Regularized Neural Network Models for Causal Effect Estimation

@article{Farajtabar2020BalanceRN,
  title={Balance Regularized Neural Network Models for Causal Effect Estimation},
  author={Mehrdad Farajtabar and Andrew Lee and Yuanjian Feng and Vishal Gupta and Peter Dolan and Harish Chandran and Martin Szummer},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.11199}
}
Estimating individual and average treatment effects from observational data is an important problem in many domains such as healthcare and e-commerce. In this paper, we advocate balance regularization of multi-head neural network architectures. Our work is motivated by representation learning techniques to reduce differences between treated and untreated distributions that potentially arise due to confounding factors. We further regularize the model by encouraging it to predict control outcomes…
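The abstract's core idea — a shared representation feeding separate treated/control heads, with a penalty that pulls the treated and control representation distributions together — can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the two-head architecture is taken from the abstract, but the linear-MMD penalty (squared distance between group mean representations), the single ReLU layer, and all names (`TwoHeadNet`, `balance_penalty`, `lam`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TwoHeadNet:
    """Shared representation Phi(x) feeding separate treated/control heads."""
    def __init__(self, d_in, d_rep=8):
        self.W = rng.normal(scale=0.5, size=(d_in, d_rep))   # shared layer
        self.w1 = rng.normal(scale=0.5, size=d_rep)          # treated head
        self.w0 = rng.normal(scale=0.5, size=d_rep)          # control head

    def rep(self, X):
        return relu(X @ self.W)

def balance_penalty(phi_t, phi_c):
    # Linear-MMD surrogate for distribution distance: squared distance
    # between the mean treated and mean control representations.
    diff = phi_t.mean(axis=0) - phi_c.mean(axis=0)
    return float(diff @ diff)

def regularized_loss(net, X, t, y, lam=1.0):
    phi = net.rep(X)
    # Each unit is scored by the head matching its observed treatment.
    pred = np.where(t == 1, phi @ net.w1, phi @ net.w0)
    factual_mse = float(np.mean((pred - y) ** 2))
    return factual_mse + lam * balance_penalty(phi[t == 1], phi[t == 0])

# Tiny synthetic example: 100 units, 5 covariates, random treatment.
X = rng.normal(size=(100, 5))
t = rng.integers(0, 2, size=100)
y = X @ rng.normal(size=5) + 0.5 * t + rng.normal(scale=0.1, size=100)
net = TwoHeadNet(d_in=5)
total = regularized_loss(net, X, t, y)
```

In a full training loop, both terms would be minimized jointly so the representation becomes predictive of outcomes while making treated and control groups hard to distinguish; the paper's actual regularizers may differ from this linear-MMD stand-in.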
1 Citation


Matched sample selection with GANs for mitigating attribute confounding
This work proposes a matching approach that selects a subset of images from the full dataset with balanced attribute distributions across protected attributes, demonstrates it in the context of gender bias in multiple open-source facial-recognition classifiers, and finds that bias persists after removing key confounders via matching.

References

SHOWING 1-10 OF 51 REFERENCES
Causal Effect Inference with Deep Latent-Variable Models
This work builds on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect, and shows that its method is significantly more robust than existing methods and matches the state of the art on previous benchmarks focused on individual treatment effects.
Representation Learning for Treatment Effect Estimation from Observational Data
A local similarity preserved individual treatment effect (SITE) estimation method based on deep representation learning is proposed; it preserves local similarity and balances data distributions simultaneously by focusing on several hard samples in each mini-batch.
Adversarial Balancing-based Representation Learning for Causal Effect Inference with Observational Data
Adversarial Balancing-based representation learning for Causal Effect Inference (ABCEI) uses adversarial learning to balance the distributions of the treatment and control groups in the latent representation space, without any assumption on the form of the treatment selection/assignment function.
Deep Counterfactual Networks with Propensity-Dropout
This work proposes a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score.
Learning Representations for Counterfactual Inference
A new algorithmic framework for counterfactual inference is proposed that brings together ideas from domain adaptation and representation learning, and significantly outperforms previous state-of-the-art approaches.
Transfer Learning for Estimating Causal Effects using Neural Networks
New algorithms for estimating heterogeneous treatment effects are developed, combining recent developments in transfer learning for neural networks with insights from the causal inference literature; they can perform an order of magnitude better than existing benchmarks while using a fraction of the data.
Matching on Balanced Nonlinear Representations for Treatment Effects Estimation
This work casts counterfactual prediction as a classification problem, develops a kernel learning model with a domain adaptation constraint, and designs a novel matching estimator for observational data that reduces the dimension of covariates by projecting data to a low-dimensional subspace.
Estimating individual treatment effect: generalization bounds and algorithms
A novel, simple, and intuitive generalization-error bound is given, showing that the expected ITE estimation error of a representation is bounded by the sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation.
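The bound summarized above can be written schematically as follows. This is a paraphrase, not the theorem's precise statement: additive constants are omitted, and the exact definitions of the error terms, the constant $B_\Phi$, and the integral probability metric $\mathrm{IPM}_G$ should be taken from the paper itself.

```latex
\epsilon_{\mathrm{ITE}}(\Phi, h)
  \;\le\;
  2 \Big(
      \epsilon_F^{t=0}(\Phi, h)
    + \epsilon_F^{t=1}(\Phi, h)
    + B_\Phi \, \mathrm{IPM}_G\!\big( p_\Phi^{t=1},\; p_\Phi^{t=0} \big)
  \Big)
```

Informally: the ITE error is controlled by the factual prediction error on each treatment arm plus a term measuring how far apart the treated and control covariate distributions are after mapping through the representation $\Phi$ — which is what motivates balance regularization in representation space.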
Deep representation learning for individualized treatment effect estimation using electronic health records
A novel hybrid model bridging multi-task deep learning and k-nearest neighbors (KNN) for ITE estimation is proposed, which achieves competitive performance compared with state-of-the-art models and reveals several findings that are consistent with existing medical domain knowledge.
Learning Counterfactual Representations for Estimating Individual Dose-Response Curves
A novel machine-learning approach is presented for learning counterfactual representations with neural networks to estimate individual dose-response curves for any number of treatments with continuous dosage parameters.