Corpus ID: 237420513

Estimating the probabilities of causation via deep monotonic twin networks

@article{Vlontzos2021EstimatingTP,
  title={Estimating the probabilities of causation via deep monotonic twin networks},
  author={Athanasios Vlontzos and Bernhard Kainz and Ciar{\'a}n M Gilligan-Lee},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.01904}
}
There has been much recent work using machine learning to answer causal queries. Most focus on interventional queries, such as the conditional average treatment effect. However, as noted by Pearl, interventional queries only form part of a larger hierarchy of causal queries, with counterfactuals sitting at the top. Despite this, our community has not fully succeeded in adapting machine learning tools to answer counterfactual queries. This work addresses this challenge by showing how to…
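
The twin-network construction at the heart of the paper can be illustrated with a small sketch. Below is a minimal PyTorch version, assuming a single binary treatment and a Gaussian exogenous-noise variable U; the class name, dimensions, and architecture are hypothetical illustrations of the general twin-network idea, not the paper's exact model, which additionally enforces monotonicity constraints.

import torch
import torch.nn as nn

class TwinNetwork(nn.Module):
    # One structural function f(X, T, U) is shared by two branches: the
    # factual branch sees the observed treatment, the twin branch sees the
    # intervened treatment. Sharing the exogenous noise U links the two
    # worlds, which is what makes the counterfactual query answerable.
    def __init__(self, x_dim, u_dim=4, hidden=32):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(x_dim + 1 + u_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.u_dim = u_dim

    def forward(self, x, t_factual, t_twin):
        u = torch.randn(x.size(0), self.u_dim)  # shared exogenous noise
        y_factual = self.f(torch.cat([x, t_factual, u], dim=-1))
        y_twin = self.f(torch.cat([x, t_twin, u], dim=-1))
        return y_factual, y_twin

net = TwinNetwork(x_dim=10)
x = torch.randn(8, 10)
t = torch.ones(8, 1)
y_f, y_cf = net(x, t, 1 - t)  # counterfactual world: treatment flipped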


Citations

Causal Inference in AI Education: A Primer
The study of causal inference has seen recent momentum in machine learning and artificial intelligence (AI), particularly in the domains of transfer learning, reinforcement learning, automated…

References

Showing 1-10 of 44 references
Perfect Match: A Simple Method for Learning Representations For Counterfactual Inference With Neural Networks
Perfect Match is presented, a method for training neural networks for counterfactual inference that is easy to implement, compatible with any architecture, does not add computational complexity or hyperparameters, and extends to any number of treatments.
Copy, paste, infer: A robust analysis of twin networks for counterfactual inference
Twin networks are a simple method for estimating counterfactuals, originally proposed to have several advantages over standard counterfactual inference. However, no study yet exists exploring in what…
Integrating overlapping datasets using bivariate causal discovery
This work adapts and extends elegant algorithms for discovering causal relations beyond conditional independence to the problem of learning consistent causal structures from multiple datasets with overlapping variables belonging to the same generating process, providing a sound and complete algorithm that outperforms previous approaches on synthetic and real data.
Deep Counterfactual Networks with Propensity-Dropout
This work proposes a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score.
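
The mechanism in this entry translates into a small schedule function: each training example's dropout rate is derived from its estimated propensity score. The entropy-based form below is an illustrative assumption about the shape of that dependence (more dropout where treated/control overlap is poor), not necessarily the paper's exact formula.

import numpy as np

def propensity_dropout(p, gamma=1.0):
    # Illustrative schedule (assumed form): dropout is 0 when the
    # propensity score p is 0.5 (good overlap) and rises toward 0.5
    # as p approaches 0 or 1 (poor overlap, stronger regularization).
    p = np.clip(p, 1e-6, 1 - 1e-6)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # in [0, 1]
    return 1.0 - (gamma / 2.0) * (1.0 + entropy)

propensity_dropout(np.array([0.5, 0.9, 0.99]))  # -> roughly [0.0, 0.27, 0.46]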
Causal Effect Inference with Deep Latent-Variable Models
This work builds on recent advances in latent-variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect, and shows its method is significantly more robust than existing methods and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.
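
The causal structure such latent-variable approaches assume can be written compactly: a latent confounder Z generates the observed covariates X, the treatment T, and the outcome Y, and inference recovers Z from its proxies. Schematically (notation mine, following the usual presentation of such models):

p(Z, X, T, Y) = p(Z)\, p(X \mid Z)\, p(T \mid Z)\, p(Y \mid T, Z)

with the treatment effect then obtained by integrating out the inferred Z rather than conditioning on X directly.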
Learning Representations for Counterfactual Inference
A new algorithmic framework for counterfactual inference is proposed which brings together ideas from domain adaptation and representation learning and significantly outperforms the previous state-of-the-art approaches.
Estimating individual treatment effect: generalization bounds and algorithms
A novel, simple and intuitive generalization-error bound is given showing that the expected ITE estimation error of a representation is bounded by the sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation.
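
The structure of that bound is worth displaying. Up to constants and a variance term, it has the shape below, where \epsilon_F is factual (standard supervised) error, \Phi is the learned representation, and IPM is an integral probability metric between the induced treated and control distributions; this is a schematic paraphrase, not the theorem's exact statement:

\epsilon_{\mathrm{ITE}}(h, \Phi) \;\lesssim\; \epsilon_F^{t=0}(h, \Phi) + \epsilon_F^{t=1}(h, \Phi) + \mathrm{IPM}\big(p_\Phi^{t=0},\, p_\Phi^{t=1}\big)

Minimizing the right-hand side is what motivates balancing the treated and control representations during training.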
Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes
A novel multi-task learning framework is developed in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS), together with a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS.
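
The linear coregionalization kernel mentioned here has a simple closed form: a shared input kernel k(x, x') scaled by a learned PSD matrix B that couples the two outputs (treated and control surfaces). A generic NumPy sketch of that construction, not the authors' implementation:

import numpy as np

def rbf(X, X2, lengthscale=1.0):
    # Squared-exponential kernel on the covariates.
    d2 = np.sum((X[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def lcm_kernel(X, t, X2, t2, B, lengthscale=1.0):
    # K((x, t), (x', t')) = B[t, t'] * k(x, x'): the 2x2 PSD matrix B
    # encodes how strongly the two outcome surfaces co-vary.
    return B[np.ix_(t, t2)] * rbf(X, X2, lengthscale)

B = np.array([[1.0, 0.6],
              [0.6, 1.0]])                      # illustrative coupling
X = np.random.randn(5, 3)
t = np.array([0, 1, 0, 1, 1])
K = lcm_kernel(X, t, X, t, B)                   # 5x5 joint covariance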
Counterexample-Guided Learning of Monotonic Neural Networks
This work develops a counterexample-guided technique to provably enforce monotonicity constraints at prediction time, and proposes a technique to use monotonicity as an inductive bias for deep learning.
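
For contrast with the counterexample-guided approach in this entry, the simplest way to make a network monotone is by construction: keep its weights non-negative. The cited work instead trains an unconstrained network and repairs violations found by a verifier. A minimal sketch of the by-construction route (an assumed architecture, not the paper's method):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneLinear(nn.Module):
    # Softplus keeps the effective weights non-negative, so the layer
    # is non-decreasing in every input coordinate.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.b = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return F.linear(x, F.softplus(self.w), self.b)

# Composing non-decreasing layers with non-decreasing activations
# yields a network that is monotone end-to-end.
net = nn.Sequential(MonotoneLinear(1, 16), nn.Tanh(), MonotoneLinear(16, 1))
y = net(torch.linspace(-2, 2, 5).unsqueeze(-1))  # non-decreasing in input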
Learning Functional Causal Models with Generative Neural Networks
We introduce a new approach to functional causal modeling from observational data, called Causal Generative Neural Networks (CGNN). CGNN leverages the power of neural networks to learn a generative…