Corpus ID: 226246156

Causal Autoregressive Flows

@article{Khemakhem2021CausalAF,
  title={Causal Autoregressive Flows},
  author={Ilyes Khemakhem and Ricardo Pio Monti and Robert Leech and Aapo Hyv{\"a}rinen},
  journal={ArXiv},
  year={2021},
  volume={abs/2011.02268}
}
Two apparently unrelated fields -- normalizing flows and causality -- have recently received considerable attention in the machine learning community. In this work, we highlight an intrinsic correspondence between a simple family of flows and identifiable causal models. We exploit the fact that autoregressive flow architectures define an ordering over variables, analogous to a causal ordering, to show that they are well-suited to performing a range of causal inference tasks. First, we show that… 
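To make the correspondence concrete, here is a minimal illustrative sketch (not the authors' implementation) of a bivariate affine autoregressive flow whose fixed variable ordering x1 → x2 doubles as a causal ordering: x1 = z1 and x2 = exp(s(x1)) · z2 + t(x1), with independent base noise z. The conditioners s and t below are hand-picked placeholders standing in for learned networks.

```python
# Illustrative sketch only: a bivariate affine autoregressive flow whose fixed
# ordering x1 -> x2 plays the role of a causal ordering. The conditioners s, t
# are toy placeholders for learned (e.g. neural network) conditioners.
import numpy as np

rng = np.random.default_rng(0)

def s(x1):  # log-scale conditioner (placeholder)
    return 0.5 * np.tanh(x1)

def t(x1):  # shift conditioner (placeholder)
    return np.sin(x1)

def forward(z):
    """Generative direction: independent noise z -> observation x."""
    x1 = z[:, 0]                              # root variable: x1 = z1
    x2 = np.exp(s(x1)) * z[:, 1] + t(x1)      # child: affine in z2 given x1
    return np.stack([x1, x2], axis=1)

def inverse(x):
    """Normalizing direction: observation x -> noise z, with log|det Jacobian|."""
    z1 = x[:, 0]
    z2 = (x[:, 1] - t(x[:, 0])) * np.exp(-s(x[:, 0]))
    log_det = -s(x[:, 0])                     # d z2 / d x2 = exp(-s(x1))
    return np.stack([z1, z2], axis=1), log_det

def log_likelihood(x):
    """Change of variables with a standard normal base distribution."""
    z, log_det = inverse(x)
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=1)
    return log_base + log_det

x = forward(rng.standard_normal((5, 2)))
print(log_likelihood(x))
```

In this reading, the forward pass acts like a structural equation model with the flow's ordering as the causal ordering, while the inverse pass gives tractable likelihoods for fitting the flow.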

Figures and Tables from this paper

Citations

Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions
TLDR
Adding observational data may help to estimate causal effects more accurately even in the presence of unobserved confounders, and it is found that adding observational training samples can often substantially reduce the number of interventional samples needed without sacrificing accuracy.
Deep End-to-end Causal Inference
TLDR
The design principle of the method can generalize beyond DECI, providing a general End-to-end Causal Inference (ECI) recipe, which enables different ECI frameworks to be built using existing methods.
Optimal transport for causal discovery
TLDR
A novel dynamical-system view of FCMs is provided, a new framework for identifying causal direction in the bivariate case is proposed, and a novel optimal-transport-based algorithm for ANMs is developed that is robust to the choice of models and is extended to post-nonlinear models.
VACA: Design of Variational Graph Autoencoders for Interventional and Counterfactual Queries
In this paper, we introduce VACA, a novel class of variational graph autoencoders for causal inference in the absence of hidden confounders, when only observational data and the causal graph are available.
Variational Flow Graphical Model
This paper introduces a novel approach to embed flow-based models with hierarchical structures. The proposed framework is named Variational Flow Graphical (VFG) Model. VFGs learn the representation…
Don’t Throw it Away! The Utility of Unlabeled Data in Fair Decision Making
TLDR
A novel method based on a variational autoencoder is proposed for practical fair decision-making; it learns an unbiased data representation by leveraging both labeled and unlabeled data, uses the representation to learn a policy in an online process, and is empirically validated to converge to the optimal policy according to the ground truth with low variance.
First do no harm: counterfactual objective functions for safe & ethical AI
To act safely and ethically in the real world, agents must be able to reason about harm and avoid harmful actions. In this paper we develop the first statistical definition of harm and a framework for incorporating harm into algorithmic decisions.
Counterfactual harm
TLDR
The first statistical definition of harm and a framework for incorporating harm into algorithmic decisions are developed, arguing that harm is fundamentally a counterfactual quantity and that standard machine learning algorithms that cannot perform counterfactual reasoning are guaranteed to pursue harmful policies in certain environments.
Personalized Public Policy Analysis in Social Sciences using Causal-Graphical Normalizing Flows
TLDR
Because IPW and RWR, like other traditional methods, lack the capability of counterfactual inference, c-GNFs will likely play a major role in tailoring personalized treatment, facilitating P3A, and optimizing social interventions, in contrast to the current ‘one-size-fits-all’ approach of existing methods.
Causal discovery from multi-domain data using the independence of modularities
TLDR
A general framework for causal direction identification in multi-domain data, without assuming a specific causal mechanism or data type, is proposed; it is verified on synthetic data and successfully identifies the causal direction in two real-world datasets.
…

References

Showing 1-10 of 50 references
Causal Discovery with General Non-Linear Relationships using Non-Linear ICA
TLDR
It is shown rigorously that in the case of bivariate causal discovery, such non-linear ICA can be used to infer the causal direction via a series of independence tests, and an alternative measure of causal direction based on asymptotic approximations to the likelihood ratio is proposed.
Neural Autoregressive Flows
TLDR
It is demonstrated that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions, and their greater expressivity allows them to better capture multimodal target distributions.
Causal discovery with continuous additive noise models
TLDR
If the observational distribution follows a structural equation model with an additive noise structure, the directed acyclic graph becomes identifiable from the distribution under mild conditions, which constitutes an interesting alternative to traditional methods that assume faithfulness and identify only the Markov equivalence class of the graph, thus leaving some edges undirected.
Nonlinear causal discovery with additive noise models
TLDR
It is shown that the basic linear framework can be generalized to nonlinear models and that, in this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified.
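As a rough, self-contained illustration of the regress-and-test recipe underlying additive noise models (a sketch under assumed settings, not the cited paper's code), the snippet below fits a kernel ridge regression in both directions and prefers the direction whose residuals look more independent of the putative cause, scored with a simple biased HSIC statistic; the kernel choices, median-heuristic bandwidth, regularization strength, and simulated data are all illustrative assumptions.

```python
# Illustrative sketch: bivariate additive-noise-model (ANM) direction test.
# Fit a nonparametric regression in each direction and prefer the direction
# whose residuals look more independent of the input (lower HSIC score).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def hsic(a, b):
    """Biased HSIC estimate between two 1-d samples (Gaussian kernels)."""
    a, b = a.reshape(-1, 1), b.reshape(-1, 1)
    n = len(a)
    def gram(v):
        d2 = (v - v.T) ** 2
        bw = np.median(d2[d2 > 0]) + 1e-12     # median-heuristic bandwidth
        return np.exp(-d2 / bw)
    K, L = gram(a), gram(b)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def anm_score(cause, effect):
    """Regress effect on cause; return HSIC(residuals, cause). Lower = more plausible."""
    model = KernelRidge(kernel="rbf", alpha=0.1).fit(cause.reshape(-1, 1), effect)
    resid = effect - model.predict(cause.reshape(-1, 1))
    return hsic(resid, cause)

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.tanh(x) + 0.3 * rng.uniform(-1, 1, 300)  # ground truth: x -> y

print("score x->y:", anm_score(x, y))           # expected to be the smaller score
print("score y->x:", anm_score(y, x))
```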
Graphical Normalizing Flows
TLDR
The graphical normalizing flow is proposed, a new invertible transformation with either a prescribed or a learnable graphical structure that provides a promising way to inject domain knowledge into normalizing flows while preserving both the interpretability of Bayesian networks and the representation capacity of normalizing flows.
A Linear Non-Gaussian Acyclic Model for Causal Discovery
TLDR
This work shows how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances.
On Estimation of Functional Causal Models
TLDR
This article shows that, for any acyclic functional causal model, minimizing the mutual information between the hypothetical cause and the noise term is equivalent to maximizing the data likelihood with a flexible model for the distribution of the noise term, and proposes a Bayesian nonparametric approach based on mutual information minimization.
Improved Variational Inference with Inverse Autoregressive Flow
TLDR
A new type of normalizing flow, inverse autoregressive flow (IAF), is proposed that, in contrast to earlier published flows, scales well to high-dimensional latent spaces and significantly improves upon diagonal Gaussian approximate posteriors.
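A toy sketch of a single IAF step in three dimensions (illustrative only, not the paper's code): each output is an affine function of the corresponding input, with shift and log-scale depending only on earlier input dimensions, so the transformation and its log-determinant are computed in parallel while inverting it is sequential. The conditioner function below is a hand-written stand-in for a masked autoregressive network.

```python
# Illustrative sketch of one inverse autoregressive flow (IAF) step in 3-d.
import numpy as np

def conditioner(z):
    """Toy autoregressive conditioner: (mu_i, log_sigma_i) depend only on z_{<i}."""
    mu = np.array([0.0, np.sin(z[0]), 0.5 * z[0] + np.tanh(z[1])])
    log_sigma = np.array([0.0, 0.3 * np.tanh(z[0]), 0.1 * (z[0] - z[1])])
    return mu, log_sigma

def iaf_forward(z):
    """Parallel direction used when transforming samples of the approximate posterior."""
    mu, log_sigma = conditioner(z)
    x = np.exp(log_sigma) * z + mu
    log_det = np.sum(log_sigma)                # log|det dx/dz|
    return x, log_det

def iaf_inverse(x):
    """Recovering z requires solving one dimension at a time."""
    z = np.zeros_like(x)
    for i in range(len(x)):
        mu, log_sigma = conditioner(z)         # only the z_{<i} entries matter for dim i
        z[i] = (x[i] - mu[i]) * np.exp(-log_sigma[i])
    return z

z = np.random.default_rng(0).standard_normal(3)
x, _ = iaf_forward(z)
print(np.allclose(iaf_inverse(x), z))          # True: the step is invertible
```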
Causal discovery and inference: concepts and recent methodological advances
TLDR
The constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, is presented, and the assumptions underlying its validity are discussed.
Causal inference by using invariant prediction: identification and confidence intervals
TLDR
This work proposes to exploit invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g. various interventions), the authors collect all models that do show invariance in their predictive accuracy across settings and interventions; this yields valid confidence intervals for the causal relationships in quite general scenarios.
…