Corpus ID: 218684433

An Analysis of the Adaptation Speed of Causal Models

@inproceedings{Priol2021AnAO,
  title={An Analysis of the Adaptation Speed of Causal Models},
  author={R{\'e}mi Le Priol and Reza Babanezhad Harikandeh and Yoshua Bengio and Simon Lacoste-Julien},
  booktitle={AISTATS},
  year={2021}
}
We consider the problem of discovering the causal process that generated a collection of datasets. We assume that all these datasets were generated by unknown sparse interventions on a structural causal model (SCM) $G$, that we want to identify. Recently, Bengio et al. (2020) argued that among all SCMs, $G$ is the fastest to adapt from one dataset to another, and proposed a meta-learning criterion to identify the causal direction in a two-variable SCM. While the experiments were promising, the…


Toward Causal Representation Learning
Fundamental concepts of causal inference are reviewed and related to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research.
Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?
A functional modular probing method is used to analyze deep model structures under the OOD setting, demonstrating that even in biased models (which focus on spurious correlations) there still exist unbiased functional subnetworks.
Prequential MDL for Causal Structure Learning with Neural Networks
It is shown that the prequential minimum description length principle can be used to derive a practical scoring function for Bayesian networks when flexible and overparametrized neural networks are used to model the conditional probability distributions between observed variables.
Inductive Biases for Deep Learning of Higher-Level Cognition
This work considers a larger list of inductive biases that humans and animals exploit, focusing on those which concern mostly higher-level and sequential conscious processing, and suggests they could potentially help build AI systems benefiting from humans' abilities in terms of flexible out-of-distribution and systematic generalization.

References

Showing 1–10 of 44 references
A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms
This work proposes to meta-learn causal structures based on how fast a learner adapts to new distributions arising from sparse distributional changes, e.g. due to interventions, actions of agents and other sources of non-stationarities, and shows that causal structures can be parameterized via continuous variables and learned end-to-end.
Domain Adaptation by Using Causal Inference to Predict Invariant Conditional Distributions
This work proposes an approach for solving causal domain adaptation problems that exploits causal inference and does not rely on prior knowledge of the causal graph, the type of interventions, or the intervention targets, and demonstrates a possible implementation on simulated and real-world data.
Invariant Causal Prediction for Nonlinear Models
This work presents and evaluates an array of methods for nonlinear and nonparametric versions of ICP for learning the causal parents of given target variables, and finds that an approach which first fits a nonlinear model with data pooled over all environments and then tests for differences between the residual distributions across environments is quite robust across a large variety of simulation settings.
Causal inference using invariant prediction: identification and confidence intervals
This work proposes to exploit the invariance of a prediction under a causal model for causal inference: given different experimental settings (for example, various interventions), the authors collect all models whose predictive accuracy is invariant across settings and interventions, which yields valid confidence intervals for the causal relationships in quite general scenarios.
Causal Dantzig: fast inference in linear structural equation models with hidden variables under additive interventions
Causal inference is known to be very challenging when only observational data are available. Randomized experiments are often costly and impractical, and in instrumental variable regression the number…
Estimating Causal Direction and Confounding of Two Discrete Variables
We propose a method to classify the causal relationship between two discrete variables given only the joint distribution of the variables, acknowledging that the method is subject to an inherent…
Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks
Empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions.
On causal and anticausal learning
The problem of function estimation in the case where an underlying causal model can be inferred is considered, and a hypothesis for when semi-supervised learning can help is formulated and corroborated with empirical results.
Exact Bayesian structure learning from uncertain interventions
We show how to apply the dynamic programming algorithm of Koivisto and Sood [KS04, Koi06], which computes the exact posterior marginal edge probabilities p(Gij = 1|D) of a DAG G given data D, to the…
Domain Adaptation under Target and Conditional Shift
This work considers domain adaptation under three possible scenarios, uses kernel embeddings of conditional as well as marginal distributions, and proposes to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain.