Corpus ID: 219955853

Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data

Sindy Löwe, David Madras, Richard S. Zemel and Max Welling
Standard causal discovery methods must fit a new model whenever they encounter samples from a new underlying causal graph. However, these samples often share relevant information - for instance, the dynamics describing the effects of causal relations - which is lost when following this approach. We propose Amortized Causal Discovery, a novel framework that leverages such shared dynamics to learn to infer causal relations from time-series data. This enables us to train a single, amortized model… 
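
The amortization idea above can be illustrated with a toy stand-in (not the paper's neural model): one fixed inference function, shared across samples from different underlying causal graphs, maps any time series directly to an adjacency estimate with no per-sample refitting. The VAR(1) simulator, the lag-1 correlation scoring, and all constants below are illustrative assumptions, sketched only to show the amortized train-once/infer-anywhere structure.

```python
import numpy as np

def simulate(adj, T=500, noise=0.1, seed=0):
    """Simulate a linear VAR(1) time series whose lag-1 dependencies
    follow the given adjacency matrix (adj[i, j] = 1 means i -> j)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = np.zeros((T, n))
    x[0] = noise * rng.normal(size=n)
    for t in range(1, T):
        x[t] = 0.5 * adj.T @ x[t - 1] + noise * rng.normal(size=n)
    return x

def infer_graph(x, threshold=0.2):
    """One fixed inference function applied to any sample: score each
    candidate edge i -> j by the absolute lag-1 correlation between
    series i and series j, with no per-sample model fitting."""
    T, n = x.shape
    past, future = x[:-1], x[1:]
    past = (past - past.mean(0)) / (past.std(0) + 1e-8)
    future = (future - future.mean(0)) / (future.std(0) + 1e-8)
    corr = np.abs(past.T @ future) / (T - 1)  # corr[i, j] scores i -> j
    np.fill_diagonal(corr, 0.0)               # ignore self-edges
    return (corr > threshold).astype(int)

# Chain graph 0 -> 1 -> 2; the same infer_graph would be reused,
# unchanged, on samples generated from any other graph.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
est = infer_graph(simulate(adj))
```

The point of the sketch is the division of labor: the simulator plays the role of shared dynamics, and `infer_graph` plays the role of the amortized model that is trained once and then applied to new samples without refitting.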

Learning interaction rules from multi-animal trajectories via augmented behavioral models

This paper proposes a new framework for learning Granger causality from multi-animal trajectories by augmenting theory-based behavioral models with interpretable data-driven models; the approach augments incomplete multi-agent behavioral models, described by time-varying dynamical systems, with neural networks.

Rhino: Deep Causal Temporal Relationship Learning With History-dependent Noise

This paper proposes a novel causal relationship learning framework for time-series data, called Rhino, which combines vector auto-regression, deep learning and variational inference to model non-linear relationships with instantaneous effects while allowing the noise distribution to be modulated by historical observations.

Deep Causal Learning: Representation, Discovery and Inference

It is pointed out that deep causal learning is important for extending the theory and broadening the applications of causal science, and is also an indispensable part of general artificial intelligence.

Learning Causal Discovery

It is argued that causal discovery (CD) should, where possible, take a supervised approach, in which CD procedures are learned from large datasets with known causal relations instead of being designed by a human specialist.

D’ya Like DAGs? A Survey on Structure Learning and Causal Discovery

This work primarily focuses on modern, continuous optimization methods, and provides reference to further resources such as benchmark datasets and software packages.

VICause: Simultaneous Missing Value Imputation and Causal Discovery with Groups

This work proposes VICAUSE, a novel approach that simultaneously tackles missing value imputation and causal discovery efficiently with deep learning, using a generative model with a structured latent space and a graph neural network-based architecture that scales to a large number of variables.

Causal discovery from conditionally stationary time-series

This work develops a causal discovery approach for a wide class of non-stationary time-series that are conditionally stationary; it is able to recover the underlying causal dependencies, provably with fully-observed states and empirically with hidden states.

Causal Inference for Time Series Analysis: Problems, Methods and Evaluation

This paper focuses on two causal inference tasks, i.e., treatment effect estimation and causal discovery for time series data, provides a comprehensive review of the approaches in each task, and curates a list of commonly used evaluation metrics and datasets for each task.

Temporally Disentangled Representation Learning

TDRL is proposed, a principled framework that recovers time-delayed latent causal variables and identifies their relations from measured sequential data, both in stationary environments and under different distribution shifts; the approach considerably outperforms existing baselines that do not correctly exploit this modular representation of changes.

DIDER: Discovering Interpretable Dynamically Evolving Relations

DIDER, Discovering Interpretable Dynamically Evolving Relations, a generic end-to-end interaction modeling framework with intrinsic interpretability, is introduced; the experimental results demonstrate that modeling disentangled and interpretable dynamic relations improves performance on trajectory forecasting tasks.

Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality

This work makes the case that the network topology of Granger causal relations is directly inferable from a structured sparse estimate of the internal parameters of statistical recurrent unit (SRU) networks trained to predict the processes' time-series measurements.

Neural Granger Causality for Nonlinear Time Series

This work proposes a class of nonlinear methods by applying structured multilayer perceptrons (MLPs) or recurrent neural networks (RNNs) combined with sparsity-inducing penalties on the weights to extract the Granger causal structure.
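
As a hedged sketch of the idea behind sparsity-penalized Granger selection, the linear special case below regresses each series on the lag-1 values of all series with an L1 penalty, solved by ISTA (a gradient step followed by soft-thresholding), and reads nonzero weights as Granger-causal links. The structured MLPs/RNNs and group penalties of the actual method are replaced by this simpler stand-in; the simulator and all constants are illustrative assumptions.

```python
import numpy as np

def simulate(adj, T=500, noise=0.1, seed=0):
    """Linear VAR(1) simulator; adj[i, j] = 1 means series i drives j."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = np.zeros((T, n))
    x[0] = noise * rng.normal(size=n)
    for t in range(1, T):
        x[t] = 0.5 * adj.T @ x[t - 1] + noise * rng.normal(size=n)
    return x

def lasso_granger(x, lam=0.08, lr=0.1, steps=1500, cutoff=0.1):
    """Regress each standardized series on the lag-1 values of all
    series under an L1 penalty, solved by ISTA; surviving nonzero
    weights are read as Granger-causal links."""
    x = (x - x.mean(0)) / (x.std(0) + 1e-8)
    X, Y = x[:-1], x[1:]
    m = X.shape[0]
    W = np.zeros((x.shape[1], x.shape[1]))    # W[i, j]: lagged i -> j
    for _ in range(steps):
        grad = X.T @ (X @ W - Y) / m          # least-squares gradient
        W = W - lr * grad
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)  # prox step
    return (np.abs(W) > cutoff).astype(int)

adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
est = lasso_granger(simulate(adj))
```

In the nonlinear methods the same selection principle applies to the first-layer weights of a per-target network: if every weight leaving an input series is driven to zero, that series is declared Granger non-causal for the target.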

Relating Graph Neural Networks to Structural Causal Models

A novel connection between GNNs and SCMs is established while providing an extended view on general neural-causal models, and a new model class for GNN-based causal inference is presented that is necessary and sufficient for causal effect identification.

Amortized learning of neural causal representations

A novel algorithm called CRN is presented for learning causal models using neural networks; it scales much better with the number of variables and can take in previously learned information to facilitate the learning of new causal models.

Causal Discovery in Physical Systems from Videos

A model is proposed that tackles causal discovery from videos in an end-to-end fashion, without supervision on the ground-truth graph structure, to discover the structural dependencies among environmental and object variables; it can correctly identify the interactions from a short sequence of images and make long-term future predictions.

Information Bottleneck for Estimating Treatment Effects with Systematically Missing Covariates

This paper trains an information bottleneck to perform a low-dimensional compression of covariates by explicitly considering the relevance of information for treatment effects and can reliably and accurately estimate treatment effects even in the absence of a full set of covariate information at test time.

Causal Discovery from Multiple Data Sets with Non-Identical Variable Sets

This paper proposes a principled method to uniquely identify causal relationships over the integrated set of variables from multiple data sets, in linear, non-Gaussian cases, and presents two types of approaches to parameter estimation.

Discovering Nonlinear Relations with Minimum Predictive Information Regularization

This work introduces a novel minimum predictive information regularization method to infer directional relations from time series, allowing deep learning models to discover nonlinear relations.
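
One way to picture quantifying the predictive information each input carries is the permutation-importance stand-in below (not the paper's learned noise-injection regularizer): fit one linear next-step predictor, corrupt one input series at a time by permuting it, and flag a directed relation wherever the corruption noticeably raises the prediction error for a target. The VAR simulator and all thresholds are illustrative assumptions.

```python
import numpy as np

def simulate(adj, T=500, noise=0.1, seed=0):
    """Linear VAR(1) simulator; adj[i, j] = 1 means series i drives j."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = np.zeros((T, n))
    x[0] = noise * rng.normal(size=n)
    for t in range(1, T):
        x[t] = 0.5 * adj.T @ x[t - 1] + noise * rng.normal(size=n)
    return x

def relations_by_corruption(x, threshold=0.1, seed=0):
    """Fit one linear next-step predictor, then permute each input
    series in turn; an edge i -> j is flagged when corrupting series i
    noticeably raises the prediction error for series j."""
    rng = np.random.default_rng(seed)
    x = (x - x.mean(0)) / (x.std(0) + 1e-8)
    X, Y = x[:-1], x[1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    base = ((X @ W - Y) ** 2).mean(0)          # per-target baseline MSE
    n = x.shape[1]
    delta = np.zeros((n, n))
    for i in range(n):
        Xc = X.copy()
        Xc[:, i] = rng.permutation(Xc[:, i])   # destroy info in series i
        delta[i] = ((Xc @ W - Y) ** 2).mean(0) - base
    return (delta > threshold).astype(int)

adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
est = relations_by_corruption(simulate(adj))
```

The actual method makes this differentiable by learning per-input noise amplitudes as a regularizer on a deep predictor; the permutation here is a crude discrete analogue of the same "how much does prediction suffer without this input" question.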

Integrating overlapping datasets using bivariate causal discovery

This work adapts and extends elegant algorithms for discovering causal relations beyond conditional independence to the problem of learning consistent causal structures from multiple datasets with overlapping variables belonging to the same generating process, providing a sound and complete algorithm that outperforms previous approaches on synthetic and real data.