# Relating Graph Neural Networks to Structural Causal Models

```bibtex
@article{Zecevic2021RelatingGN,
  title   = {Relating Graph Neural Networks to Structural Causal Models},
  author  = {M. Zecevic and Devendra Singh Dhami and Petar Velickovic and Kristian Kersting},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2109.04173}
}
```

Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations. For most processes of interest the underlying SCM will only be partially observable, so causal inference tries to leverage any exposed information. Graph neural networks (GNNs), as universal approximators on structured input, are a viable candidate for causal learning, suggesting a tighter integration with SCMs. To this effect we present a…
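To make the SCM notion concrete, here is a minimal sketch of what such a model looks like computationally: each variable is assigned by a mechanism of its graph parents plus exogenous noise. The graph and mechanisms below are illustrative choices, not taken from the paper.

```python
import numpy as np

# A toy structural causal model (SCM) over the chain X -> Y -> Z.
# Each variable is a function (mechanism) of its parents plus exogenous noise.
def sample_scm(n, rng):
    u_x = rng.normal(size=n)   # exogenous noise terms U_X, U_Y, U_Z
    u_y = rng.normal(size=n)
    u_z = rng.normal(size=n)
    x = u_x                    # X := U_X
    y = 2.0 * x + u_y          # Y := 2X + U_Y
    z = -1.0 * y + u_z         # Z := -Y + U_Z
    return x, y, z

rng = np.random.default_rng(0)
x, y, z = sample_scm(10_000, rng)
# In the large-sample limit, corr(X, Y) = 2 / sqrt(5) because Y := 2X + U_Y.
print(np.corrcoef(x, y)[0, 1])
```

A GNN operating on the causal graph's structure would, in this framing, learn the mechanisms (here the linear maps) from data rather than being handed them.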

## 13 Citations

### On the Tractability of Neural Causal Inference

- Computer Science, ArXiv
- 2021

It is proved that SPN-based causal inference is generally tractable, as opposed to standard MLP-based NCMs, and a new tractable NCM class is introduced that is efficient in inference and fully expressive in terms of Pearl's Causal Hierarchy.

### A Taxonomy for Inference in Causal Model Families

- Computer Science
- 2021

The impossibility result, alongside the taxonomy for tractability in causal models, can raise awareness for this novel research direction, since achieving success with causality in real-world downstream tasks will depend not only on learning correct models but also on having the practical ability to gain access to model inferences.

### Deconfounded Training for Graph Neural Networks

- Computer Science, ArXiv
- 2021

This work revisits GNN modeling from the causal perspective and presents a new paradigm of deconfounded training (DTP) that better mitigates the confounding effect and latches onto the critical information, to enhance the representation and generalization ability.

### Diffusion Causal Models for Counterfactual Estimation

- Computer Science, CLeaR
- 2022

The proposed Diff-SCM is a deep structural causal model that builds on recent advances in generative energy-based models; it produces more realistic and minimal counterfactuals than baselines on MNIST data and can also be applied to ImageNet data.

### Deep Causal Learning: Representation, Discovery and Inference

- Computer Science, ArXiv
- 2022

It is pointed out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence.

### Causal Attention for Interpretable and Generalizable Graph Classification

- Computer Science, KDD
- 2022

The Causal Attention Learning (CAL) strategy is proposed, which discovers the causal patterns and mitigates the confounding effect of shortcuts in graph classification by employing attention modules to estimate the causal and shortcut features of the input graph.

### VACA: Designing Variational Graph Autoencoders for Causal Queries

- Computer Science, AAAI
- 2022

In this paper, we introduce VACA, a novel class of variational graph autoencoders for causal inference in the absence of hidden confounders, when only observational data and the causal graph are…

### Neural Causal Models for Counterfactual Identification and Estimation

- Computer Science, ArXiv
- 2022

This paper shows that neural causal models (NCMs) are expressive enough and encode the structural constraints necessary for performing counterfactual reasoning, and develops an algorithm for simultaneously identifying and estimating counterfactual distributions.

### VACA: Design of Variational Graph Autoencoders for Interventional and Counterfactual Queries

- Computer Science, ArXiv
- 2021

In this paper, we introduce VACA, a novel class of variational graph autoencoders for causal inference in the absence of hidden confounders, when only observational data and the causal graph are…

### B-MEG: Bottlenecked-Microservices Extraction Using Graph Neural Networks

- Computer Science, ICPE
- 2022

The effectiveness of graph neural network models in detecting bottlenecks is explored, and preliminary analysis shows that the proposed framework, B-MEG, produces promising results, especially for applications with complex call graphs.

## References

Showing 1–10 of 51 references

### Deep Structural Causal Models for Tractable Counterfactual Inference

- Computer Science, NeurIPS
- 2020

The experimental results indicate that the proposed framework can successfully train deep SCMs that are capable of all three levels of Pearl's ladder of causation: association, intervention, and counterfactuals, giving rise to a powerful new approach for answering causal questions in imaging applications and beyond.
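The three levels of Pearl's ladder mentioned in the abstract can be illustrated on a toy linear SCM; the two-variable model below is a hypothetical stand-in, not the deep SCM of the paper.

```python
import numpy as np

# Toy linear SCM:  X := U_X,  Y := 3X + U_Y  (illustrative only)
rng = np.random.default_rng(1)
n = 100_000
u_x, u_y = rng.normal(size=n), rng.normal(size=n)
x = u_x
y = 3.0 * x + u_y

# Level 1, association: observed regression slope of Y on X (~ 3).
slope_obs = np.cov(x, y)[0, 1] / np.var(x)

# Level 2, intervention do(X = 2): overwrite X, reuse the mechanism for Y.
x_do = np.full(n, 2.0)
y_do = 3.0 * x_do + u_y
effect = y_do.mean()            # E[Y | do(X = 2)] = 6 in the limit

# Level 3, counterfactual: for one unit with observed (x0, y0), abduct its
# noise u_y0 = y0 - 3*x0, then evaluate Y under the hypothetical X = 0.
x0, y0 = x[0], y[0]
u_y0 = y0 - 3.0 * x0
y_cf = 3.0 * 0.0 + u_y0         # this unit's Y had X been 0
```

The counterfactual step (abduction, action, prediction) is what requires the full SCM; observational data alone fixes only the first rung.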

### The Causal-Neural Connection: Expressiveness, Learnability, and Inference

- Computer Science, NeurIPS
- 2021

Leveraging the neural toolbox, an algorithm is developed that is both sufficient and necessary to determine whether a causal effect can be learned from data and then estimates the effect whenever identifiability holds (causal estimation).

### Neural Relational Inference for Interacting Systems

- Computer Science, Physics, ICML
- 2018

The NRI model is introduced: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data, in the form of a variational auto-encoder.

### Graph Attention Networks

- Computer Science, ICLR
- 2018

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior…
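The masked self-attention at the core of GAT can be sketched in a few lines; the single-head numpy version below follows the shape of the paper's attention coefficients (shared linear map, concatenation, LeakyReLU, neighborhood-masked softmax) but is a simplified illustration, not the reference implementation.

```python
import numpy as np

def gat_layer(h, adj, W, a, alpha=0.2):
    """One graph-attention head (simplified from the GAT formulation).

    h:   (N, F) node features      W: (F, F') weight matrix
    adj: (N, N) adjacency with self-loops (nonzero = edge)
    a:   (2*F',) attention vector
    """
    z = h @ W                                    # shared linear transform
    n = z.shape[0]
    e = np.zeros((n, n))                         # raw attention logits e_ij
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]]) # a^T [z_i || z_j]
            e[i, j] = s if s > 0 else alpha * s  # LeakyReLU
    e = np.where(adj > 0, e, -np.inf)            # mask: neighbors only
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # softmax over each neighborhood
    return att @ z                               # attention-weighted aggregation

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)  # path + self-loops
W = rng.normal(size=(3, 2))
a = rng.normal(size=(4,))
out = gat_layer(h, adj, W, a)                    # (4, 2) node embeddings
```

With the attention vector set to zero the softmax becomes uniform over each neighborhood, so the layer degenerates to simple neighbor averaging, which makes the role of the learned coefficients easy to see.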

### Time Series Deconfounder: Estimating Treatment Effects over Time in the Presence of Hidden Confounders

- Economics, Mathematics, ICML
- 2020

A method is proposed that leverages the assignment of multiple treatments over time to enable the estimation of treatment effects in the presence of multi-cause hidden confounders, together with a theoretical analysis for obtaining unbiased causal effects of time-varying exposures using the Time Series Deconfounder.

### Statistics and Causal Inference

- Philosophy
- 1985

Abstract Problems involving causal inference have dogged at the heels of statistics since its earliest days. Correlation does not imply causation, and yet causal conclusions drawn from a carefully…

### Variational Graph Auto-Encoders

- Computer Science, ArXiv
- 2016

The variational graph auto-encoder (VGAE) is introduced, a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) that can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.

### Representation Learning via Invariant Causal Mechanisms

- Computer Science, ICLR
- 2021

A novel self-supervised objective, Representation Learning via Invariant Causal Mechanisms (ReLIC), is proposed that enforces invariant prediction of proxy targets across augmentations through an invariance regularizer which yields improved generalization guarantees.

### Neural Message Passing for Quantum Chemistry

- Computer Science, ICML
- 2017

Using MPNNs, state-of-the-art results on an important molecular property prediction benchmark are demonstrated, and it is believed future work should focus on datasets with larger molecules or more accurate ground-truth labels.
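The message-passing template behind MPNNs reduces to two phases per step: each node aggregates messages from its in-neighbors, then updates its state. The numpy sketch below is a stripped-down instance (no edge features, sum aggregation, ReLU update), not the chemistry model from the paper.

```python
import numpy as np

def mpnn_step(h, edges, W_msg, W_upd):
    """One message-passing step of the generic MPNN template.

    h:     (N, F) node states
    edges: list of (src, dst) directed edges
    W_msg: (F, F) message weights    W_upd: (2F, F) update weights
    """
    n, f = h.shape
    m = np.zeros((n, W_msg.shape[1]))
    for src, dst in edges:             # message phase: m_dst += M(h_src)
        m[dst] += h[src] @ W_msg
    # update phase: h'_v = ReLU([h_v || m_v] W_upd)
    return np.maximum(np.concatenate([h, m], axis=1) @ W_upd, 0.0)

rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
edges = [(0, 1), (1, 2), (2, 0)]       # directed 3-cycle
W_msg = rng.normal(size=(4, 4))
W_upd = rng.normal(size=(8, 4))
h_next = mpnn_step(h, edges, W_msg, W_upd)   # (3, 4) updated node states
```

Stacking several such steps lets information propagate multiple hops, after which a readout over node states produces a graph-level prediction.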

### Causal Inference in Statistics: A Primer

- Computer Science
- 2016

Judea Pearl presents a comprehensive introduction to the field of causality, ideal for beginners in statistics, with examples from classical statistics presented throughout to demonstrate the need for causality in resolving decision-making dilemmas posed by data.