Amortized Inference for Causal Structure Learning

@article{Lorch2022AmortizedIF,
  title={Amortized Inference for Causal Structure Learning},
  author={Lars Lorch and Scott Sussex and Jonas Rothfuss and Andreas Krause and Bernhard Sch{\"o}lkopf},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.12934}
}
Learning causal structure poses a combinatorial search problem that typically involves evaluating structures using a score or independence test. The resulting search is costly, and designing suitable scores or tests that capture prior knowledge is difficult. In this work, we propose to amortize the process of causal structure learning. Rather than searching over causal structures directly, we train a variational inference model to predict the causal structure from observational/interventional…
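To make the amortization idea concrete, here is a toy, hypothetical sketch (the simulator, correlation features, and logistic classifier are stand-ins of my own, not the paper's architecture): one predictor is trained on many simulated (dataset, graph) pairs, after which structure prediction on a new dataset is a single forward pass rather than a search.

import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200  # variables per system, samples per dataset

def simulate_scm():
    """Sample a random upper-triangular linear-Gaussian SCM and data from it."""
    G = np.triu(rng.random((d, d)) < 0.3, k=1)   # DAG adjacency (i -> j for i < j)
    W = G * rng.uniform(0.5, 2.0, (d, d))        # edge weights
    X = np.zeros((n, d))
    for j in range(d):                           # ancestral sampling in variable order
        X[:, j] = X @ W[:, j] + rng.normal(size=n)
    return X, G

def pair_features(X):
    """Toy per-pair summary statistics, standing in for a learned encoder."""
    Xs = (X - X.mean(0)) / X.std(0)
    C = Xs.T @ Xs / len(X)                       # correlation matrix
    return np.array([[C[i, j], C[i, j] ** 2, 1.0]
                     for i in range(d) for j in range(d) if i != j])

# Amortization: fit ONE edge classifier across many simulated systems.
theta = np.zeros(3)
for _ in range(2000):
    X, G = simulate_scm()
    F = pair_features(X)
    y = np.array([G[i, j] for i in range(d) for j in range(d) if i != j], float)
    p = 1.0 / (1.0 + np.exp(-F @ theta))
    theta -= 0.1 * F.T @ (p - y) / len(y)        # logistic-loss gradient step

# At test time, structure prediction is one forward pass, no search.
X_new, _ = simulate_scm()
p_new = 1.0 / (1.0 + np.exp(-pair_features(X_new) @ theta))
print(p_new.round(2))

Because these symmetric correlation features cannot distinguish i -> j from j -> i, the toy can at best detect adjacency; the paper's variational inference model instead learns its representation of the data end to end.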

References

Showing 1-10 of 91 references
Differentiable Causal Discovery from Interventional Data
TLDR: This work proposes a neural network-based method for causal discovery that can leverage interventional data, and illustrates the flexibility of the continuous-constrained framework by taking advantage of expressive neural architectures such as normalizing flows.
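For context, a minimal sketch of the smooth acyclicity constraint such continuous-constrained methods build on (the NOTEARS-style penalty h(W) = tr(exp(W ∘ W)) − d; the function name is mine):

import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """h(W) = tr(exp(W ∘ W)) - d: nonnegative, and zero iff W encodes a DAG."""
    return np.trace(expm(W * W)) - W.shape[0]

W_dag = np.array([[0.0, 1.2], [0.0, 0.0]])  # single edge 0 -> 1: a DAG
W_cyc = np.array([[0.0, 1.2], [0.7, 0.0]])  # 0 <-> 1: a two-cycle

print(acyclicity(W_dag))  # ~0.0
print(acyclicity(W_cyc))  # > 0: the cycle is penalized

Because h is differentiable, it can be driven to zero with an augmented Lagrangian while the data-fit term is optimized by gradient descent.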
Permutation-based Causal Inference Algorithms with Interventions
TLDR: These algorithms are interventional adaptations of the Greedy SP algorithm and the first with consistency guarantees to use both observational and interventional data; they are also nonparametric, which makes them useful for analyzing non-Gaussian data.
The max-min hill-climbing Bayesian network structure learning algorithm
TLDR: The first empirical results simultaneously comparing most of the major Bayesian network structure learning algorithms against each other are presented, namely PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search.
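To make score-based search concrete, here is a bare-bones greedy hill climber over DAGs with a linear-Gaussian BIC score (a generic sketch, not MMHC's constraint-filtered variant; all function names are illustrative):

import numpy as np

def node_bic(X, j, parents):
    """BIC contribution of node j given a candidate parent set (linear-Gaussian)."""
    n = len(X)
    Z = np.column_stack([np.ones(n)] + [X[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    rss = np.sum((X[:, j] - Z @ beta) ** 2)
    return n * np.log(rss / n) + (len(parents) + 1) * np.log(n)

def bic(X, A):
    return sum(node_bic(X, j, list(np.flatnonzero(A[:, j]))) for j in range(A.shape[0]))

def is_dag(A):
    """A 0/1 adjacency matrix is acyclic iff every power of A has zero trace."""
    P, M = np.eye(len(A)), A.astype(float)
    for _ in range(len(A)):
        P = P @ M
        if np.trace(P) > 0:
            return False
    return True

def hill_climb(X):
    """Greedily apply the best single-edge addition/removal/reversal until stuck."""
    d = X.shape[1]
    A = np.zeros((d, d), dtype=int)
    best = bic(X, A)
    while True:
        best_move = None
        for i in range(d):
            for j in range(d):
                if i == j:
                    continue
                candidates = []
                if A[i, j]:
                    B = A.copy(); B[i, j] = 0; candidates.append(B)               # remove
                    B = A.copy(); B[i, j] = 0; B[j, i] = 1; candidates.append(B)  # reverse
                elif not A[j, i]:
                    B = A.copy(); B[i, j] = 1; candidates.append(B)               # add
                for B in candidates:
                    if is_dag(B):
                        s = bic(X, B)
                        if s < best:
                            best, best_move = s, B
        if best_move is None:
            return A
        A = best_move

# Demo: data from the chain 0 -> 1 -> 2; the search returns a DAG in the
# chain's Markov equivalence class (BIC cannot distinguish within the class).
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 1.5 * x0 + rng.normal(size=500)
x2 = -2.0 * x1 + rng.normal(size=500)
print(hill_climb(np.column_stack([x0, x1, x2])))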
DAG-GNN: DAG Structure Learning with Graph Neural Networks
TLDR: A deep generative model with a variant of the structural acyclicity constraint is proposed; it learns more accurate graphs for nonlinearly generated samples, and on benchmark data sets with discrete variables the learned graphs are reasonably close to the global optima.
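The "variant of the structural constraint" is, as I read it, a polynomial acyclicity penalty that avoids the matrix exponential: h(A) = tr[(I + αA ∘ A)^d] − d, zero iff A is a DAG. A minimal sketch (naming mine):

import numpy as np

def acyclicity_poly(A, alpha=None):
    """tr[(I + alpha * A∘A)^d] - d: zero iff the adjacency matrix A is acyclic."""
    d = A.shape[0]
    alpha = 1.0 / d if alpha is None else alpha
    M = np.eye(d) + alpha * (A * A)
    return np.trace(np.linalg.matrix_power(M, d)) - d

A_chain = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [0, 0, 0]], dtype=float)     # chain 0 -> 1 -> 2
print(acyclicity_poly(A_chain))                  # ~0.0
print(acyclicity_poly(A_chain + A_chain.T))      # > 0 once back-edges create cycles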
GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods
TLDR: A novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods, available to the community as the open-source software GNW; its network motif analysis reveals systematic prediction errors, thereby indicating potential ways of improving inference methods.
Generating Realistic In Silico Gene Networks for Performance Assessment of Reverse Engineering Methods
TLDR: A method for generating biologically plausible in silico networks that allows realistic performance assessment of network inference algorithms, in contrast to random graph models, which are known to capture the structural properties of biological networks only partly.
Being Bayesian About Network Structure. A Bayesian Approach to Structure Discovery in Bayesian Networks
TLDR: This paper shows how to efficiently compute a sum over the exponential number of networks consistent with a fixed order over network variables, and uses this result as the basis for an algorithm that approximates the Bayesian posterior of a feature.
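As a sketch of why a fixed order helps (my toy rendering of the idea, with exp(−BIC/2) standing in for a proper marginal likelihood): given an order, the sum over all consistent networks factorizes into independent per-node sums over parent sets, so edge posteriors reduce to node-local ratios.

import itertools
import numpy as np

def local_score(X, j, parents):
    """exp(-BIC/2) for node j with the given parents (linear-Gaussian)."""
    n = len(X)
    Z = np.column_stack([np.ones(n)] + [X[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    rss = np.sum((X[:, j] - Z @ beta) ** 2)
    bic = n * np.log(rss / n) + (len(parents) + 1) * np.log(n)
    return np.exp(-bic / 2)

def edge_posterior(X, order, i, j):
    """P(i -> j | order, X): a ratio of sums over j's candidate parent sets."""
    pred = order[: order.index(j)]          # predecessors of j in the order
    total = with_edge = 0.0
    for k in range(len(pred) + 1):
        for S in itertools.combinations(pred, k):
            s = local_score(X, j, list(S))
            total += s
            if i in S:
                with_edge += s
    return with_edge / total

rng = np.random.default_rng(0)
x0 = rng.normal(size=300)
x1 = 1.5 * x0 + rng.normal(size=300)
x2 = -2.0 * x1 + rng.normal(size=300)
X = np.column_stack([x0, x1, x2])
print(edge_posterior(X, [0, 1, 2], 0, 1))  # high: 0 -> 1 is supported
print(edge_posterior(X, [0, 1, 2], 0, 2))  # low: 0 affects 2 only via 1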
A Linear Non-Gaussian Acyclic Model for Causal Discovery
TLDR: This work shows how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances.
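A toy illustration of the identifiability principle (a crude residual-independence check of my own, not the ICA-based LiNGAM procedure): in the causal direction the regression residual is independent of the regressor, while in the anti-causal direction it is not, and non-Gaussianity makes the difference detectable.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-1, 1, n)            # non-Gaussian cause
y = 2.0 * x + rng.uniform(-1, 1, n)  # linear mechanism, non-Gaussian noise

def dependence_score(cause, effect):
    """|corr(cause^3, residual)| after OLS-regressing effect on cause."""
    S = np.cov(cause, effect)
    resid = effect - (S[0, 1] / S[0, 0]) * cause
    return abs(np.corrcoef(cause ** 3, resid)[0, 1])

print("x -> y:", dependence_score(x, y))  # ~0: residual independent of x
print("y -> x:", dependence_score(y, x))  # clearly larger: wrong direction

With Gaussian disturbances both scores would vanish, since uncorrelated jointly Gaussian variables are independent; that is exactly why assumption (c) is needed.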
Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs
TLDR: This paper gives a graph-theoretic criterion for two DAGs being Markov equivalent under interventions and shows that each interventional Markov equivalence class can be uniquely represented by a chain graph called the interventional essential graph (the analogue of the CPDAG in the observational case).
DiBS: Differentiable Bayesian Structure Learning
TLDR: A general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation and is directly applicable to posterior inference in nonstandard Bayesian network models, e.g., with nonlinear dependencies encoded by neural networks.
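A minimal sketch of what such a latent probabilistic graph representation can look like (my simplified reading: per-node latent vectors whose inner products give edge probabilities; the soft acyclicity prior DiBS also uses is omitted):

import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 8                                # nodes, latent dimension
U, V = rng.normal(size=(d, k)), rng.normal(size=(d, k))

P = 1.0 / (1.0 + np.exp(-(U @ V.T)))       # P[i, j] = p(edge i -> j | latents)
np.fill_diagonal(P, 0.0)                   # no self-loops
G = (rng.random((d, d)) < P).astype(int)   # one graph sampled from the latents
print(P.round(2))
print(G)

Inference then runs over the continuous latents U, V (e.g., with Stein variational gradient descent) instead of over discrete graphs.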
...