# Causal Entropy Optimization

```bibtex
@article{Branchini2022CausalEO,
  title   = {Causal Entropy Optimization},
  author  = {Nicola Branchini and Virginia Aglietti and Neil Dhir and Theodoros Damoulas},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2208.10981}
}
```

We study the problem of globally optimizing the causal effect on a target variable in an unknown causal graph in which interventions can be performed. This problem arises in many areas of science, including biology, operations research, and healthcare. We propose Causal Entropy Optimization (CEO), a framework which generalizes Causal Bayesian Optimization (CBO) [2] to account for all sources of uncertainty, including the one arising from the causal graph structure. CEO incorporates the causal…
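The abstract describes an acquisition that trades off optimizing the causal effect against reducing uncertainty over the graph structure. A minimal sketch of this idea, assuming a discrete posterior over candidate graphs (the function and variable names here are hypothetical illustrations, not the authors' implementation):

```python
import math

def graph_entropy(posterior):
    """Shannon entropy (nats) of a posterior over candidate causal graphs."""
    return -sum(p * math.log(p) for p in posterior.values() if p > 0)

def ceo_score(posterior, expected_improvement, info_gain, beta=1.0):
    """Hypothetical CEO-style acquisition: expected improvement averaged
    under the graph posterior, plus a bonus term for the expected
    information gained about the graph structure."""
    ei = sum(posterior[g] * expected_improvement[g] for g in posterior)
    return ei + beta * info_gain

# Toy example: two candidate graphs with different intervention payoffs.
posterior = {"X->Y": 0.7, "Y->X": 0.3}
ei = {"X->Y": 1.2, "Y->X": 0.4}
score = ceo_score(posterior, ei, info_gain=graph_entropy(posterior))
```

With these toy numbers, an intervention is scored by both its averaged payoff (0.96) and how much structural uncertainty (≈0.61 nats) it could resolve.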

## One Citation

### Model-based Causal Bayesian Optimization

- Computer Science, ArXiv
- 2022

Model-based causal Bayesian optimization (MCBO) is proposed, an algorithm that learns a full system model instead of only modeling intervention–reward pairs; its cumulative regret is bounded, yielding the first non-asymptotic bounds for CBO.

## References

Showing 1–10 of 70 references

### Entropic Causal Inference: Graph Identifiability

- Computer Science, ICML
- 2022

This work first extends the causal graph identifiability result in the two-variable setting under relaxed assumptions, and shows the first identifiable result using the entropic approach for learning causal graphs with more than two nodes.

### Actively Identifying Causal Effects with Latent Variables Given Only Response Variable Observable

- Economics, NeurIPS
- 2021

In many real tasks, it is generally desirable to study the causal effect on a specific target (response variable) only, with no need to identify all causal effects involving every variable. In…

### Differentiable Causal Discovery Under Latent Interventions

- Computer Science, CLeaR
- 2022

This work proposes a method based on neural networks and variational inference that addresses a scenario with an extensive dataset sampled from multiple intervention distributions and one observation distribution, but where the interventions are entirely latent.

### Dynamic Causal Bayesian Optimization

- Computer Science, NeurIPS
- 2021

This paper gives theoretical results detailing how one can transfer interventional information across time steps and defines a dynamic causal GP model which can be used to quantify uncertainty and find optimal interventions in practice.

### Causal Bandits with Unknown Graph Structure

- Computer Science, NeurIPS
- 2021

The regret guarantees of the proposed algorithms greatly improve upon those of standard multi-armed bandit (MAB) algorithms under mild conditions, and these conditions are proved to be necessary: without them, one cannot do better than standard MAB algorithms.

### Bayesian Optimal Experimental Design for Inferring Causal Structure

- Computer Science
- 2021

A novel Bayesian method is proposed that can be implemented by computing simple summaries of the current posterior, avoiding the computationally burdensome task of repeatedly performing posterior inference on hypothetical future datasets drawn from the posterior predictive.

### Bayesian Model Averaging for Causality Estimation and its Approximation based on Gaussian Scale Mixture Distributions

- Computer Science, Economics, AISTATS
- 2021

This paper shows, from a Bayesian perspective, that it is Bayes optimal to weight (average) the causal effects estimated under each candidate model rather than estimating the causal effect under a single fixed model, and develops an approximation to the Bayes optimal estimator using Gaussian scale mixture distributions.

### Estimating Conditional Mutual Information for Discrete-Continuous Mixtures using Multi-Dimensional Adaptive Histograms

- Computer Science, SDM
- 2021

CMI for such mixture variables, defined via the Radon–Nikodym derivative, can be written as a sum of entropies, just like CMI for purely discrete or continuous data, by learning an adaptive histogram model.
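The decomposition referred to here is the standard identity I(X;Y|Z) = H(X,Z) + H(Y,Z) − H(Z) − H(X,Y,Z). A minimal plug-in sketch for already-discretized samples (the cited paper's contribution is the adaptive histogram binning for mixed discrete-continuous data, which is not reproduced here):

```python
from collections import Counter
from math import log

def entropy(samples):
    """Plug-in Shannon entropy (nats) from a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log(c / n) for c in Counter(samples).values())

def cmi(x, y, z):
    """Conditional mutual information I(X;Y|Z) as a sum of entropies:
    H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z). Assumes discrete samples;
    mixed data would first be binned by an adaptive histogram."""
    xz = list(zip(x, z))
    yz = list(zip(y, z))
    xyz = list(zip(x, y, z))
    return entropy(xz) + entropy(yz) - entropy(z) - entropy(xyz)

# Toy check: X independent of Y given constant Z, so CMI should be 0.
x = [0, 0, 1, 1]
y = [0, 1, 0, 1]
z = [0, 0, 0, 0]
val = cmi(x, y, z)
```

For this toy data, H(X,Z) = H(Y,Z) = ln 2, H(Z) = 0, and H(X,Y,Z) = 2 ln 2, so the entropies cancel exactly.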

### Applications of Common Entropy for Causal Inference

- Computer Science, NeurIPS
- 2020

The problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent is studied, and an iterative algorithm is proposed to discover the trade-off between the entropy of the latent variable and the conditional mutual information of the observed variables.

### Multi-task Causal Learning with Gaussian Processes

- Computer Science, NeurIPS
- 2020

This paper proposes the first multi-task causal Gaussian process (GP) model, called DAG-GP, which allows information sharing across continuous interventions and across experiments on different variables, and achieves the best fitting performance in a variety of real and synthetic settings.