Corpus ID: 232417072

Compositional Abstraction Error and a Category of Causal Models

@inproceedings{Rischel2021CompositionalAE,
  title={Compositional Abstraction Error and a Category of Causal Models},
  author={Eigil Fjeldgren Rischel and Sebastian Weichwald},
  booktitle={UAI},
  year={2021}
}
Interventional causal models describe joint distributions over some variables used to describe a system, one for each intervention setting. They provide a formal recipe for how to move between joint distributions and make predictions about the variables upon intervening on the system. Yet, it is difficult to formalise how we may change the underlying variables used to describe the system, say from fine-grained to coarse-grained variables. Here, we argue that compositionality is a desideratum… 
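The abstract's notion of one joint distribution per intervention setting can be illustrated with a minimal structural causal model. This sketch is illustrative only and is not the paper's categorical formalism; the two-variable model, its mechanism, and all names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, do_x=None):
    """Draw n samples from a toy SCM X -> Y.

    With do_x=None this is the observational joint over (X, Y);
    passing do_x gives the interventional joint under do(X = do_x),
    which replaces X's mechanism by a constant.
    """
    x = rng.normal(0.0, 1.0, n) if do_x is None else np.full(n, float(do_x))
    y = 2.0 * x + rng.normal(0.0, 0.1, n)  # mechanism Y := 2X + noise
    return x, y

# Each intervention setting induces its own joint distribution:
x_obs, y_obs = sample(10_000)            # observational
x_do, y_do = sample(10_000, do_x=1.0)    # under do(X = 1)
```

Coarse-graining such a model (say, replacing X and Y by a single aggregate variable) is exactly the change of description whose error the paper aims to quantify compositionally.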

References

SHOWING 1-10 OF 49 REFERENCES
Approximate Causal Abstraction
TLDR
This work provides an account of how one causal model approximates another, a topic of independent interest, and shows how the resulting account handles the discrepancy that can arise between low- and high-level causal models of the same system.
A new metric for probability distributions
We introduce a metric for probability distributions, which is bounded, information-theoretically motivated, and has a natural Bayesian interpretation. The square root of the well-known χ²…
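The truncated abstract describes a bounded, information-theoretically motivated metric on probability distributions. A well-known example of such a metric (illustrative here, not necessarily this paper's exact construction) is the square root of the Jensen–Shannon divergence:

```python
import numpy as np

def js_metric(p, q, eps=1e-12):
    """Square root of the Jensen-Shannon divergence (base-2 logs).

    This is a bounded metric on discrete probability distributions,
    taking values in [0, 1]; eps guards against log(0).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        a = np.clip(a, eps, None)
        b = np.clip(b, eps, None)
        return np.sum(a * np.log2(a / b))

    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

d_same = js_metric([0.5, 0.5], [0.5, 0.5])  # identical distributions -> 0
d_max = js_metric([1.0, 0.0], [0.0, 1.0])   # disjoint supports -> 1
```

SciPy's `scipy.spatial.distance.jensenshannon` computes the same quantity (natural logs by default, with a `base` parameter).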
Abstracting Causal Models
TLDR
It is shown that procedures for combining micro-variables into macro-variables are instances of the notion of strong abstraction, as are all the examples considered by Rubenstein et al. (2017); strong abstraction takes more seriously all potential interventions in a model, not just the allowed interventions.
The algebra and machine representation of statistical models
TLDR
This dissertation takes steps toward digitizing and systematizing two major artifacts of data science, statistical models and data analyses, by designing and implementing a software system for creating machine representations of data analyses in the form of Python or R programs.
Causal Feature Learning for Utility-Maximizing Agents
TLDR
A new technique, pragmatic causal feature learning (PCFL), is proposed, which extends the original CFL algorithm in useful and intuitive ways and has the same attractive measure-theoretic properties as the original CFL algorithm.
Pragmatism and Variable Transformations in Causal Modelling
Categories for the Working Mathematician
I. Categories, Functors and Natural Transformations. 1. Axioms for Categories. 2. Categories. 3. Functors. 4. Natural Transformations. 5. Monics, Epis, and Zeros. 6. Foundations. 7. Large…
Estimating Functions of Distributions from A Finite Set of Samples, Part 2: Bayes Estimators for Mutual Information, Chi-Squared, Covariance and other Statistics
TLDR
Finite sample estimators for entropy and other functions of a discrete probability distribution when the data is a finite sample drawn from that probability distribution are presented.
A Probability Monad as the Colimit of Spaces of Finite Samples
We define and study a probability monad on the category of complete metric spaces and short maps. It assigns to each space the space of Radon probability measures on it with finite first moment…
...