
Inferring deterministic causal relations

@inproceedings{Daniusis2010InferringDC,
  title={Inferring deterministic causal relations},
  author={Povilas Daniusis and Dominik Janzing and Joris M. Mooij and Jakob Zscheischler and Bastian Steudel and Kun Zhang and Bernhard Sch{\"o}lkopf},
  booktitle={UAI},
  year={2010}
}
We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function.
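To make the postulate concrete, below is a minimal sketch of a slope-based score in the spirit of this method, assuming a uniform reference measure and both variables rescaled to [0, 1]; the direction with the smaller mean log-slope is taken to be causal. The function names and the toy mechanism (a Beta-distributed cause pushed through the invertible map x³) are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def igci_slope_score(x, y):
    """Slope-based score: mean log-slope of the empirical map from x to y,
    after rescaling both variables to [0, 1] (uniform reference measure)."""
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    order = np.argsort(x)
    x, y = x[order], y[order]
    dx, dy = np.diff(x), np.diff(y)
    keep = (dx > 0) & (dy != 0)  # skip ties to avoid division by zero / log(0)
    return np.mean(np.log(np.abs(dy[keep] / dx[keep])))

def infer_direction(x, y):
    # Under the independence postulate the score is smaller in the causal direction.
    return "X -> Y" if igci_slope_score(x, y) < igci_slope_score(y, x) else "Y -> X"

# Demo: a non-uniform cause density pushed through an invertible function.
rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=1000)
y = x ** 3
print(infer_direction(x, y))  # expected: X -> Y
```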

Citations

Testing whether linear equations are causal: A free probability theory approach
TLDR
This work proposes a method that infers whether linear relations between two high-dimensional variables X and Y are due to a causal influence from X to Y or from Y to X. It describes a statistical test and argues that both causal directions are typically rejected if there is a common cause.
The Randomized Causation Coefficient
TLDR
This short paper proposes to learn how to perform causal inference directly from data, without the need for feature engineering, and poses causality as a kernel mean embedding classification problem, where inputs are samples from arbitrary probability distributions on pairs of random variables and labels are types of causal relationships.
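The following is a minimal sketch of that classification view, assuming random Fourier features as the kernel mean embedding approximation and a toy single-mechanism training corpus; the actual method trains on many heterogeneous synthetic cause-effect pairs, and every name and parameter below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
D = 100                                # number of random Fourier features
W = rng.normal(size=(2, D))            # random frequencies
b = rng.uniform(0, 2 * np.pi, size=D)  # random phases

def featurize(x, y):
    """Approximate kernel mean embedding of the sample {(x_i, y_i)} using
    random Fourier features; the pair is z-scored first."""
    z = np.column_stack([x, y])
    z = (z - z.mean(axis=0)) / z.std(axis=0)
    return (np.sqrt(2.0 / D) * np.cos(z @ W + b)).mean(axis=0)

def sample_pair(n=500):
    """Toy cause-effect mechanism; a real corpus would mix many mechanisms."""
    cause = rng.normal(size=n)
    effect = np.tanh(cause) + 0.1 * rng.normal(size=n)
    return cause, effect

# Build a labeled training set: label 1 = "first argument causes second".
X_train, y_train = [], []
for _ in range(200):
    c, e = sample_pair()
    X_train.append(featurize(c, e)); y_train.append(1)
    X_train.append(featurize(e, c)); y_train.append(0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

c, e = sample_pair()
print("P(X causes Y) =", clf.predict_proba([featurize(c, e)])[0, 1])
```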
A Bayesian Model for Bivariate Causal Inference
TLDR
Bayesian Causal Inference (BCI), a novel inference method that assumes a generative Bayesian hierarchical model and pursues the strategy of Bayesian model selection, performs reliably on synthetic data as well as on the real-world TCEP benchmark set, with an accuracy comparable to state-of-the-art algorithms.
Inferring Causal Direction from Relational Data
TLDR
This work examines the task of inferring the causal direction of peer dependence in relational data, demonstrates the efficacy of the proposed methods with synthetic experiments, and provides a theoretical characterization of when the direction is identifiable.
Learning Causal Structures Using Regression Invariance
TLDR
A notion of completeness for a causal inference algorithm in this setting is defined, and an alternative algorithm is presented that has significantly improved computational and sample complexity compared to the baseline algorithm.
Telling cause from effect in deterministic linear dynamical systems
TLDR
This work proposes a new approach based on the hypothesis that nature chooses the "cause" and the "mechanism generating the effect from the cause" independently of each other, and describes mathematical assumptions in a deterministic model under which the causal direction is identifiable.
Analysis of Cause-Effect Inference via Regression Errors
TLDR
This work addresses the problem of inferring the causal relation between two variables by comparing the least-squares errors of the predictions in both possible causal directions, and provides an easily applicable algorithm.
Learning Independent Causal Mechanisms
TLDR
This work develops an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points, based on a set of experts that compete for data generated by the mechanisms, driving specialization.
Cause-Effect Inference by Comparing Regression Errors
TLDR
This work addresses the problem of inferring the causal relation between two variables by comparing the least-squares errors of the predictions in both possible causal directions, and provides an easily applicable method that only requires a regression in each direction.
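Since both of the regression-error papers above share the same recipe, here is one minimal sketch of it, assuming a k-nearest-neighbor regressor, rescaling to [0, 1], and in-sample errors; the names and the toy data are illustrative, not the papers' reference implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def regression_mse(a, b, k=10):
    """Least-squares error of predicting b from a, after rescaling both
    variables to [0, 1]; any flexible regressor could replace k-NN here."""
    a = (a - a.min()) / (a.max() - a.min())
    b = (b - b.min()) / (b.max() - b.min())
    model = KNeighborsRegressor(n_neighbors=k).fit(a.reshape(-1, 1), b)
    return np.mean((b - model.predict(a.reshape(-1, 1))) ** 2)

def infer_direction(x, y):
    # Under the papers' assumptions the error is smaller in the causal direction.
    return "X -> Y" if regression_mse(x, y) < regression_mse(y, x) else "Y -> X"

rng = np.random.default_rng(0)
x = rng.uniform(size=2000)
y = np.sin(2 * x) + 0.05 * rng.normal(size=2000)
print(infer_direction(x, y))  # expected: X -> Y
```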
Inference of Cause and Effect with Unsupervised Inverse Regression
TLDR
This work addresses the problem of causal discovery in the two-variable case, given a sample from the joint distribution, and proposes an implicit notion of independence: pY|X cannot be estimated based on pX (lower case denotes density), whereas it may be possible to estimate pX|Y based on the density of the effect, pY.

References

Distinguishing between cause and effect
  • J. Mooij, D. Janzing
  • NIPS Causality: Objectives and Assessment
  • 2010
TLDR
Eight data sets that together formed the CauseEffectPairs task in the Causality Challenge #2: Pot-Luck competition are described, and baseline results using three different causal inference methods are presented.
Telling cause from effect based on high-dimensional observations
TLDR
The method applies to both stochastic and deterministic causal relations, provided that the dimensionality is sufficiently high (in some experiments, 5 was enough).
Nonlinear causal discovery with additive noise models
TLDR
It is shown that the basic linear framework can be generalized to nonlinear models and that, in this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified.
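As a minimal sketch of this additive-noise approach: regress each variable on the other with a flexible regressor and test whether the residuals are independent of the input, preferring the direction where they are. The Gaussian-process regressor and biased HSIC statistic below are common choices but are assumptions here, and all names are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def hsic(a, b):
    """Biased HSIC dependence statistic with Gaussian kernels
    (median-heuristic bandwidths); larger means more dependent."""
    def gram(v):
        d2 = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d2 / np.median(d2[d2 > 0]))
    n = len(a)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gram(a) @ H @ gram(b) @ H) / (n - 1) ** 2

def anm_score(x, y):
    """Regress y on x nonparametrically, then measure how dependent the
    residuals still are on the input (low = plausible causal direction)."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(x.reshape(-1, 1), y)
    return hsic(x, y - gp.predict(x.reshape(-1, 1)))

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=300)
y = x ** 3 + x + rng.normal(scale=0.5, size=300)
direction = "X -> Y" if anm_score(x, y) < anm_score(y, x) else "Y -> X"
print(direction)  # expected: X -> Y
```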
Causal Inference Using the Algorithmic Markov Condition
TLDR
This work explains why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that also takes into account the complexity of conditional probability densities, making it possible to select among Markov-equivalent causal graphs.
Causation, prediction, and search
What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment?
Causal Models as Minimal Descriptions of Multivariate Systems
By applying the minimality principle for model selection, one should seek the model that describes the data by a code of minimal length. Learning is viewed as data compression that exploits the…
On the Identifiability of the Post-Nonlinear Causal Model
TLDR
It is shown that this post-nonlinear causal model is identifiable in most cases; by enumerating all situations in which the model is not identifiable, sufficient conditions for its identifiability are obtained.
Estimating mutual information.
TLDR
Two classes of improved estimators for the mutual information M(X,Y) from samples of random points distributed according to some joint probability density μ(x,y) are presented, based on entropy estimates from k-nearest-neighbor distances.
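A compact rendering of the first of the two estimators (often called KSG), assuming SciPy's KD-tree utilities: the estimate is ψ(k) + ψ(N) − ⟨ψ(n_x + 1) + ψ(n_y + 1)⟩, where the neighbor counts n_x and n_y are taken within each point's k-th-neighbor Chebyshev radius in the joint sample. All names below are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=4):
    """KSG estimator (first variant): distance to the k-th neighbor in the
    joint space, then neighbor counts within that radius in each marginal."""
    n = len(x)
    joint = np.column_stack([x, y])
    # Chebyshev distance to the k-th neighbor (index 0 is the point itself).
    eps = cKDTree(joint).query(joint, k + 1, p=np.inf)[0][:, -1]

    def count_within(v):
        tree = cKDTree(v[:, None])
        # strictly-less-than radius; subtract 1 for the point itself
        return tree.query_ball_point(v[:, None], eps - 1e-12, p=np.inf,
                                     return_length=True) - 1

    nx, ny = count_within(x), count_within(y)
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.8 * x + 0.6 * rng.normal(size=2000)  # correlation 0.8
print(ksg_mutual_information(x, y))        # theory: -0.5*ln(1 - 0.8**2) ≈ 0.51
```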
Information geometry on hierarchy of probability distributions
  • S. Amari
  • IEEE Trans. Inf. Theory
  • 2001
TLDR
An orthogonal decomposition of an exponential family or mixture family of probability distributions with a natural hierarchical structure is given; it is important for extracting intrinsic interactions in the firing patterns of an ensemble of neurons and for estimating its functional connections.