Corpus ID: 239009781

Causal Identification with Additive Noise Models: Quantifying the Effect of Noise

Benjamin Kap, Marharyta Aleksandrova, Thomas Engel
In recent years, substantial research has been conducted in the area of causal inference and causal learning. Many methods have been developed to identify cause-effect pairs in models and have been successfully applied to observational real-world data to determine the direction of causal relationships. Yet in bivariate situations, causal discovery remains challenging. One class of methods that can also tackle the bivariate case is based on Additive Noise Models (ANMs…

Nonlinear causal discovery with additive noise models
It is shown that the basic linear framework can be generalized to nonlinear models and that, in this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse: they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified.
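The direction test sketched in this line of work can be illustrated in a few lines of code: regress in both directions and prefer the direction whose residuals look more independent of the input. The sketch below is an illustration, not the authors' implementation; the polynomial regressor, the fixed Gaussian-kernel bandwidth for the HSIC-style score, and the toy cubic data are all assumptions made for the example.

```python
import numpy as np

def rbf_gram(v, bandwidth):
    """Gaussian-kernel Gram matrix for a 1-D sample."""
    d2 = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def hsic(a, b):
    """Biased HSIC estimate: close to zero when a and b are (roughly) independent."""
    n = len(a)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    K = rbf_gram(a, np.std(a) + 1e-8)    # bandwidth = sample std (a simple heuristic)
    L = rbf_gram(b, np.std(b) + 1e-8)
    return np.trace(K @ H @ L @ H) / n ** 2

def residuals(x, y, degree=3):
    """Regress y on x by polynomial least squares and return the residuals."""
    return y - np.polyval(np.polyfit(x, y, degree), x)

def anm_direction(x, y):
    """Prefer the direction whose residuals are more independent of the input."""
    forward = hsic(x, residuals(x, y))   # score for x -> y
    backward = hsic(y, residuals(y, x))  # score for y -> x
    return "x->y" if forward < backward else "y->x"

# Toy data with ground truth x -> y: y = x**3 + additive Gaussian noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 300)
y = x ** 3 + rng.normal(0.0, 0.5, 300)
print(anm_direction(x, y))
```

In the causal direction the residuals are just the additive noise and carry no information about the input, while the backward regression leaves structured, input-dependent residuals, which is what makes the asymmetry detectable. Published methods replace the polynomial fit with a nonparametric regressor and use a calibrated independence test (e.g., HSIC with a permutation threshold).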
Regression by dependence minimization and its application to causal inference in additive noise models
This work proposes a novel method for regression that minimizes the statistical dependence between regressors and residuals, and proposes an algorithm for efficiently inferring causal models from observational data for more than two variables.
Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks
Empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions.
Causal discovery with continuous additive noise models
If the observational distribution follows a structural equation model with an additive noise structure, the directed acyclic graph becomes identifiable from the distribution under mild conditions, which constitutes an interesting alternative to traditional methods that assume faithfulness and identify only the Markov equivalence class of the graph, thus leaving some edges undirected.
On Causal Discovery with Cyclic Additive Noise Models
It is proved that the causal graph of cyclic causal models is generically identifiable in the bivariate, Gaussian-noise case and a method to learn such models from observational data is proposed.
Inference of Cause and Effect with Unsupervised Inverse Regression
This work addresses the problem of causal discovery in the two-variable case, given a sample from the joint distribution, and proposes an implicit notion of independence: namely, that p(Y|X) cannot be estimated based on p(X) (lower case p denotes a density); however, it may be possible to estimate p(Y|X) based on the density of the effect, p(Y).
A Linear Non-Gaussian Acyclic Model for Causal Discovery
This work shows how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances.
Lingam: Non-Gaussian Methods for Estimating Causal Structures
This paper provides an overview of recent developments in causal inference, and focuses in particular on the non-Gaussian methods known as LiNGAM.
On the Identifiability of the Post-Nonlinear Causal Model
It is shown that this post-nonlinear causal model is identifiable in most cases; by enumerating all situations in which the model is not identifiable, sufficient conditions for its identifiability are obtained.
Causal Inference Using Nonnormality
Path analysis, often applied to observational data to study causal structures, describes causal relationships between observed variables. Path analysis is of a confirmatory nature and can make…