Nonlinear causal discovery with additive noise models
TLDR
It is shown that the basic linear framework can be generalized to nonlinear models and, in this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified.
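As a rough illustration of the idea (a minimal sketch, not the authors' method or code; the Gaussian-process regressor, the mutual-information independence score, and all function names below are assumptions standing in for the nonparametric regression and HSIC-style independence test used in practice): fit each direction with an additive-noise regression and prefer the direction whose residuals look independent of the input.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.gaussian_process import GaussianProcessRegressor

def residual_dependence(cause, effect):
    """Fit effect = f(cause) + noise and estimate how much the residuals
    still depend on the cause (lower = closer to independence)."""
    gp = GaussianProcessRegressor(alpha=0.1).fit(cause.reshape(-1, 1), effect)
    resid = effect - gp.predict(cause.reshape(-1, 1))
    # Mutual information as a crude stand-in for an HSIC-style independence test.
    return mutual_info_regression(cause.reshape(-1, 1), resid, random_state=0)[0]

def anm_direction(x, y):
    """Prefer the direction whose additive-noise fit leaves residuals independent of the input."""
    return "x -> y" if residual_dependence(x, y) < residual_dependence(y, x) else "y -> x"

# Toy example: y is a nonlinear function of x plus independent noise.
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.tanh(2.0 * x) + 0.2 * rng.normal(size=300)
print(anm_direction(x, y))  # should typically print "x -> y"
```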
Causal inference by using invariant prediction: identification and confidence intervals
TLDR
This work proposes to exploit invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g. various interventions), the authors collect all models that show invariance in their predictive accuracy across settings and interventions; this yields valid confidence intervals for the causal relationships in quite general scenarios.
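A rough sketch of the invariance idea (a simplified illustration, not the paper's exact procedure; the linear models, the Levene variance test, and the function names are assumptions): for each candidate set of predictors, fit a pooled regression and test whether the residuals behave the same in every environment; the sets that pass are kept, and the intersection of all accepted sets is a conservative estimate of the causal parents.

```python
import itertools
import numpy as np
from scipy.stats import levene
from sklearn.linear_model import LinearRegression

def invariant_sets(X, y, env, alpha=0.05):
    """Return all predictor subsets whose regression residuals look
    invariant (here: equal variance) across environments."""
    accepted = []
    p = X.shape[1]
    for k in range(p + 1):
        for S in itertools.combinations(range(p), k):
            cols = list(S)
            if cols:
                resid = y - LinearRegression().fit(X[:, cols], y).predict(X[:, cols])
            else:
                resid = y - y.mean()
            groups = [resid[env == e] for e in np.unique(env)]
            # Crude invariance check: residual variance equal across environments.
            if levene(*groups).pvalue > alpha:
                accepted.append(S)
    return accepted

# The intersection of all accepted subsets gives a conservative estimate of
# the parents of y; if no subset is accepted, the model class is rejected.
```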
Counterfactual reasoning and learning systems: the example of computational advertising
TLDR
This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and to predict the consequences of changes to the system.
Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks
TLDR
Empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions.
Kernel-based Conditional Independence Test and Application in Causal Discovery
TLDR
A Kernel-based Conditional Independence test (KCI-test) is proposed, by constructing an appropriate test statistic and deriving its asymptotic distribution under the null hypothesis of conditional independence.
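The core computation can be sketched in a few lines (a simplified reading, not the authors' reference implementation; the Gaussian kernel width, the ridge parameter eps, and the omission of the null-distribution approximation, normally obtained via a gamma fit or by simulating from the kernel spectrum, are all assumptions here): kernel matrices for (X, Z), Y, and Z are centered, the effect of Z is regressed out in the RKHS, and the statistic is the trace of the product of the resulting matrices.

```python
import numpy as np

def gaussian_kernel(a, width=1.0):
    """Gaussian (RBF) kernel matrix for a 1-D or 2-D data array."""
    a = np.asarray(a, dtype=float)
    if a.ndim == 1:
        a = a[:, None]
    sq_dists = np.sum((a[:, None, :] - a[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * width ** 2))

def kci_statistic(x, y, z, width=1.0, eps=1e-3):
    """Trace statistic of a KCI-style test of x _||_ y given z
    (approximation of the null distribution omitted)."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
    Kx = H @ gaussian_kernel(np.column_stack([x, z]), width) @ H
    Ky = H @ gaussian_kernel(y, width) @ H
    Kz = H @ gaussian_kernel(z, width) @ H
    Rz = eps * np.linalg.inv(Kz + eps * np.eye(n))         # residualize w.r.t. z in the RKHS
    KxZ = Rz @ Kx @ Rz
    KyZ = Rz @ Ky @ Rz
    return float(np.trace(KxZ @ KyZ) / n)
```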
Causal discovery with continuous additive noise models
TLDR
If the observational distribution follows a structural equation model with an additive noise structure, the directed acyclic graph becomes identifiable from the distribution under mild conditions. This constitutes an interesting alternative to traditional methods that assume faithfulness and identify only the Markov equivalence class of the graph, thus leaving some edges undirected.
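In commonly used notation (my paraphrase, not a quotation from the paper), the model class in question is:

```latex
% Continuous additive noise model over a DAG G with variables X_1, ..., X_p:
X_j \;=\; f_j\!\bigl(X_{\mathrm{pa}(j)}\bigr) \;+\; N_j ,
\qquad j = 1, \dots, p ,
\qquad N_1, \dots, N_p \ \text{jointly independent},
```

where pa(j) denotes the parents of X_j in G and the f_j may be nonlinear. The "mild conditions" exclude degenerate cases such as linear f_j with Gaussian noise, for which only the Markov equivalence class is identifiable.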
CAM: Causal Additive Models, high-dimensional order search and penalized regression
TLDR
This work substantially simplifies the problem of structure search and estimation for an important class of causal models by establishing consistency of the (restricted) maximum likelihood estimator for low- and high-dimensional scenarios, and by allowing for misspecification of the error distribution.
Identifiability of Gaussian structural equation models with equal error variances
TLDR
This work proves full identifiability in the case where all noise variables have the same variance: the directed acyclic graph can be recovered from the joint Gaussian distribution.
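Written out in a standard form (my notation, not the paper's), the model considered is a linear Gaussian structural equation model with homoscedastic errors:

```latex
% Linear Gaussian SEM with equal error variances:
X_j \;=\; \sum_{k \in \mathrm{pa}(j)} \beta_{jk}\, X_k \;+\; \varepsilon_j ,
\qquad \varepsilon_j \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2),
\qquad j = 1, \dots, p ,
```

and the result is that under this equal-variance restriction the full DAG, not just its Markov equivalence class, is determined by the joint distribution.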
On causal and anticausal learning
TLDR
The problem of function estimation in the case where an underlying causal model can be inferred is considered; a hypothesis for when semi-supervised learning can help is formulated and corroborated with empirical results.