
- Tom Claassen, Joris M. Mooij, Tom Heskes
- UAI
- 2013

This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs bounded by node degree k the sound and complete causal model can be obtained in worst-case order N^(2(k+2)) independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that…

- Tom Claassen, Tom Heskes
- UAI
- 2012

We target the problem of accuracy and robustness in causal inference from finite data sets. Our idea is to combine the inherent robustness of Bayesian approaches to causal structure discovery, such as GES, with the theoretical strength and clarity of constraint-based methods such as IC and PC/FCI. We obtain probability estimates on the input statements in a…

- Tom Claassen, Tom Heskes
- UAI
- 2011

We present a novel approach to constraint-based causal discovery that takes the form of straightforward logical inference, applied to a list of simple logical statements about causal relations that are derived directly from observed (in)dependencies. It is both sound and complete, in the sense that all invariant features of the corresponding partial…

- Elena Sokolova, Perry Groot, +5 authors Jan Buitelaar
- PLoS ONE
- 2016

BACKGROUND: Numerous factor analytic studies consistently support a distinction between two symptom domains of attention-deficit/hyperactivity disorder (ADHD), inattention and hyperactivity/impulsivity. Both dimensions show high internal consistency and moderate to strong correlations with each other. However, it is not clear what drives this strong…

- Tom Claassen, Tom Heskes
- NIPS
- 2010

A long-standing open research problem is how to use information from different experiments, including background knowledge, to infer causal relations. Recent developments have shown ways to use multiple data sets, provided they originate from identical experiments. We present the MCI-algorithm as the first method that can infer provably valid causal…

- Elena Sokolova, Perry Groot, Tom Claassen, Tom Heskes
- Probabilistic Graphical Models
- 2014

Bayesian Constraint-based Causal Discovery (BCCD) is a state-of-the-art method for robust causal discovery in the presence of latent variables. It combines probabilistic estimation of Bayesian networks over subsets of variables with a causal logic to infer causal statements. Currently BCCD is limited to discrete or Gaussian variables. Most of the real-world…

- Sara Magliacane, Tom Claassen, Joris M. Mooij
- NIPS
- 2016

Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions. Several approaches to improve the reliability of the predictions by exploiting redundancy in the independence information have been proposed recently. Though promising, existing approaches can still be greatly…

- Tom Claassen, Tom Heskes
- IJCAI
- 2013

We target the problem of accuracy and robustness in causal inference from finite data sets. Our aim is to combine the inherent robustness of the Bayesian approach with the theoretical strength and clarity of constraint-based methods. We use a Bayesian score to obtain probability estimates on the input statements used in a constraint-based procedure. These…

- Tom Claassen, Tom Heskes
- 2010

We tackle the problem of how to use information from multiple (in)dependence models, representing results from different experiments, including background knowledge, in causal discovery. We introduce the framework of a causal system in an external context to derive a connection between strict conditional independencies and causal relations between…

- Tom Claassen, Tom Heskes
- ESANN
- 2011

We present two inference rules, based on so-called minimal conditional independencies, that are sufficient to find all invariant arrowheads in a single causal DAG, even when selection bias may be present. It turns out that the set of seven graphical orientation rules that are usually employed to identify these arrowheads are, in fact, just different…