Corpus ID: 14003512

Flattening network data for causal discovery: What could go wrong?

Authors: Marc E. Maier, Katerina Marazopoulou, David T. Arbour, David Jensen
Methods for learning causal dependencies from observational data have been the focus of decades of work in social science, statistics, machine learning, and philosophy [9, 10, 11]. Much of the theoretical and practical work on causal discovery has focused on propositional representations. Propositional models effectively represent individual directed causal dependencies (e.g., path analysis, Bayesian networks) or conditional distributions of some outcome variable (e.g., linear regression… 


Non-Parametric Inference of Relational Dependence
A consistent, non-parametric, scalable kernel test is proposed to operationalize the relational independence test for non-i.i.d. observational data under a set of structural assumptions and is empirically evaluated on a variety of synthetic and semi-synthetic networks.
Causal Discovery for Relational Domains: Representation, Reasoning, and Learning
  • M. Maier
  • Computer Science, Psychology
  • 2014
This chapter discusses the role of language, representation, and reasoning in the development of knowledge in the context of international relations.


Learning Causal Models of Relational Domains
This paper presents an algorithm, relational PC, that learns causal dependencies in a state-of-the-art relational representation, and identifies the key representational and algorithmic innovations that make the algorithm possible.
Reasoning about Independence in Probabilistic Models of Relational Data
This work provides a new representation, the abstract ground graph, that enables a sound, complete, and computationally efficient method for answering d-separation queries about relational models, and presents empirical results that demonstrate effectiveness.
Learning Probabilistic Models of Link Structure
This paper proposes two mechanisms for representing a probabilistic distribution over link structures, reference uncertainty and existence uncertainty, describes the appropriate conditions for using each model, and presents learning algorithms for each.
A Sound and Complete Algorithm for Learning Causal Models from Relational Data
This work presents the relational causal discovery algorithm (RCD), a complete algorithm that learns causal relational models and proves that RCD is sound and complete, and presents empirical results that demonstrate effectiveness.
Propositionalization approaches to relational data mining
An extension to the LINUS propositionalization method that overcomes the system's earlier inability to deal with non-determinate local variables is described, and it is shown that in many relational data mining applications this can be done without loss of predictive performance.
Causality: Models, Reasoning and Inference
1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5.
Probabilistic Entity-Relationship Models, PRMs, and Plate Models
We introduce a graphical language for relational data called the probabilistic entity-relationship (PER) model. The model is an extension of the entity-relationship model, a common model for the
Causation, Prediction, and Search
Although Testing Statistical Hypotheses of Equivalence has some weaknesses, it is a useful reference for those interested in the question of equivalence testing, particularly in biological applications.
Probabilistic Relational Models
  • L. Getoor, B. Taskar
  • Computer Science
    Encyclopedia of Social Network Analysis and Mining
  • 2007
This chapter contains sections titled: Introduction, PRM Representation, The Difference between PRMs and Bayesian Networks, PRMs with Structural Uncertainty, Probabilistic Model of Link Structure,
Experimental and Quasi-Experimental Designs for Generalized Causal Inference
1. Experiments and Generalized Causal Inference 2. Statistical Conclusion Validity and Internal Validity 3. Construct Validity and External Validity 4. Quasi-Experimental Designs That Either Lack a