ACRE: Abstract Causal REasoning Beyond Covariation

@article{Zhang2021ACREAC,
  title={ACRE: Abstract Causal REasoning Beyond Covariation},
  author={Chi Zhang and Baoxiong Jia and Mark Edmonds and Song-Chun Zhu and Yixin Zhu},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={10638-10648}
}
Causal induction, i.e., identifying unobservable mechanisms that lead to the observable relations among variables, has played a pivotal role in modern scientific discovery, especially in scenarios with only sparse and limited data. Humans, even young toddlers, can induce causal relationships surprisingly well in various settings despite the task's notorious difficulty. However, in contrast to this commonplace trait of human cognition, there is a lack of a diagnostic benchmark to measure causal induction for modern Artificial Intelligence (AI) systems. …
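
The abstract frames causal induction as recovering hidden mechanisms from only a handful of observations, in the spirit of the blicket-detector experiments cited in the references below. As a purely illustrative sketch, and not the ACRE task format or any model evaluated in the paper, the following Python snippet enumerates which objects could be "blickets" given a few detector trials, under the simplifying assumption that the detector is deterministic and disjunctive (it lights up iff at least one blicket is present); all object names and trials here are hypothetical.

# Toy causal-induction sketch (illustrative only, not from the ACRE paper):
# enumerate which subsets of objects could be "blickets", assuming a
# deterministic, disjunctive detector.
from itertools import product

def consistent_hypotheses(trials, objects):
    # Keep every assignment of blicket/non-blicket labels that explains all trials.
    hypotheses = []
    for assignment in product([False, True], repeat=len(objects)):
        blickets = {o for o, is_blicket in zip(objects, assignment) if is_blicket}
        if all((len(blickets & present) > 0) == lit for present, lit in trials):
            hypotheses.append(blickets)
    return hypotheses

# Three sparse context trials: (objects placed on the detector, did it light up?)
trials = [
    ({"A", "B"}, True),   # A and B together activate the detector
    ({"B"}, False),       # B alone does not
    ({"C"}, False),       # C alone does not
]

print(consistent_hypotheses(trials, ["A", "B", "C"]))  # -> [{'A'}]

With just three sparse trials the hypothesis space collapses to a single consistent assignment, which mirrors how a few context observations can pin down an underlying causal structure.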

Citations

Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning
TLDR
This work designs a suite of benchmark RL environments, evaluates various representation learning algorithms from the literature, and finds that explicitly incorporating structure and modularity in models can help causal induction in model-based reinforcement learning.
Towards Understanding How Machines Can Learn Causal Overhypotheses
TLDR
This work presents a new benchmark—a flexible environment which allows for the evaluation of existing techniques under variable causal overhypotheses—and demonstrates that many existing state-of-the-art methods have trouble generalizing in this environment.
Learning Causal Overhypotheses through Exploration in Children and Computational Models
TLDR
There are significant differences between information-gain-optimal RL exploration in causal environments and the exploration of children in the same environments; this work introduces a novel RL environment designed with a controllable causal structure, which allows evaluation of the exploration strategies used by both agents and children in a unified environment.
EST: Evaluating Scientific Thinking in Artificial Agents
TLDR
The EST environment is devised for evaluating scientific thinking in artificial agents, and a clear failure of today's learning methods to reach a level of intelligence comparable to humans is observed.
Causal Reasoning Meets Visual Representation Learning: A Prospective Study
TLDR
This paper conducts a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets, and proposes some prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning.
Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution
TLDR
A neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner is proposed, centered on probabilistic abduction and execution over a probabilistic scene representation, akin to the mental manipulation of objects; it improves cross-configuration generalization and is capable of rendering an answer.
Causal Reasoning with Spatial-temporal Representation Learning: A Prospective Study
TLDR
This paper conducts a comprehensive review of existing causal reasoning methods for spatial-temporal representation learning, covering fundamental theories, models, and datasets, and proposes some primary challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in spatial-temporal representation learning.
Attention over Learned Object Embeddings Enables Complex Visual Reasoning
TLDR
The success of this combination suggests that there may be no need to trade off flexibility for performance on problems involving spatio-temporal or causal-style reasoning, and that with the right soft biases and learning objectives a neural network may be able to attain the best of both worlds.
GRICE: A Grammar-based Dataset for Recovering Implicature and Conversational rEasoning
TLDR
A grammar-based dialogue dataset, GRICE, is designed to bring implicature into pragmatic reasoning in the context of conversations, and experiments on it show an overall performance boost in conversational reasoning.
...
...

References

SHOWING 1-10 OF 83 REFERENCES
Theory-based causal induction.
TLDR
This work identifies three key aspects of abstract prior knowledge (the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships) and shows how they provide the constraints that people need to induce useful causal models from sparse data.
Human Causal Transfer: Challenges for Deep Reinforcement Learning
TLDR
It is found that a standard deep reinforcement learning model (DDQN) is unable to capture the causal abstraction presented between trials with the same causal schema and trials with a transfer of causal schema.
A Theory of Inferred Causation
Learning Perceptual Causality from Video
  • A. Fire, Song-Chun Zhu
  • AAAI Workshop: Learning Rich Representations from Low-Level Sensors
  • 2013
TLDR
This article provides a framework for the unsupervised learning of perceptual causal structure from video, taking action and object-status detections as input and using heuristics suggested by cognitive science research to produce the causal links perceived between them.
Decomposing Human Causal Learning: Bottom-up Associative Learning and Top-down Schema Reasoning
TLDR
This paper adopts a Bayesian framework to model causal theory induction, uses the inferred causal theory to transfer abstract knowledge between similar environments, and trains a simulated agent to discover and transfer useful relational and abstract knowledge.
Learning about causes from people: observational causal learning in 24-month-old infants.
TLDR
The youngest children (24- to 36-month-olds) were more likely to make causal inferences when covariations were the outcome of human interventions than when they were not, and observational causal learning may be a fundamental learning mechanism that enables infants to abstract the causal structure of the world.
Measuring abstract reasoning in neural networks
TLDR
A dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test, is proposed, and ways to both measure and induce stronger abstract reasoning in neural networks are introduced.
COPHY: Counterfactual Learning of Physical Dynamics
TLDR
This work proposes a model for learning the physical dynamics in a counterfactual setting for object mechanics from visual input and develops the CoPhy benchmark to assess the capacity of the state-of-the-art models for causal physical reasoning in a synthetic 3D environment.
HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving
TLDR
This paper devises the first systematic benchmark that offers joint evaluation covering all three levels of generalization in how an agent represents its knowledge (perceptual, conceptual, and algorithmic), centered around a novel task domain, HALMA, for visual concept development and rapid problem solving.
Blickets and babies: the development of causal reasoning in toddlers and infants.
TLDR
Eight-month-olds' anticipatory eye movements, in response to retrospective data, revealed inferences similar to those of 24-month-olds in Experiment 1 and preschoolers in previous research, which is discussed in terms of associative reasoning and causal inference.
...
...