Intrinsically Motivated Learning of Causal World Models

@article{Annabi2022IntrinsicallyML,
  title={Intrinsically Motivated Learning of Causal World Models},
  author={Louis Annabi},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.04892}
}
Despite the recent progress in deep learning and reinforcement learning, the transfer and generalization of skills learned on specific tasks are very limited compared to human (or animal) intelligence. The lifelong, incremental building of common-sense knowledge might be a necessary component on the way to achieving more general intelligence. A promising direction is to build world models capturing the true physical mechanisms hidden behind the sensorimotor interaction with the environment. Here we…

References

Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning

This work introduces a novel intrinsic reward, called causal curiosity, and shows that it allows reinforcement learning agents to learn optimal sequences of actions, and to discover causal factors in the dynamics.
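
As a rough illustration of this intrinsic reward (a simplified sketch, not the paper's implementation), an action sequence can be scored by how cleanly the outcomes it produces across environments with a hidden causal factor split into two clusters; the function name and the two-means scoring below are assumptions of this sketch:

```python
import numpy as np

def causal_curiosity_reward(outcomes, n_iter=20):
    """Score an action sequence by how bimodal its outcomes are across
    environments that differ in a hidden causal factor (e.g. object mass).

    outcomes: (n_envs, d) array, one summary vector per environment, all
              produced by executing the SAME action sequence.
    Returns the separation of a 2-means fit: high when the behavior splits
    the environments into two well-separated groups.
    """
    # Initialise the two centroids from extreme outcomes.
    c = outcomes[[np.argmin(outcomes[:, 0]), np.argmax(outcomes[:, 0])]].copy()
    labels = np.zeros(len(outcomes), dtype=int)
    for _ in range(n_iter):
        # Assign each environment to its nearest centroid.
        dists = np.linalg.norm(outcomes[:, None] - c[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = outcomes[labels == k].mean(axis=0)
    within = np.mean(np.linalg.norm(outcomes - c[labels], axis=-1))
    between = np.linalg.norm(c[0] - c[1])
    # Tight, well-separated clusters => the action sequence works like an
    # experiment that reveals the value of the hidden causal factor.
    return between / (within + 1e-8)
```

An agent maximizing such a reward is pushed toward action sequences that act like experiments, e.g. a lifting behavior whose outcomes separate heavy objects from light ones.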

Causal Dynamics Learning for Task-Independent State Abstraction

This paper introduces Causal Dynamics Learning for Task-Independent State Abstraction (CDL), which first learns a theoretically proven causal dynamics model that removes unnecessary dependencies between state variables and the action, thus generalizing well to unseen states.
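
The dependency-pruning idea can be illustrated with a crude stand-in for CDL's conditional-independence tests (the paper uses a neural estimator; the linear ablation test and names below are assumptions of this sketch):

```python
import numpy as np

def select_parents(X, y, threshold=0.01):
    """Keep an input (state variable or action) as a causal parent of the
    predicted next-state variable only if ablating it noticeably hurts a
    linear fit -- a crude proxy for a conditional-independence test.

    X: (n, p) inputs = current state variables and the action
    y: (n,) one next-state variable
    Returns a boolean mask over the p candidate parents.
    """
    def fit_mse(cols):
        Xc = X[:, cols]
        w, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        return float(np.mean((Xc @ w - y) ** 2))

    full = fit_mse(np.arange(X.shape[1]))
    keep = []
    for j in range(X.shape[1]):
        rest = np.array([i for i in range(X.shape[1]) if i != j])
        # Parent iff removing input j increases prediction error.
        keep.append(fit_mse(rest) - full > threshold)
    return np.array(keep)
```

Predicting each next-state variable only from the parents kept this way yields a dynamics model free of spurious dependencies, which is what supports generalization to unseen states.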

A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms

This work proposes to meta-learn causal structures based on how fast a learner adapts to new distributions arising from sparse distributional changes, e.g. due to interventions, actions of agents, and other sources of non-stationarity, and shows that causal structures can be parameterized via continuous variables and learned end-to-end.
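
The core signal can be sketched on the two-variable case: after an intervention on the true cause X, the model factorized in the causal direction only has to re-fit the marginal of X, so its few-step adaptation log-likelihood is higher, and a structural parameter can be pushed accordingly. The linear-Gaussian models, step sizes, and update below are simplifying assumptions, not the paper's exact setup:

```python
import numpy as np

def online_loglik(x, y, lr=0.1):
    """Few-step online adaptation log-likelihood of a linear-Gaussian
    model cause -> effect, starting from a stale pre-intervention fit."""
    mu, a, b = 0.0, 1.0, 0.0
    total = 0.0
    for xi, yi in zip(x, y):
        # Unit-variance Gaussian log-likelihood, constants dropped.
        total += -0.5 * (xi - mu) ** 2 - 0.5 * (yi - a * xi - b) ** 2
        mu += lr * (xi - mu)                 # adapt marginal of the cause
        r = yi - a * xi - b
        a, b = a + lr * r * xi, b + lr * r   # adapt conditional mechanism
    return total

# sigmoid(gamma) = belief that X causes Y; updated from adaptation speed.
gamma, rng = 0.0, np.random.default_rng(0)
for _ in range(200):
    shift = rng.normal(0.0, 2.0)                 # intervention on X
    x = rng.normal(shift, 1.0, size=10)
    y = x + rng.normal(0.0, 1.0, size=10)        # true mechanism X -> Y
    lxy, lyx = online_loglik(x, y), online_loglik(y, x)
    p = 1.0 / (1.0 + np.exp(-gamma))
    m = max(lxy, lyx)                            # stable mixture gradient
    exy, eyx = np.exp(lxy - m), np.exp(lyx - m)
    gamma += 0.5 * p * (1 - p) * (exy - eyx) / (p * exy + (1 - p) * eyx)
# gamma drifts positive: the X -> Y factorization adapts faster after
# interventions on X, revealing the causal direction.
```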

Learning Neural Causal Models from Unknown Interventions

This paper provides a general framework based on continuous optimization and neural networks for learning causal models from a combination of observational and interventional data, and establishes strong benchmark results on several structure learning tasks.
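
When the intervention target is unknown, one common trick (sketched below in a simplified form; predict_fn and the argmax heuristic are assumptions of this sketch) is to guess the intervened variable as the one the current model explains worst and drop its term from the structure-learning loss:

```python
import numpy as np

def masked_structure_loss(batch, predict_fn, n_vars):
    """Guess the intervened variable as the one the current model explains
    worst on this batch, then exclude its term from the loss used for
    structure learning, so the intervention does not corrupt the estimated
    mechanisms of the remaining variables.

    batch: (n, n_vars) samples from one unknown interventional regime.
    predict_fn(j, batch): model's prediction of variable j from the others.
    """
    errs = np.array([np.mean((batch[:, j] - predict_fn(j, batch)) ** 2)
                     for j in range(n_vars)])
    guessed_target = int(np.argmax(errs))
    mask = np.ones(n_vars, dtype=bool)
    mask[guessed_target] = False
    return errs[mask].sum(), guessed_target
```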

Learning Neural Causal Models with Active Interventions

This work introduces an active intervention targeting (AIT) method which enables a quick identification of the underlying causal structure of the data-generating process and is applicable for both discrete and continuous optimization formulations of learning the underlying directed acyclic graph (DAG) from data.
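
A minimal sketch of the targeting idea, under assumed interfaces (the models list and its callable signature are inventions for illustration): intervene on the node about which the current ensemble of candidate causal models disagrees most, since that experiment discriminates best between the remaining hypotheses:

```python
import numpy as np

def choose_intervention(models, n_vars, value=2.0, n_samples=256, rng=None):
    """Pick the node whose intervention the ensemble of candidate causal
    models disagrees about most.

    models: list of callables; models[k](target, value, n, rng) returns
            (n, n_vars) samples under do(X_target = value).
    """
    rng = rng or np.random.default_rng()
    scores = []
    for target in range(n_vars):
        means = np.stack([m(target, value, n_samples, rng).mean(axis=0)
                          for m in models])       # (n_models, n_vars)
        # Disagreement = spread of the predicted post-intervention means.
        scores.append(means.var(axis=0).sum())
    return int(np.argmax(scores))
```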

Differentiable Causal Discovery from Interventional Data

This work proposes a neural network-based method for discovering causal relationships in data that can leverage interventional data and illustrates the flexibility of the continuous-constrained framework by taking advantage of expressive neural architectures such as normalizing flows.
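
The continuous-constrained framework referred to here builds on the NOTEARS acyclicity function, which turns the discrete DAG constraint into a smooth penalty (a minimal sketch; DCDI combines it with neural likelihoods and interventional masking):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(A):
    """NOTEARS-style constraint: h(A) = tr(exp(A ∘ A)) - d is zero if and
    only if the weighted adjacency matrix A encodes a DAG, and it is
    differentiable, so graph search becomes continuous optimization."""
    d = A.shape[0]
    return np.trace(expm(A * A)) - d  # A * A: elementwise square kills signs
```

In practice the penalty is driven to zero with an augmented-Lagrangian scheme while the likelihood of the data is maximized.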

Model-based reinforcement learning: A survey

This paper comprehensively reviews the key techniques of model-based reinforcement learning, summarizes the characteristics, advantages, and shortcomings of each technique, and analyzes the applications of model-based reinforcement learning in games, robotics, and brain science.
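
The basic recipe covered by such surveys can be made concrete with a minimal planner over a learned model (a sketch under assumed interfaces: model and reward_fn are placeholders for a dynamics model fitted to transitions and a known or learned reward):

```python
import numpy as np

def random_shooting_mpc(model, reward_fn, state, horizon=10, n_cand=128,
                        action_dim=1, rng=None):
    """Plan through a learned model by random shooting: sample candidate
    action sequences, roll each out in imagination, return the first
    action of the best sequence, and replan at every real step (MPC).

    model(state, action) -> next_state ; reward_fn(state, action) -> float
    """
    rng = rng or np.random.default_rng()
    best_ret, best_a0 = -np.inf, None
    for _ in range(n_cand):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, ret = state, 0.0
        for a in actions:
            ret += reward_fn(s, a)
            s = model(s, a)   # imagined rollout: no real environment steps
        if ret > best_ret:
            best_ret, best_a0 = ret, actions[0]
    return best_a0
```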

Toward Causal Representation Learning

Fundamental concepts of causal inference are reviewed and related to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research.