World Models

@article{Ha2018WorldM,
  title={World Models},
  author={David Ha and J{\"u}rgen Schmidhuber},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.10122}
}
  • David Ha, Jürgen Schmidhuber
  • Published in ArXiv, 2018
  • Mathematics, Computer Science
  • We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model…
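The pipeline the abstract describes has three parts: a vision model V that compresses each observation into a latent vector, a memory model M that maintains a recurrent summary of the dynamics, and a very small controller C acting on both. A minimal NumPy sketch of that data flow is below; the weights are random stand-ins and the dimensions are illustrative, not the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (stand-ins, not the paper's exact architecture).
Z_DIM, H_DIM, ACT_DIM = 32, 256, 3

# Stand-in for the vision model (V): compress an observation to a latent z.
# A real V is a trained convolutional VAE; a fixed random projection just
# stands in for "some compressed representation" here.
W_v = rng.standard_normal((Z_DIM, 64 * 64 * 3)) / np.sqrt(64 * 64 * 3)

def encode(obs):
    return W_v @ obs.ravel()

# Stand-in for the memory model (M): update a recurrent hidden state from
# the previous state, the latent, and the last action. A real M is an RNN
# with a mixture-density output head.
W_m = rng.standard_normal((H_DIM, H_DIM + Z_DIM + ACT_DIM)) / np.sqrt(H_DIM)

def step_memory(h, z, action):
    return np.tanh(W_m @ np.concatenate([h, z, action]))

# The controller (C) really is this compact in spirit: a single linear
# layer mapping [z, h] to actions.
W_c = rng.standard_normal((ACT_DIM, Z_DIM + H_DIM)) * 0.01
b_c = np.zeros(ACT_DIM)

def controller(z, h):
    return np.tanh(W_c @ np.concatenate([z, h]) + b_c)

# One agent step with a dummy 64x64 RGB observation.
obs = rng.random((64, 64, 3))
h = np.zeros(H_DIM)
z = encode(obs)
action = controller(z, h)
h = step_memory(h, z, action)
print(action.shape)  # (3,)
```

Because C has so few parameters relative to V and M, it can be trained with a black-box optimizer such as CMA-ES while the world model is trained separately and unsupervised.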


    Citations

    Publications citing this paper.
    SHOWING 1-10 OF 69 CITATIONS

    GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations

    Cites methods & background; highly influenced

    Mega-Reward: Achieving Human-Level Play without Extrinsic Rewards

    Cites methods & background; highly influenced

    Extending World Models for Multi-Agent Reinforcement Learning in MALMÖ

    Cites background; highly influenced

    VMAV-C: A Deep Attention-based Reinforcement Learning Algorithm for Model-based Control

    Cites methods & background; highly influenced

    Emergent Communication with World Models

    Cites methods; highly influenced

    Efficient Reinforcement Learning with a Thought-Game for StarCraft

    Cites background; highly influenced

    Engineered Self-Organization for Resilient Robot Self-Assembly with Minimal Surprise

    Cites background & methods; highly influenced

    Reinforcement Learning for Extended Reality: Designing Self-Play Scenarios

    Cites background & methods; highly influenced


    CITATION STATISTICS

    • 12 highly influenced citations
    • Averaged 29 citations per year from 2018 through 2019
    • 156% increase in citations per year in 2019 over 2018

    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 110 REFERENCES

    OpenAI Gym

    • G. Brockman, V. Cheung, +6 authors
    • 2016
    • https://arxiv.org/abs/1606.01540
    Highly influential

    The CMA Evolution Strategy: A Tutorial

    Highly influential
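The CMA-ES tutorial referenced above covers the black-box optimizer the paper uses to train its linear controller. A toy (μ, λ) evolution strategy in the same spirit is sketched below; unlike real CMA-ES it adapts only the mean of the search distribution, not the covariance matrix or step size, and all parameter values are illustrative:

```python
import numpy as np

def es_minimize(f, x0, sigma=0.5, popsize=16, elite=4, iters=100, seed=0):
    """Toy truncation-selection evolution strategy (not full CMA-ES)."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Sample a population around the current mean.
        pop = mean + sigma * rng.standard_normal((popsize, mean.size))
        scores = np.array([f(x) for x in pop])
        # Move the mean to the average of the best (elite) samples.
        mean = pop[np.argsort(scores)[:elite]].mean(axis=0)
    return mean

# Minimize a simple quadratic bowl; the optimum is at (3, -2).
best = es_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
print(np.round(best, 1))
```

Full CMA-ES additionally maintains evolution paths to adapt a covariance matrix and global step size, which is what makes it effective on ill-conditioned objectives; for the paper's controller (under a thousand parameters) this class of optimizer is cheap enough to train from episode returns alone.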

    Deep learning in neural networks: An overview


    First Experiments with PowerPlay


    An on-line algorithm for dynamic reinforcement learning and planning in reactive environments

    • Jürgen Schmidhuber
    • Computer Science
    • 1990 IJCNN International Joint Conference on Neural Networks
    • 1990