Corpus ID: 202774726

Learning to Predict Without Looking Ahead: World Models Without Forward Prediction

@inproceedings{Freeman2019LearningTP,
  title={Learning to Predict Without Looking Ahead: World Models Without Forward Prediction},
  author={C. Daniel Freeman and Luke Metz and David Ha},
  booktitle={NeurIPS},
  year={2019}
}
    Abstract

    Much of model-based reinforcement learning involves learning a model of an agent's world, and training an agent to leverage this model to perform a task more efficiently. While these models are demonstrably useful for agents, every naturally occurring model of the world of which we are aware (e.g., a brain) arose as the byproduct of competing evolutionary pressures for survival, not minimization of a supervised forward-predictive loss via gradient descent. That useful models can arise out of…
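
    The abstract's central claim suggests a concrete construction: a world model can emerge when an agent only intermittently receives the real observation and must otherwise act on its own model's prediction, with both the policy and the model optimized for task reward alone. Below is a minimal sketch of that setup; the toy point-mass environment, linear policy and model, 10% peek probability, and plain evolution strategy are illustrative assumptions, not the paper's actual architecture or experiments.

import numpy as np

rng = np.random.default_rng(0)

P_OBSERVE = 0.1  # per-step chance of "peeking" at the real observation (assumed value)
HORIZON = 50
OBS_DIM, ACT_DIM = 2, 1
N_PARAMS = OBS_DIM * ACT_DIM + (OBS_DIM + ACT_DIM) * OBS_DIM

def env_step(state, action):
    # Toy point-mass on a line: state = [position, velocity]; reward favors the origin.
    pos, vel = state
    vel = 0.9 * vel + 0.1 * float(action[0])
    pos = pos + vel
    return np.array([pos, vel]), -pos ** 2

def unpack(theta):
    # Split one flat parameter vector into a linear policy and a linear world model.
    w_pi = theta[:OBS_DIM * ACT_DIM].reshape(OBS_DIM, ACT_DIM)
    w_m = theta[OBS_DIM * ACT_DIM:].reshape(OBS_DIM + ACT_DIM, OBS_DIM)
    return w_pi, w_m

def episode_return(theta):
    # Roll out with intermittent observation: the policy acts on a belief that is
    # usually the world model's own prediction and only occasionally the real state.
    w_pi, w_m = unpack(theta)
    state = np.array([1.0, 0.0])
    belief = state.copy()
    total = 0.0
    for _ in range(HORIZON):
        action = np.tanh(belief @ w_pi)
        state, reward = env_step(state, action)
        total += reward
        belief = np.concatenate([belief, action]) @ w_m  # model predicts the next observation
        if rng.random() < P_OBSERVE:
            belief = state.copy()  # rare peek at reality
    return total

# Plain evolution strategy: the reward signal alone shapes BOTH the policy and
# the world model; no supervised forward-predictive loss appears anywhere.
theta = np.zeros(N_PARAMS)
sigma, lr, pop = 0.1, 0.03, 64
for _ in range(200):
    noise = rng.standard_normal((pop, N_PARAMS))
    returns = np.array([episode_return(theta + sigma * eps) for eps in noise])
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)
    theta += lr / (pop * sigma) * noise.T @ adv

print("final return:", episode_return(theta))

    If the learned w_m ends up approximating env_step, forward prediction has emerged as a side effect of reward optimization rather than as a training target, which is the phenomenon the title points at.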

    Citations

    Publications citing this paper (4 in total):

    Neuroevolution of Self-Interpretable Agents

    Reinforcement Learning Method with Internal World Model Training

    References

    Publications referenced by this paper (68 in total; selected entries below):

    Recurrent World Models Facilitate Policy Evolution (highly influential)

    Auto-Encoding Variational Bayes (highly influential)

    Model-Based Reinforcement Learning for Atari

    Learning Latent Dynamics for Planning from Pixels