Using State Predictions for Value Regularization in Curiosity Driven Deep Reinforcement Learning

Gino Brunner, Manuel Fritsche, Oliver Richter, Roger Wattenhofer
2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)
Learning in sparse reward settings remains a challenge in Reinforcement Learning, which is often addressed by using intrinsic rewards. One promising strategy is inspired by human curiosity, requiring the agent to learn to predict the future. In this paper, a curiosity-driven agent is extended to use these predictions directly for training. To achieve this, the agent predicts the value function of the next state at any point in time. Subsequently, the consistency of this prediction with the…
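The abstract is truncated, but the stated mechanism — predicting the value function of the next state and penalizing inconsistency between that prediction and the critic's actual estimate — can be illustrated with a minimal sketch. The function name, the squared-error form, and the coefficient below are assumptions for illustration, not the paper's confirmed formulation:

```python
import numpy as np

def value_consistency_loss(v_next_pred, v_next, coef=0.5):
    """Hypothetical consistency penalty: squared error between the agent's
    prediction of the next state's value, V_hat(s_{t+1} | s_t), and the
    value the critic actually assigns to s_{t+1}. The critic's estimate is
    treated as a fixed target (a stand-in for a stop-gradient), as is
    common with bootstrapped value targets."""
    pred = np.asarray(v_next_pred, dtype=float)
    target = np.asarray(v_next, dtype=float)  # no gradient flows here
    return coef * np.mean((pred - target) ** 2)

# Toy usage: predictions close to the critic's next-state values incur a
# small penalty; divergent predictions are penalized quadratically.
loss = value_consistency_loss([1.0, 2.0], [1.5, 2.5])  # 0.5 * mean([0.25, 0.25]) = 0.125
```

In an actor-critic setup such a term would simply be added, with a small weight, to the usual policy and value losses as a regularizer.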