Corpus ID: 219792420

Learning Invariant Representations for Reinforcement Learning without Reconstruction

@article{Zhang2020LearningIR,
  title={Learning Invariant Representations for Reinforcement Learning without Reconstruction},
  author={Amy Zhang and Rowan McAllister and Roberto Calandra and Yarin Gal and Sergey Levine},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.10742}
}
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying on either domain knowledge or pixel reconstruction. Our goal is to learn representations that both support effective downstream control and remain invariant to task-irrelevant details. Bisimulation metrics quantify behavioral similarity between states in continuous MDPs, which we propose using to learn robust latent representations that encode only the task-relevant…
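For context on the term used in the abstract: the on-policy bisimulation metric in this line of work is commonly defined as the fixed point of a recursive operator of roughly the following form. This is a standard formulation sketched for orientation, not text quoted from the paper; the symbols ($r^{\pi}$ for expected reward under policy $\pi$, $\mathcal{P}^{\pi}$ for the transition distribution, $W_1$ for the 1-Wasserstein distance, $c \in [0,1)$ a discount) are assumptions of this sketch.

```latex
% Sketch of a standard on-policy bisimulation metric:
% states s_i, s_j are close when they yield similar immediate
% rewards and their next-state distributions are close under W_1.
d(s_i, s_j) \;=\; \bigl| r^{\pi}_{s_i} - r^{\pi}_{s_j} \bigr|
  \;+\; c \, W_1\!\bigl(d\bigr)\bigl(\mathcal{P}^{\pi}_{s_i},\, \mathcal{P}^{\pi}_{s_j}\bigr)
```

Intuitively, two states that differ only in task-irrelevant details (e.g. background pixels) receive the same rewards and induce behaviorally similar futures, so the metric assigns them a small distance; an encoder trained to match latent distances to $d$ therefore discards those details.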
