Corpus ID: 232110828

Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings

@article{Chen2021ImprovingCE,
  title={Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings},
  author={Lili Chen and Kimin Lee and Aravind Srinivas and Pieter Abbeel},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.02886}
}
Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of…
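A minimal sketch of the idea named in the title and abstract, assuming the mechanism is to freeze the convolutional encoder once trained and to store its low-dimensional embeddings in the replay buffer in place of raw frames; the class and variable names here are illustrative, not the authors' code.

    import numpy as np
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps 3x84x84 frames to a 50-dim latent vector."""
        def __init__(self, latent_dim=50):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
                nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            self.fc = nn.Linear(32 * 20 * 20, latent_dim)  # 20x20 after two stride-2 convs

        def forward(self, obs):
            return self.fc(self.conv(obs))

    class LatentReplayBuffer:
        """Stores latent vectors, which are far smaller than raw frames."""
        def __init__(self, capacity, latent_dim):
            self.z = np.zeros((capacity, latent_dim), dtype=np.float32)
            self.idx = 0

        def add(self, z):
            self.z[self.idx % len(self.z)] = z
            self.idx += 1

    encoder = Encoder()
    buffer = LatentReplayBuffer(capacity=100_000, latent_dim=50)

    # After the (assumed) freeze point: no more encoder gradients, and the
    # buffer receives embeddings rather than pixels.
    for p in encoder.parameters():
        p.requires_grad = False
    obs = torch.rand(1, 3, 84, 84)
    with torch.no_grad():
        z = encoder(obs)
    buffer.add(z.squeeze(0).numpy())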
Citations

Dropout Q-Functions for Doubly Efficient Reinforcement Learning
TLDR: This work proposes Dr.Q, a variant of REDQ that uses a small ensemble of dropout Q-functions, achieving sample efficiency comparable to REDQ at a computational cost close to that of SAC.
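A hedged sketch of the idea in this TLDR: a small ensemble of Q-functions regularized with dropout, with a pessimistic (minimum-over-ensemble, REDQ-style) target. The layer sizes, dropout rate, and dropout placement are assumptions.

    import torch
    import torch.nn as nn

    def make_q(obs_dim, act_dim, p_drop=0.01):
        return nn.Sequential(
            nn.Linear(obs_dim + act_dim, 256), nn.Dropout(p_drop), nn.ReLU(),
            nn.Linear(256, 256), nn.Dropout(p_drop), nn.ReLU(),
            nn.Linear(256, 1),
        )

    ensemble = [make_q(obs_dim=17, act_dim=6) for _ in range(2)]  # small ensemble
    sa = torch.rand(32, 23)  # batch of concatenated (state, action) pairs
    # Pessimistic target: minimum over the dropout Q-functions.
    target_q = torch.min(torch.stack([q(sa) for q in ensemble]), dim=0).values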
Fractional Transfer Learning for Deep Model-Based Reinforcement Learning
TLDR: Fractional transfer learning is presented for the world-model-based Dreamer algorithm: rather than discarding potentially useful knowledge through random initialization, fractions of the source knowledge are transferred.
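One plausible reading of "transferring fractions of knowledge" is blending a fraction f of the source parameters into a fresh random initialization; the additive blending rule below is an assumption about the method, not a quote of it.

    import torch
    import torch.nn as nn

    def fractional_transfer(target: nn.Module, source: nn.Module, f: float):
        with torch.no_grad():
            for p_t, p_s in zip(target.parameters(), source.parameters()):
                p_t.add_(f * p_s)  # random init + f * source knowledge (assumed rule)

    source = nn.Linear(8, 4)  # stand-in for a network trained on a previous task
    target = nn.Linear(8, 4)  # freshly initialized for the new task
    fractional_transfer(target, source, f=0.2)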
On The Transferability of Deep-Q Networks
TLDR: The results show that transferring neural networks in a DRL context can be particularly challenging, in most cases resulting in negative transfer; in investigating why Deep-Q Networks transfer so poorly, novel insights are gained into the training dynamics that characterize this family of algorithms.

References

Showing 1–10 of 53 references
Improving Sample Efficiency in Model-Free Reinforcement Learning from Images
TLDR: A simple approach is proposed that matches state-of-the-art model-free and model-based algorithms on MuJoCo control tasks and demonstrates robustness to observational noise, surpassing existing approaches in this setting.
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
TLDR: Singular Vector Canonical Correlation Analysis (SVCCA) is proposed, a tool for quickly comparing two representations in a way that is both invariant to affine transforms and fast to compute.
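The two stages of the tool named in this TLDR (an SVD to reduce each representation, then CCA to compare them) are simple enough to sketch in NumPy; the 0.99 variance threshold follows the paper's convention, and the QR-based computation of canonical correlations is one standard approach, not necessarily the authors' code.

    import numpy as np

    def svcca(X, Y, keep=0.99):
        # X, Y: (n_datapoints, n_neurons) activation matrices
        X = X - X.mean(0); Y = Y - Y.mean(0)
        def svd_reduce(A):
            U, s, _ = np.linalg.svd(A, full_matrices=False)
            k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep) + 1
            return U[:, :k] * s[:k]   # keep directions explaining `keep` variance
        Xr, Yr = svd_reduce(X), svd_reduce(Y)
        # Canonical correlations = singular values of Qx^T Qy (principal angles)
        Qx, _ = np.linalg.qr(Xr)
        Qy, _ = np.linalg.qr(Yr)
        rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
        return rho.mean()   # mean canonical correlation = SVCCA similarity

    X = np.random.randn(500, 64)   # layer A activations over 500 inputs
    Y = np.random.randn(500, 32)   # layer B activations
    print(svcca(X, Y))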
Deep Reinforcement Learning with Double Q-Learning
TLDR: This paper proposes a specific adaptation of the DQN algorithm and shows that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but also leads to much better performance on several games.
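The adaptation this TLDR refers to fits in a few lines: the online network selects the greedy next action and the target network evaluates it, which curbs overestimation. A sketch with stand-in networks and illustrative shapes:

    import torch
    import torch.nn as nn

    online = nn.Linear(4, 3)   # stand-ins for the online / target Q-networks
    target = nn.Linear(4, 3)
    s_next = torch.rand(32, 4)
    r, gamma, done = torch.rand(32), 0.99, torch.zeros(32)

    a_star = online(s_next).argmax(dim=1, keepdim=True)      # select (online net)
    q_next = target(s_next).gather(1, a_star).squeeze(1)     # evaluate (target net)
    y = r + gamma * (1 - done) * q_next                      # Double DQN target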
Rainbow: Combining Improvements in Deep Reinforcement Learning
TLDR: This paper examines six extensions to the DQN algorithm and empirically studies their combination, showing that it provides state-of-the-art performance on the Atari 2600 benchmark in terms of both data efficiency and final performance.
A Distributional Perspective on Reinforcement Learning
TLDR: This paper argues for the fundamental importance of the value distribution, i.e., the distribution of the random return received by a reinforcement learning agent, and designs a new algorithm that applies the Bellman equation to the learning of approximate value distributions.
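The distributional Bellman backup described in this TLDR can be sketched concretely as a categorical (C51-style) projection of the shifted value distribution back onto fixed atoms; the 51-atom support follows the paper's convention, and the code is a paraphrase under stated assumptions, not the reference implementation.

    import torch

    n_atoms, v_min, v_max = 51, -10.0, 10.0
    z = torch.linspace(v_min, v_max, n_atoms)               # fixed support atoms
    dz = (v_max - v_min) / (n_atoms - 1)

    p_next = torch.softmax(torch.rand(32, n_atoms), dim=1)  # stand-in next-state distribution
    r, gamma = torch.rand(32, 1), 0.99

    tz = (r + gamma * z).clamp(v_min, v_max)                # Bellman-shifted atoms
    b = (tz - v_min) / dz                                   # fractional atom indices
    l, u = b.floor().long(), b.ceil().long()
    dec = (l == u) & (u > 0)                                # b landed exactly on an atom:
    l = l - dec.long()                                      # widen the bracket so no mass is lost
    inc = (l == u) & (l < n_atoms - 1)
    u = u + inc.long()

    m = torch.zeros(32, n_atoms)                            # projected target distribution
    m.scatter_add_(1, l, p_next * (u.float() - b))
    m.scatter_add_(1, u, p_next * (b - l.float()))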
Human-level control through deep reinforcement learning
TLDR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent capable of learning to excel at a diverse array of challenging tasks.
Reinforcement Learning with Augmented Data
TLDR: It is shown that augmentations such as random translation, cropping, color jitter, patch cutout, random convolutions, and amplitude scaling enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks.
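One of the augmentations this TLDR lists (random crop, via pad-and-crop) is easy to sketch for a batch of frames; the pad width of 4 is a common choice, not necessarily the paper's, and the manual loop keeps the snippet self-contained.

    import torch
    import torch.nn.functional as F

    def random_crop(imgs, pad=4):
        # imgs: (B, C, H, W); replication-pad, then crop back to (H, W)
        b, c, h, w = imgs.shape
        padded = F.pad(imgs, (pad, pad, pad, pad), mode='replicate')
        out = torch.empty_like(imgs)
        for i in range(b):
            top = torch.randint(0, 2 * pad + 1, (1,)).item()
            left = torch.randint(0, 2 * pad + 1, (1,)).item()
            out[i] = padded[i, :, top:top + h, left:left + w]
        return out

    batch = torch.rand(8, 3, 84, 84)
    augmented = random_crop(batch)   # fed to the actor/critic in place of batch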
Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
TLDR: This paper proposes soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum-entropy reinforcement learning framework, which achieves state-of-the-art performance on a range of continuous control benchmarks, outperforming prior on-policy and off-policy methods.
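The maximum-entropy framework in this TLDR has a compact core: the actor maximizes expected Q-value plus policy entropy. A minimal sketch of the resulting actor loss for a diagonal-Gaussian policy follows; the fixed temperature alpha and the omission of tanh squashing are simplifications, not the paper's full algorithm.

    import torch
    from torch.distributions import Normal

    mu, log_std = torch.zeros(32, 6), torch.zeros(32, 6)  # policy head outputs (stand-ins)
    dist = Normal(mu, log_std.exp())
    action = dist.rsample()                                # reparameterized sample
    log_prob = dist.log_prob(action).sum(dim=1)            # log pi(a|s)

    q_value = torch.rand(32)                               # critic's Q(s, a), stand-in
    alpha = 0.2                                            # entropy temperature
    actor_loss = (alpha * log_prob - q_value).mean()       # minimizing this maximizes Q + entropy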
Decoupling Representation Learning from Reinforcement Learning
TLDR: A new unsupervised learning task, Augmented Temporal Contrast (ATC), is proposed, which trains a convolutional encoder to associate pairs of observations separated by a short time difference, under image augmentations and a contrastive loss.
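A sketch of the contrastive objective in this TLDR: embeddings of an observation and its near-future counterpart should match, scored with an InfoNCE loss over the batch. The single shared encoder and fixed temperature are simplifications (ATC also uses a momentum target encoder).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Linear(128, 64)       # stand-in for the convolutional encoder
    obs_t  = torch.rand(32, 128)       # augmented observation at time t
    obs_tk = torch.rand(32, 128)       # augmented observation at time t+k

    z_a = F.normalize(encoder(obs_t), dim=1)
    z_p = F.normalize(encoder(obs_tk), dim=1)
    logits = z_a @ z_p.t() / 0.1       # similarity matrix, temperature 0.1
    labels = torch.arange(32)          # positives sit on the diagonal
    loss = F.cross_entropy(logits, labels)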
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
TLDR: The proposed augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art results on the DeepMind Control Suite, surpassing model-based methods and the recently proposed contrastive learning method CURL.
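The regularization behind this TLDR can be sketched as averaging the Q-target over several random shifts of the same frame, making the value function invariant to the augmentation; torch.roll here is a crude stand-in for the paper's pad-and-shift, and K=2 matches the paper's default for targets.

    import torch

    def random_shift(imgs, max_shift=4):
        # crude stand-in for pad-and-shift augmentation
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        return torch.roll(imgs, shifts=(dy, dx), dims=(2, 3))

    def q_of(imgs):                    # stand-in critic
        return imgs.mean(dim=(1, 2, 3))

    next_obs = torch.rand(8, 3, 84, 84)
    K = 2                              # number of augmented target samples
    target_q = torch.stack([q_of(random_shift(next_obs)) for _ in range(K)]).mean(0)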