Corpus ID: 235670135

Learning Task Informed Abstractions

@article{Fu2021LearningTI,
  title={Learning Task Informed Abstractions},
  author={Xiang Fu and Ge Yang and Pulkit Agrawal and T. Jaakkola},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.15612}
}
Current model-based reinforcement learning methods struggle when operating from complex visual scenes due to their inability to prioritize task-relevant features. To mitigate this problem, we propose learning Task Informed Abstractions (TIA) that explicitly separate reward-correlated visual features from distractors. For learning TIA, we introduce the formalism of the Task Informed MDP (TiMDP), which is realized by training two models that learn visual features via cooperative reconstruction, but one… 
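For intuition, here is a minimal sketch of the two-model, cooperative-reconstruction idea described above. It is illustrative only, not the authors' implementation: the names (LatentModel, reward_head, tia_losses) are hypothetical, the networks are toy MLPs, and TIA's adversarial reward-dissociation term for the distractor branch is omitted.

```python
# Minimal sketch of TIA-style cooperative reconstruction (illustrative, not the authors' code).
import torch
import torch.nn as nn

class LatentModel(nn.Module):
    """One of two branches: encode a (flattened) image to a latent, decode it back."""
    def __init__(self, obs_dim=64 * 64 * 3, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, obs_dim))

    def forward(self, obs):
        z = self.encoder(obs)
        return z, self.decoder(z)

task_model, distractor_model = LatentModel(), LatentModel()
reward_head = nn.Linear(32, 1)  # reward is predicted from the task latent only

def tia_losses(obs_flat, reward):
    z_task, recon_task = task_model(obs_flat)
    _, recon_dis = distractor_model(obs_flat)

    # Cooperative reconstruction: the two branches must jointly explain the frame,
    # so distractor pixels can be absorbed by the distractor branch.
    recon_loss = ((recon_task + recon_dis - obs_flat) ** 2).mean()

    # Only the task latent carries reward information; TIA additionally trains the
    # distractor latent adversarially so it cannot predict reward (omitted here).
    reward_loss = ((reward_head(z_task).squeeze(-1) - reward) ** 2).mean()
    return recon_loss + reward_loss
```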

INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL

TLDR
The key principle behind the design is to integrate a term inspired by variational empowerment into a mutual-information-based state-space model, prioritizing information that is correlated with action and thus ensuring that functionally relevant factors are captured during the RL process.
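For reference, the empowerment quantity alluded to here is usually defined as a mutual information between actions and the states they bring about; one standard form (not necessarily this paper's exact objective) is:

```latex
% Empowerment as mutual information between an action sequence and the resulting state;
% variational methods maximize a lower bound on this quantity.
\mathcal{E}(s_t) \;=\; \max_{\pi}\; I\!\left(a_{t:t+k};\, s_{t+k} \mid s_t\right)
```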

Robust Deep Reinforcement Learning via Multi-View Information Bottleneck

TLDR
The results show that KL balancing can improve training of a recurrent state-space model (RSSM) with a contrastive-learning-based or mutual-information-maximization objective, and the approach outperforms well-established baselines for generalization to unseen environments on the Procgen benchmark.
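KL balancing, as popularized by DreamerV2-style world models, mixes two stop-gradient copies of the same KL term so the prior is trained toward the posterior faster than the reverse. A sketch under the assumption of diagonal-Gaussian latents (this paper's variant may differ in detail):

```python
# KL balancing sketch for an RSSM-style latent (assumes diagonal Gaussians).
import torch.distributions as td

def kl_balance(posterior: td.Normal, prior: td.Normal, alpha: float = 0.8):
    # Detached copies so each KL term updates only one side of the divergence.
    post_sg = td.Normal(posterior.mean.detach(), posterior.stddev.detach())
    prior_sg = td.Normal(prior.mean.detach(), prior.stddev.detach())
    kl_train_prior = td.kl_divergence(post_sg, prior).mean()     # moves the prior
    kl_train_post = td.kl_divergence(posterior, prior_sg).mean() # regularizes the posterior
    return alpha * kl_train_prior + (1 - alpha) * kl_train_post
```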

Look where you look! Saliency-guided Q-networks for visual RL tasks

TLDR
SGQN vastly improves the generalization capability of Soft Actor-Critic agents and outperforms existing state-of-the-art methods on the DeepMind Control Generalization benchmark, setting a new reference in terms of training efficiency, generalization gap, and policy interpretability.

Denoised MDPs: Learning World Models Better Than the World Itself

TLDR
This work categorizes information out in the wild into four types based on controllability and relation with reward, and formulates useful information as that which is both controllable and reward-relevant.
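The categorization can be laid out as a 2x2 grid (the cell labels below are paraphrases of the TLDR, not the paper's exact terminology):

```
                     reward-relevant                 reward-irrelevant
controllable         useful information (keep)       controllable distractor
uncontrollable       relevant but uncontrollable     pure noise / distractor
```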

Task-Independent Causal State Abstraction

TLDR
This paper introduces a novel state abstraction called Task-Independent Causal State Abstraction (TICSA), and observes that both the dynamics model and policies learned by the proposed method generalize well to unseen states and that TICSA also improves sample efficiency compared to learning without state abstraction.

Learning Representations for Pixel-based Control: What Matters and Why?

TLDR
This paper presents a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentations, no world-model learning, and no contrastive learning, and hopes this view can motivate researchers to rethink representation learning when investigating how to best apply RL to real-world tasks.

Causal Dynamics Learning for Task-Independent State Abstraction

TLDR
This paper introduces Causal Dynamics Learning for Task-Independent State Abstraction (CDL), which first learns a causal dynamics model with theoretical guarantees that removes unnecessary dependencies between state variables and the action, thus generalizing well to unseen states.

CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning

TLDR
CARL is proposed, a collection of well-known RL environments extended to contextual RL problems to study generalization; it provides first evidence that disentangling state representation learning from context-conditioned policy learning facilitates better generalization.

Contextualize Me -- The Case for Context in Reinforcement Learning

TLDR
This work shows that theoretically optimal behavior in contextual Markov Decision Processes requires explicit context information, and introduces CARL, the first benchmark library designed for generalization based on cRL extensions of popular benchmarks.

Learning Robust Task Context with Hypothetical Analogy-Making

TLDR
Inspired by the human analogy-making process, a novel representation learning framework called Hypothetical Analogy-Making (HAM) is proposed, which enables RL agents to learn compact and explainable context-relevant features that can generalize to unseen tasks.

References

Showing 1-10 of 66 references

Learning Invariant Representations for Reinforcement Learning without Reconstruction

TLDR
This work studies how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel reconstruction, and proposes a method to learn robust latent representations which encode only the task-relevant information from observations.
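Concretely, the task-relevant representation in this line of work is shaped by a bisimulation-style metric; one common on-policy form (coefficients and distance choices vary across papers) is:

```latex
% On-policy bisimulation metric: states are close if they yield similar rewards
% and similar transition distributions (W is a Wasserstein distance).
d(s_i, s_j) \;=\; \bigl| r^{\pi}_{s_i} - r^{\pi}_{s_j} \bigr|
\;+\; \gamma\, W\!\bigl( \mathcal{P}^{\pi}(\cdot \mid s_i),\; \mathcal{P}^{\pi}(\cdot \mid s_j) \bigr)
```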

Dream to Control: Learning Behaviors by Latent Imagination

TLDR
Dreamer is presented, a reinforcement learning agent that solves long-horizon tasks purely by latent imagination and efficiently learns behaviors by backpropagating analytic gradients of learned state values through trajectories imagined in the compact state space of a learned world model.

Imagination-Augmented Agents for Deep Reinforcement Learning

TLDR
Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects, shows improved data efficiency, performance, and robustness to model misspecification compared to several baselines.

Decoupling Representation Learning from Reinforcement Learning

TLDR
A new unsupervised learning task, called Augmented Temporal Contrast (ATC), trains a convolutional encoder to associate pairs of observations separated by a short time difference, under image augmentations and using a contrastive loss.
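A minimal version of such a temporal-contrastive objective can be written as InfoNCE over a batch, where each anchor's positive is its own near-future frame and the other futures serve as negatives. This is a sketch in the spirit of ATC; the actual method also uses a momentum encoder and a learned projection, omitted here:

```python
# InfoNCE-style temporal contrast (sketch; `encoder` is any image encoder).
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(encoder, obs_t, obs_t_plus_k, temperature=0.1):
    z_a = F.normalize(encoder(obs_t), dim=-1)          # (B, D) anchors at time t
    z_p = F.normalize(encoder(obs_t_plus_k), dim=-1)   # (B, D) positives at time t+k
    logits = z_a @ z_p.t() / temperature               # (B, B) pairwise similarities
    labels = torch.arange(z_a.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```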

DARLA: Improving Zero-Shot Transfer in Reinforcement Learning

TLDR
A new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), learns to see before learning to act and significantly outperforms conventional baselines in zero-shot domain adaptation scenarios.

Improving Sample Efficiency in Model-Free Reinforcement Learning from Images

TLDR
A simple approach is proposed that matches state-of-the-art model-free and model-based algorithms on MuJoCo control tasks and demonstrates robustness to observational noise, surpassing existing approaches in this setting.

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

TLDR
CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features and is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.

Deep Reinforcement and InfoMax Learning

TLDR
An objective based on Deep InfoMax (DIM) is introduced, which trains the agent to predict the future by maximizing the mutual information between its internal representations of successive timesteps.

Learning Latent Dynamics for Planning from Pixels

TLDR
The Deep Planning Network (PlaNet) is proposed, a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space using a latent dynamics model with both deterministic and stochastic transition components.
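The deterministic-plus-stochastic transition can be sketched as a GRU carrying the deterministic state alongside Gaussian prior/posterior heads for the stochastic state. This is illustrative; PlaNet's actual RSSM has additional structure:

```python
# RSSM-style cell sketch: deterministic GRU path + stochastic Gaussian state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RSSMCell(nn.Module):
    def __init__(self, stoch=30, deter=200, action_dim=4, embed_dim=64):
        super().__init__()
        self.gru = nn.GRUCell(stoch + action_dim, deter)          # h_t = f(h_{t-1}, z_{t-1}, a_{t-1})
        self.prior_net = nn.Linear(deter, 2 * stoch)              # p(z_t | h_t), used when planning
        self.post_net = nn.Linear(deter + embed_dim, 2 * stoch)   # q(z_t | h_t, o_t), used when observing

    def forward(self, h, z, action, obs_embed):
        h = self.gru(torch.cat([z, action], -1), h)
        prior_mean, prior_std = self.prior_net(h).chunk(2, -1)
        post_mean, post_std = self.post_net(torch.cat([h, obs_embed], -1)).chunk(2, -1)
        # Sample the stochastic state from the posterior during model learning.
        z = post_mean + F.softplus(post_std) * torch.randn_like(post_mean)
        return h, z, (prior_mean, prior_std), (post_mean, post_std)
```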

Deep visual foresight for planning robot motion

  • Chelsea Finn, S. Levine
  • 2017 IEEE International Conference on Robotics and Automation (ICRA)
  • 2017
TLDR
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation — pushing objects — and can handle novel objects not seen during training.
...