Curious Hierarchical Actor-Critic Reinforcement Learning

@article{Roeder2020CuriousHA,
  title={Curious Hierarchical Actor-Critic Reinforcement Learning},
  author={Frank R{\"o}der and Manfred Eppe and Phuong D. H. Nguyen and Stefan Wermter},
  journal={arXiv preprint arXiv:2005.03420},
  year={2020}
}
Hierarchical abstraction and curiosity-driven exploration are two common paradigms in current reinforcement learning approaches to break down difficult problems into a sequence of simpler ones and to overcome reward sparsity. However, there is a lack of approaches that combine these paradigms, and it is currently unknown whether curiosity also helps to perform the hierarchical abstraction. As a novelty and scientific contribution, we tackle this issue and develop a method that combines… 
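The abstract describes combining hierarchical actor-critic learning with curiosity, where curiosity is typically realized as a forward-model prediction-error bonus mixed into the sparse extrinsic reward. A minimal sketch of that reward-mixing idea, assuming a toy linear forward model and an illustrative mixing weight `eta` (all names here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: predicts the next state from (state, action).
# Its prediction error serves as a curiosity (surprise) signal.
class ForwardModel:
    def __init__(self, state_dim, action_dim, lr=0.05):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        err = next_state - self.W @ x
        self.W += self.lr * np.outer(err, x)  # gradient step on squared error
        return float(np.linalg.norm(err))     # prediction error = surprise

def mixed_reward(extrinsic, surprise, eta=0.5):
    # Curiosity-augmented reward: sparse extrinsic signal plus a
    # prediction-error bonus; eta balances the two terms.
    return (1 - eta) * extrinsic + eta * surprise

model = ForwardModel(state_dim=2, action_dim=1)
state = rng.standard_normal(2)
action = rng.standard_normal(1)
next_state = state + 0.1 * action  # simple illustrative dynamics

surprise = model.update(state, action, next_state)
reward = mixed_reward(extrinsic=0.0, surprise=surprise)
```

As the forward model improves, the surprise term shrinks, so the bonus automatically fades in regions the agent already predicts well.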

Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey

TLDR
A typology of methods in which deep RL algorithms are trained to tackle the developmental robotics problem of autonomously acquiring open-ended repertoires of skills is proposed, at the intersection of deep RL and developmental approaches.

Hierarchical principles of embodied reinforcement learning: A review

TLDR
All important cognitive mechanisms have been implemented independently in isolated computational architectures, but approaches that integrate them appropriately are lacking; this gap should guide the development of more sophisticated, cognitively inspired hierarchical methods.

Reinforcement Learning with Time-dependent Goals for Robotic Musicians

TLDR
This paper addresses robotic musicianship by introducing a temporal extension to goal-conditioned reinforcement learning, time-dependent goals, and demonstrates that these can be used to train a robotic musician to play the theremin.

Robotic self-representation improves manipulation skills and transfer learning

TLDR
This paper develops a model, named multimodal BidAL, that learns bidirectional action-effect associations to encode representations of the body schema and the peripersonal space from multisensory information, and demonstrates that it significantly stabilizes learning-based problem-solving under noisy conditions and improves transfer learning of robotic manipulation skills.

Towered Actor Critic For Handling Multiple Action Types In Reinforcement Learning For Drug Discovery

TLDR
A novel framework, towered actor critic (TAC), is proposed to handle multiple action types; combined with TD3, it empirically obtains significantly better results than existing methods in the drug discovery setting.

The Embodied Crossmodal Self Forms Language and Interaction: A Computational Cognitive Review

TLDR
It is hypothesized that computational models of the self are well suited to modeling joint verbal and physical interaction, and that they have substantial potential to foster the psychological and cognitive understanding of language grounding.

Intelligent problem-solving as integrated hierarchical reinforcement learning

According to cognitive psychology and related disciplines, the development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms. Hierarchical

Hierarchical learning from human preferences and curiosity

TLDR
A novel hierarchical reinforcement learning method that introduces non-expert human preferences at the high level, and curiosity to drastically speed up the convergence of subpolicies toward their sub-goals, significantly reducing the amount of human effort required compared with standard imitation learning approaches.

Survey on reinforcement learning for language processing

TLDR
The state of the art of RL methods for different problems in NLP is reviewed, focusing primarily on conversational systems due to their growing relevance.

HAC Explore: Accelerating Exploration with Hierarchical Reinforcement Learning

TLDR
HAC Explore is the first RL method to solve a sparse-reward, continuous-control task requiring over 1,000 actions; on a set of difficult simulated robotics tasks it outperforms either component method on its own, as well as an existing approach to combining hierarchy and exploration.

References

Showing 1-10 of 32 references

FeUdal Networks for Hierarchical Reinforcement Learning

We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and

Meta-learning curiosity algorithms

TLDR
This work proposes a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains.

Language as an Abstraction for Hierarchical Deep Reinforcement Learning

TLDR
This paper introduces an open-source object interaction environment built using the MuJoCo physics engine and the CLEVR engine, and finds that, using the approach, agents can learn to solve diverse, temporally extended tasks such as object sorting and multi-object rearrangement, including from raw pixel observations.

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation

TLDR
h-DQN is presented, a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning, and allows for flexible goal specifications, such as functions over entities and relations.
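The h-DQN summary describes a two-level scheme: a meta-controller selects subgoals, and a lower-level controller is rewarded intrinsically for reaching them while the environment provides the extrinsic task reward. A minimal sketch of that control loop on a 1-D chain; the environment, policies, and all names are illustrative stand-ins, not the paper's actual components:

```python
import random

random.seed(0)

GOALS = [3, 7]   # candidate subgoal states on a 1-D chain
TARGET = 7       # extrinsic task: reach state 7

def low_level_policy(state, goal):
    # Greedy step toward the subgoal (stands in for a learned
    # goal-conditioned policy).
    return 1 if goal > state else -1

def run_episode(max_steps=20):
    state, extrinsic = 0, 0
    for _ in range(max_steps // 5):
        goal = random.choice(GOALS)      # meta-controller picks a subgoal
        for _ in range(5):               # controller acts toward it
            state += low_level_policy(state, goal)
            intrinsic = 1 if state == goal else 0  # intrinsic reward
            if state == TARGET:
                extrinsic = 1            # extrinsic (task) reward
            if intrinsic:
                break                    # subgoal reached; hand back control
        if extrinsic:
            break
    return state, extrinsic

final_state, task_reward = run_episode()
```

The two temporal scales are visible in the nested loops: the outer loop operates over subgoals, the inner loop over primitive actions.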

Curiosity-driven exploration enhances motor skills of continuous actor-critic learner

TLDR
This work investigates the role of incremental learning of predictive models in generating curiosity, an intrinsic motivation, for directing the agent's choice of action and proposes a curiosity-driven reinforcement learning algorithm for continuous motor control.

COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration

TLDR
This Curious Object-Based seaRch Agent (COBRA) uses task-free intrinsically motivated exploration and unsupervised learning to build object-based models of its environment and action space and can learn a variety of tasks through model-based search in very few steps and excel on structured hold-out tests of policy robustness.

Large-Scale Study of Curiosity-Driven Learning

TLDR
This paper performs the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite, and shows surprisingly good performance.

Curiosity-Driven Exploration by Self-Supervised Prediction

TLDR
This work formulates curiosity as the error in an agent's ability to predict the consequences of its own actions in a visual feature space learned by a self-supervised inverse dynamics model, which scales to high-dimensional continuous state spaces such as images, bypasses the difficulties of directly predicting pixels, and ignores aspects of the environment that cannot affect the agent.

Reinforcement Learning with Unsupervised Auxiliary Tasks

TLDR
This paper significantly outperforms the previous state of the art on Atari, averaging 880% expert human performance, and on a challenging suite of first-person, three-dimensional Labyrinth tasks, yielding a mean speedup in learning of 10x and averaging 87% expert human performance on Labyrinth.

Data-Efficient Hierarchical Reinforcement Learning

TLDR
This paper studies how to develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control.