Corpus ID: 219966005

Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning

@article{Zhang2020GeneratingAS,
  title={Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning},
  author={Tianren Zhang and Shangqi Guo and Tian Tan and Xiaolin Hu and Feng Chen},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.11485}
}
Goal-conditioned hierarchical reinforcement learning (HRL) is a promising approach for scaling up reinforcement learning (RL) techniques. However, it often suffers from training inefficiency because the action space of the high-level policy, i.e., the goal space, is large. Searching in a large goal space poses difficulties for both high-level subgoal generation and low-level policy learning. In this paper, we show that this problem can be effectively alleviated by restricting the high-level action… 
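The restriction sketched in the abstract admits a compact illustration. The following is a minimal sketch under stated assumptions, not the paper's implementation: it assumes a hypothetical adjacency network `adj_net`, trained (e.g., on trajectory data) to output a distance-like score that is small when a goal is reachable from a state within k low-level steps, and adds a hinge penalty that keeps generated subgoals inside that adjacent region.

```python
import torch
import torch.nn as nn

# Hypothetical adjacency network: scores whether goal g is reachable
# from state s within ~k low-level steps (small score = adjacent).
class AdjacencyNet(nn.Module):
    def __init__(self, state_dim: int, goal_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, goal], dim=-1)).squeeze(-1)


def adjacency_penalty(adj_net: AdjacencyNet,
                      states: torch.Tensor,
                      subgoals: torch.Tensor,
                      eps: float = 1.0) -> torch.Tensor:
    """Hinge penalty: zero while generated subgoals stay inside the
    estimated k-step adjacent region (score <= eps), growing linearly
    outside of it."""
    return torch.clamp(adj_net(states, subgoals) - eps, min=0.0).mean()

# During training, the penalty would simply be added to the ordinary
# high-level policy loss:
#   total_loss = policy_loss + lambda_adj * adjacency_penalty(adj_net, s, g)
```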

Hierarchical principles of embodied reinforcement learning: A review

Important cognitive mechanisms have all been implemented independently in isolated computational architectures, and there is simply a lack of approaches that integrate them appropriately; this gap should guide the development of more sophisticated, cognitively inspired hierarchical methods.

Adversarially Guided Subgoal Generation for Hierarchical Reinforcement Learning

This paper proposes a novel HRL approach for mitigating the non-stationarity by adversarially enforcing the high-level policy to generate subgoals compatible with the current instantiation of the low-level policy.

EAT-C: Environment-Adversarial sub-Task Curriculum for RL

A curriculum of tasks with coupled environments, generated by two policies trained jointly with RL, enables an easy-to-hard curriculum for every policy; EAT-C is compared with RL/planning methods targeting similar problems and with methods based on environment generators or adversarial agents.

MASER: Multi-Agent Reinforcement Learning with Subgoals Generated from Experience Replay Buffer

A novel method named MASER is proposed: MARL with subgoals generated from the experience replay buffer, which significantly outperforms other state-of-the-art MARL algorithms on the StarCraft II micromanagement benchmark.

Searching Latent Sub-Goals in Hierarchical Reinforcement Learning as Riemannian Manifold Optimization

Experiments on a series of MuJoCo tasks with visual observation show that the proposed Riemannian manifold optimization, compared with the baseline that directly searches for sub-goals in bounded latent space, improves the success rate by 1.5 times on average.

Learning a Distance Metric over Markov Decision Processes: A Thesis Presented in Partial Fulfillment of the Honors Bachelor’s Degree

This thesis learns the minimum distance between two states over all policies and uses the learned distance function for reward shaping to partially solve the high-dimensional MuJoCo AntMaze domain, which is challenging under sparse rewards.
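Used this way, a learned distance slots into standard potential-based reward shaping (Ng et al., 1999); the sketch below illustrates that generic scheme under an assumed learned function `dist(state, goal)`, not the thesis's exact formulation.

```python
def shaped_reward(r, s, s_next, goal, dist, gamma=0.99):
    """Potential-based reward shaping with a learned distance. The
    potential of a state is the negated learned distance to the goal, so
    transitions that move closer to the goal earn a positive bonus while
    the optimal policy of the original task is provably preserved.
    `dist(state, goal) -> float` is the assumed learned distance."""
    phi_s, phi_s_next = -dist(s, goal), -dist(s_next, goal)
    return r + gamma * phi_s_next - phi_s
```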

Generalizing to New Tasks via One-Shot Compositional Subgoals

CASE is introduced, which addresses imitation learning issues by training an imitation learning agent using adaptive “near-future” subgoals, and consistently outperforms the previous state-of-the-art compositional imitation learning approach.

Hierarchical Imitation Learning via Subgoal Representation Learning for Dynamic Treatment Recommendation

A novel subgoal-conditioned HIL framework (SHIL for short), in which a high-level policy sequentially sets a subgoal for each sub-task without prior knowledge, and a low-level policy is learned to reach each subgoal.

Intelligent problem-solving as integrated hierarchical reinforcement learning

According to cognitive psychology and related disciplines, the development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms.

References

Showing 1-10 of 50 references

Near-Optimal Representation Learning for Hierarchical Reinforcement Learning

Results on a number of difficult continuous-control tasks show that the proposed notion of sub-optimality of a representation, defined in terms of the expected reward of the optimal hierarchical policy using that representation, yields qualitatively better representations as well as quantitatively better hierarchical policies than existing methods.

Data-Efficient Hierarchical Reinforcement Learning

This paper studies how to develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control.

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation

h-DQN is presented, a framework that integrates hierarchical value functions operating at different temporal scales with intrinsically motivated deep reinforcement learning, and that allows for flexible goal specifications, such as functions over entities and relations.

Unsupervised Methods For Subgoal Discovery During Intrinsic Motivation in Model-Free Hierarchical Reinforcement Learning

This paper offers an original approach to HRL that does not require a model of the environment and is suitable for large-scale applications, demonstrating the efficiency of the method on two RL problems with sparse, delayed feedback.

Mapping State Space using Landmarks for Universal Goal Reaching

The method explicitly models the environment in a hierarchical manner, with a high-level dynamic landmark-based map abstracting the visited state space, and a low-level value network to derive precise local decisions that enable the agent to reach long-range goals at the early training stage.
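One way to read this design: the landmark map supports classical graph search, with edge costs supplied by low-level value estimates. The sketch below is a generic illustration under that assumption, not the paper's API; `edge_cost` is a hypothetical callable, e.g. derived from a goal-conditioned value function.

```python
import heapq

def plan_over_landmarks(landmarks, edge_cost, start, goal):
    """Dijkstra over a (here fully connected) landmark graph.
    `landmarks` is a list of comparable node ids including `start` and
    `goal`; `edge_cost(u, v)` is an assumed callable returning the
    estimated travel cost between two landmarks. Returns the landmark
    sequence from start to goal."""
    dist, prev = {start: 0.0}, {}
    heap, visited = [(0.0, start)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v in landmarks:
            if v not in visited and v != u:
                nd = d + edge_cost(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
    # Reconstruct the landmark path by walking back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```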

Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?

This work isolates and evaluates the claimed benefits of hierarchical RL on a suite of tasks encompassing locomotion, navigation, and manipulation and finds that most of the observed benefits of hierarchy can be attributed to improved exploration, as opposed to easier policy learning or imposed hierarchical structures.

Exploration via Hindsight Goal Generation

HGG is introduced, a novel algorithmic framework that generates valuable hindsight goals which are easy for an agent to achieve in the short term and which also serve to guide the agent toward the actual goal in the long term.
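The hindsight mechanism underlying this line of work is easiest to see in the simpler HER-style relabeling it builds on; the sketch below shows that simplification (with an assumed sparse reward on scalar states for brevity), not HGG's own goal-selection procedure.

```python
import random

def sparse_reward(state, goal, tol=0.05):
    """Assumed sparse goal-reaching reward: 0 on success, -1 otherwise
    (states and goals are 1-D floats here for brevity)."""
    return 0.0 if abs(state - goal) <= tol else -1.0

def relabel_with_hindsight(episode, k=4):
    """HER-style relabeling: alongside each original transition, store
    copies whose goal is a state actually reached later in the episode,
    so even failed rollouts yield reward-bearing training data.
    `episode` is a list of (state, action, next_state, goal) tuples."""
    out = []
    for t, (s, a, s_next, goal) in enumerate(episode):
        out.append((s, a, s_next, goal, sparse_reward(s_next, goal)))
        future = episode[t:]
        for _ in range(min(k, len(future))):
            # A future achieved state becomes the substitute goal.
            _, _, hindsight_goal, _ = random.choice(future)
            out.append((s, a, s_next, hindsight_goal,
                        sparse_reward(s_next, hindsight_goal)))
    return out
```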

Recent Advances in Hierarchical Reinforcement Learning

This work reviews several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed and discusses extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability.

Stochastic Neural Networks for Hierarchical Reinforcement Learning

This work proposes a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks, and uses Stochastic Neural Networks combined with an information-theoretic regularizer to efficiently pre-train a large span of skills.

Learning Multi-Level Hierarchies with Hindsight

A new Hierarchical Reinforcement Learning (HRL) framework that can overcome the instability issues that arise when agents try to jointly learn multiple levels of policies and is the first to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces.