Hierarchical Actor-Critic

@article{Levy2017HierarchicalA,
  title={Hierarchical Actor-Critic},
  author={Andrew Levy and Robert Platt and Kate Saenko},
  journal={ArXiv},
  year={2017},
  volume={abs/1712.00948}
}
We present a novel approach to hierarchical reinforcement learning called Hierarchical Actor-Critic (HAC). HAC aims to make learning tasks with sparse binary rewards more efficient by enabling agents to learn how to break down tasks from scratch. The technique uses a set of actor-critic networks that learn to decompose tasks into a hierarchy of subgoals. We demonstrate that HAC significantly improves sample efficiency in a series of tasks that involve sparse binary rewards and require…
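
To make the decomposition concrete, below is a minimal Python sketch of the kind of two-level control loop the abstract describes: a high-level actor-critic proposes subgoal states, and a low-level actor-critic gets a fixed budget of primitive actions to reach each subgoal, with sparse binary rewards at both levels. The ActorCritic placeholder, ToyEnv, the reached test, and the horizon and max_subgoals parameters are illustrative assumptions, not the authors' implementation.

import numpy as np


class ActorCritic:
    """Placeholder goal-conditioned actor-critic. A real agent would hold
    actor/critic networks and an off-policy update rule; this one acts
    randomly so the control flow below is runnable."""

    def __init__(self, action_dim):
        self.action_dim = action_dim
        self.replay = []  # (state, goal, action, reward, next_state) tuples

    def act(self, state, goal):
        # placeholder policy: random action in [-1, 1]^action_dim
        return np.random.uniform(-1.0, 1.0, self.action_dim)

    def store(self, transition):
        self.replay.append(transition)


def reached(state, goal, tol=0.1):
    """Sparse binary success test: is the state within tol of the goal?"""
    return np.linalg.norm(np.asarray(state) - np.asarray(goal)) < tol


class ToyEnv:
    """Hypothetical point-mass environment, only here to exercise the loop."""

    def __init__(self, dim=2):
        self.dim = dim

    def reset(self):
        self.state = np.zeros(self.dim)
        return self.state.copy(), np.ones(self.dim)  # (initial state, end goal)

    def step(self, action):
        self.state += 0.1 * np.clip(action, -1.0, 1.0)
        return self.state.copy()


def run_episode(env, high, low, horizon=10, max_subgoals=10):
    """Two-level rollout: `high` picks subgoal states, `low` pursues each
    subgoal for at most `horizon` primitive actions."""
    state, end_goal = env.reset()
    for _ in range(max_subgoals):
        subgoal = high.act(state, end_goal)  # subgoal lives in state space
        start = state
        for _ in range(horizon):
            action = low.act(state, subgoal)
            next_state = env.step(action)
            # sparse binary reward: 1 only when the subgoal is reached
            low.store((state, subgoal, action,
                       float(reached(next_state, subgoal)), next_state))
            state = next_state
            if reached(state, subgoal):
                break
        # sparse binary reward at the high level; the full method also uses
        # hindsight-style relabeled transitions, omitted here for brevity
        high.store((start, end_goal, subgoal,
                    float(reached(state, end_goal)), state))
        if reached(state, end_goal):
            break
    return state


env = ToyEnv()
final_state = run_episode(env, high=ActorCritic(2), low=ActorCritic(2))

The same loop nests naturally to more than two levels: each level treats the policy below it as its action primitive, which is what lets the hierarchy of subgoals be learned jointly from scratch.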


Citations

Publications citing this paper.

Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?

Ofir Nachum, Haoran Tang, +3 authors Sergey Levine
  • ArXiv
  • 2019
