Corpus ID: 6154895

Automatic Induction of MAXQ Hierarchies

Authors: N. Mehta, Mike Wynkoop, Soumya Ray, Prasad Tadepalli, T. Dietterich
Scaling up reinforcement learning to large domains requires leveraging the structure in the domain. Hierarchical reinforcement learning is one way in which domain structure is exploited to constrain the value function space of the learner and speed up learning [10, 3, 1]. In the MAXQ framework, for example, a task hierarchy is defined, and a set of relevant features is given to represent the completion function for each task-subtask pair [3], resulting in decomposed subtask…
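The completion functions mentioned in the abstract come from the MAXQ value decomposition (reference [3] below). A minimal sketch of that decomposition, Q(i, s, a) = V(a, s) + C(i, s, a), is given here; the task names and table-based value functions are hypothetical illustrations, and this does not show the hierarchy-induction algorithm that is the paper's contribution.

```python
# A minimal sketch of the MAXQ value decomposition. The tables V and C are
# hypothetical placeholders for learned subtask values and completion
# functions; the paper's topic is inducing the hierarchy itself (not shown).

def q_value(task, state, subtask, V, C):
    """Q(i, s, a) = V(a, s) + C(i, s, a): the value of invoking subtask a
    in state s, plus the completion function C(i, s, a), the expected
    reward for finishing parent task i after subtask a terminates."""
    return V[(subtask, state)] + C[(task, state, subtask)]

def v_value(task, state, children, V, C):
    """V(i, s) for a composite task: the Q-value of its best child."""
    return max(q_value(task, state, a, V, C) for a in children[task])

# Hypothetical two-level hierarchy in the style of the Taxi domain.
children = {"Root": ["Get", "Put"]}
V = {("Get", "s0"): 1.0, ("Put", "s0"): 0.5}                   # subtask values
C = {("Root", "s0", "Get"): 2.0, ("Root", "s0", "Put"): 3.0}   # completion fns

print(q_value("Root", "s0", "Get", V, C))     # 1.0 + 2.0 = 3.0
print(v_value("Root", "s0", children, V, C))  # max(3.0, 3.5) = 3.5
```

Because the value of a composite task recursively sums the values of the subtasks along a path of the hierarchy, each completion function only needs the features relevant to its own task-subtask pair, which is the abstraction the paper seeks to induce automatically.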
5 Citations

Automatic task decomposition and state abstraction from demonstration
Leveraging attention focus for effective reinforcement learning in complex domains
Learning MDP Action Models Via Discrete Mixture Trees
Automatic State Abstraction from Demonstration
State Abstraction as Compression in Apprenticeship Learning


References

Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition
Q-Cut - Dynamic Discovery of Sub-goals in Reinforcement Learning
Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning
Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density
Finding Structure in Reinforcement Learning
Using relative novelty to identify useful temporal abstractions in reinforcement learning
State abstraction for programmable reinforcement learning agents
Causal Graph Based Decomposition of Factored MDPs
PolicyBlocks: An Algorithm for Creating Useful Macro-Actions in Reinforcement Learning
Discovering Hierarchy in Reinforcement Learning with HEXQ