Corpus ID: 235262062

RePReL: Integrating Relational Planning and Reinforcement Learning for Effective Abstraction

@inproceedings{Kokel2021RePReLIR,
  title={RePReL: Integrating Relational Planning and Reinforcement Learning for Effective Abstraction},
  author={Harsha Kokel and Arjun Manoharan and Sriraam Natarajan and Balaraman Ravindran and Prasad Tadepalli},
  booktitle={ICAPS},
  year={2021}
}
State abstraction is necessary for better task transfer in complex reinforcement learning environments. Inspired by the benefit of state abstraction in MAXQ and building upon hybrid planner-RL architectures, we propose RePReL, a hierarchical framework that leverages a relational planner to provide useful state abstractions. Our experiments demonstrate that the abstractions enable faster learning and efficient transfer across tasks. More importantly, our framework enables the application of…
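
Since only the truncated abstract is available here, the following is a minimal sketch of the planner-plus-RL loop it describes, not the authors' implementation: the hard-coded plan, the keyword-based abstract_state, and the per-subtask Q-tables are all hypothetical stand-ins for RePReL's relational planner and abstraction machinery.

    # Hypothetical RePReL-style loop: a symbolic plan decomposes the goal into
    # subtasks, each subtask sees only a task-specific abstraction of the
    # state, and one learner is kept per subtask so that learned behavior
    # transfers to any task whose plan reuses that subtask.
    import random
    from collections import defaultdict

    def plan(goal):
        # Stand-in for the relational planner (hard-coded for illustration).
        return ["pick(obj1)", "move(room2)", "drop(obj1)"]

    def abstract_state(subtask, state):
        # Naive abstraction: keep only variables mentioning the subtask name.
        key = subtask.split("(")[0]
        return tuple(sorted((k, v) for k, v in state.items() if key in k))

    q_tables = defaultdict(lambda: defaultdict(float))  # one table per subtask

    def act(subtask, state, actions, eps=0.1):
        s = abstract_state(subtask, state)
        if random.random() < eps:
            return random.choice(actions)
        return max(actions, key=lambda a: q_tables[subtask][(s, a)])

The per-subtask abstraction is what the abstract credits for transfer: a policy learned for pick(obj1) in one task applies wherever the planner emits the same subtask.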

Citations

Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning
Experiments show that NSRTs can be learned after only tens or hundreds of training episodes, and then used for fast planning in new tasks that require up to 60 actions to reach the goal and involve many more objects than were seen during training.

References

Showing 1-10 of 38 references
State Abstraction in MAXQ Hierarchical Reinforcement Learning
This paper defines five conditions under which state abstraction can be combined with the MAXQ value function decomposition, proves that the MAXQ-Q learning algorithm converges under these conditions, and shows experimentally that state abstraction is important for the successful application of MAXQ-Q learning.
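
To make the decomposition concrete, here is a toy rendering of Q(i, s, a) = V(a, s) + C(i, s, a) with a per-subtask abstraction chi applied to both terms; the subtask names and tiny value tables are invented, and none of the paper's five safety conditions are verified here.

    # Toy MAXQ value decomposition with state abstraction (illustrative only):
    # Q(i, s, a) = V(a, chi(a, s)) + C(i, chi(i, s), a), where chi keeps just
    # the state variables relevant to each subtask.
    def chi(subtask, state):
        relevant = {"navigate": ("x", "y"),
                    "pickup": ("holding",),
                    "root": ("holding",)}
        return tuple(state[v] for v in relevant[subtask])

    V = {"pickup": {(False,): 0.5}}             # value of executing subtask a
    C = {"root": {((False,), "pickup"): -0.2}}  # completion value after a

    def q_value(i, s, a):
        return V[a][chi(a, s)] + C[i][(chi(i, s), a)]

    print(q_value("root", {"x": 0, "y": 0, "holding": False}, "pickup"))  # 0.3
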
State Abstractions for Lifelong Reinforcement Learning
It is shown that the joint family of transitive PAC abstractions can be acquired efficiently, preserves near-optimal behavior, and experimentally reduces sample complexity in simple domains, thereby yielding a family of desirable abstractions for use in lifelong reinforcement learning.
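
As a rough illustration of one such family (not the paper's exact construction), the sketch below aggregates states whose optimal Q-values agree within epsilon for every action; q_star is assumed to be a precomputed table.

    # Group states whose Q* values differ by at most eps on every action;
    # each cluster can then serve as a single abstract state with bounded
    # loss in behavior quality.
    def approx_q_abstraction(q_star, states, actions, eps=0.1):
        clusters = []
        for s in states:
            for cluster in clusters:
                rep = cluster[0]
                if all(abs(q_star[(s, a)] - q_star[(rep, a)]) <= eps
                       for a in actions):
                    cluster.append(s)
                    break
            else:
                clusters.append([s])
        return clusters

    q_star = {("s1", "a"): 1.0, ("s2", "a"): 1.05, ("s3", "a"): 2.0}
    print(approx_q_abstraction(q_star, ["s1", "s2", "s3"], ["a"]))
    # [['s1', 's2'], ['s3']]
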
Symbolic Plans as High-Level Instructions for Reinforcement Learning
Techniques from knowledge representation and reasoning are used as a framework for defining final-state goal tasks and automatically producing their corresponding reward functions; an empirical evaluation shows that the approach converges to near-optimal solutions faster than standard RL and HRL methods.
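
A tiny sketch of the core move, turning a declarative final-state goal into a reward function automatically; the literal syntax and dictionary state encoding are invented for illustration.

    # Derive a reward function from a final-state goal: reward 1.0 exactly in
    # states that satisfy every goal literal, 0.0 elsewhere.
    def reward_from_goal(goal_literals):
        def reward(state):
            return 1.0 if all(state.get(lit) is val
                              for lit, val in goal_literals) else 0.0
        return reward

    r = reward_from_goal([("on(a,b)", True), ("clear(a)", True)])
    print(r({"on(a,b)": True, "clear(a)": True, "on(b,c)": False}))  # 1.0
    print(r({"on(a,b)": False}))                                     # 0.0
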
Combining Reinforcement Learning with Symbolic Planning
This paper proposes PLANQ-learning, a method that couples a Q-learner with a STRIPS planner and shows significant improvements in scaling behaviour as the state space grows larger, compared to both standard Q-learning and hierarchical Q-learning methods.
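
To show the shape of such a coupling, here is a toy version under invented assumptions: a fixed list of subgoals on a 1-D chain stands in for the STRIPS planner's output, and one tabular Q-learner per subgoal is rewarded only for reaching it.

    # PLANQ-style sketch: the "plan" supplies subgoals, and each subgoal gets
    # its own Q-learner, so low-level credit assignment is bounded by a plan
    # step rather than the whole task.
    import random
    from collections import defaultdict

    class QLearner:
        def __init__(self, actions, alpha=0.5, gamma=0.9, eps=0.2):
            self.q = defaultdict(float)
            self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps
        def choose(self, s):
            if random.random() < self.eps:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(s, a)])
        def update(self, s, a, r, s2):
            best = max(self.q[(s2, b)] for b in self.actions)
            self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

    plan = [("at", 3), ("at", 0)]                    # stand-in STRIPS plan
    learners = {g: QLearner(actions=[-1, +1]) for g in plan}

    for _ in range(200):                             # episodes
        pos = 1
        for goal in plan:                            # follow the plan in order
            for _ in range(20):                      # low-level steps
                a = learners[goal].choose(pos)
                nxt = min(3, max(0, pos + a))
                r = 1.0 if nxt == goal[1] else -0.1
                learners[goal].update(pos, a, r, nxt)
                pos = nxt
                if pos == goal[1]:
                    break
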
State abstraction for programmable reinforcement learning agents
This paper explores safe state abstraction in hierarchical reinforcement learning, where learned behaviors must conform to a given partial, hierarchical program, and shows how to achieve this for a partial programming language that is essentially Lisp augmented with nondeterministic constructs.
Modular Multitask Reinforcement Learning with Policy Sketches
Experiments show that using the approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
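
The modularity is easy to show in a few lines; the tasks, sketch symbols, and lookup-table subpolicies below are invented stand-ins for the paper's jointly trained neural subpolicies.

    # Policy-sketch-style modularity: tasks share a library of subpolicies
    # indexed by sketch symbols, so a symbol appearing in two sketches is
    # trained on, and reused by, both tasks.
    from collections import defaultdict

    subpolicies = defaultdict(dict)   # symbol -> {state: action}
    task_sketches = {"make_plank": ["get_wood", "use_saw"],
                     "make_stick": ["get_wood", "use_bench"]}

    def act(task, stage, state, default="explore"):
        symbol = task_sketches[task][stage]         # current sketch symbol
        return subpolicies[symbol].get(state, default)

    subpolicies["get_wood"]["at_tree"] = "chop"     # learned once...
    print(act("make_plank", 0, "at_tree"))          # chop
    print(act("make_stick", 0, "at_tree"))          # ...reused by both tasks
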
From Semantics to Execution: Integrating Action Planning With Reinforcement Learning for Robotic Causal Problem-Solving
The paper demonstrates how reward sparsity can serve as a bridge between the high-level and low-level state and action spaces, and shows that the integrated method can solve robotic tasks involving non-trivial causal dependencies under noisy conditions, exploiting both data and knowledge.
SDRL: Interpretable and Data-efficient Deep Reinforcement Learning Leveraging Symbolic Planning
This paper introduces symbolic planning into DRL, proposing a Symbolic Deep Reinforcement Learning (SDRL) framework that handles both high-dimensional sensory inputs and symbolic planning; experimental results validate the interpretability of subtasks, along with improved data efficiency compared with state-of-the-art approaches.
Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
h-DQN is presented, a framework that integrates hierarchical value functions operating at different temporal scales with intrinsically motivated deep reinforcement learning, and allows for flexible goal specifications such as functions over entities and relations.
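
The two-level structure can be sketched as below, with tabular one-step updates standing in for the paper's DQNs and a 5-state chain standing in for the environment; everything here is a toy assumption, including the goal set and reward placement.

    # h-DQN-spirit loop: a meta-controller picks a goal, the controller acts
    # toward it under an intrinsic reward (goal reached or not), and the
    # meta-controller is updated from the accumulated extrinsic reward.
    import random
    from collections import defaultdict

    q_meta = defaultdict(float)   # (state, goal) -> value
    q_ctrl = defaultdict(float)   # (goal, state, action) -> value
    GOALS, ACTIONS = [0, 4], [-1, +1]

    def eps_greedy(table, keys, eps=0.2):
        if random.random() < eps:
            return random.choice(keys)
        return max(keys, key=lambda k: table[k])

    for _ in range(300):
        s, extrinsic = 2, 0.0
        s0, goal = s, eps_greedy(q_meta, [(s, g) for g in GOALS])[1]
        for _ in range(10):
            a = eps_greedy(q_ctrl, [(goal, s, b) for b in ACTIONS])[2]
            s2 = min(4, max(0, s + a))
            intrinsic = 1.0 if s2 == goal else 0.0    # controller's reward
            q_ctrl[(goal, s, a)] += 0.5 * (intrinsic - q_ctrl[(goal, s, a)])
            extrinsic += 1.0 if s2 == 4 else 0.0      # environment's reward
            s = s2
            if intrinsic:
                break
        q_meta[(s0, goal)] += 0.5 * (extrinsic - q_meta[(s0, goal)])
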
Deep reinforcement learning with relational inductive biases
The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases, which can offer advantages in efficiency, generalization, and interpretability.
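
As a sketch of what a relational inductive bias often amounts to in this line of work, below is a single-head self-attention pass over per-entity feature vectors in NumPy; the dimensions, random initialization, and single-head form are illustrative assumptions, not the paper's exact architecture.

    # Relational block: every entity attends to every other entity, so the
    # output features encode pairwise relations rather than a flat state.
    import numpy as np

    def relational_block(entities, d_k=8, rng=np.random.default_rng(0)):
        n, d = entities.shape
        Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
        Q, K, V = entities @ Wq, entities @ Wk, entities @ Wv
        scores = Q @ K.T / np.sqrt(d_k)
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        return attn @ V

    print(relational_block(np.ones((4, 6))).shape)    # (4, 8)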