Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks

@article{Nasiriany2022AugmentingRL,
  title={Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks},
  author={Soroush Nasiriany and Huihan Liu and Yuke Zhu},
  journal={2022 International Conference on Robotics and Automation (ICRA)},
  year={2022},
  pages={7477-7484}
}
Realistic manipulation tasks require a robot to interact with an environment over a prolonged sequence of motor actions. While deep reinforcement learning methods have recently emerged as a promising paradigm for automating manipulation behaviors, they usually fall short in long-horizon tasks due to the exploration burden. This work introduces Manipulation Primitive-augmented reinforcement Learning (MAPLE), a learning framework that augments standard reinforcement learning algorithms with a pre-defined library of behavior primitives.
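The core mechanism can be pictured as an augmented action space: the policy first selects a behavior primitive, then the continuous parameters that instantiate it, and the primitive runs to completion before the next decision. The sketch below is a minimal Python illustration of that idea; all names (PRIMITIVES, controller_for, select_primitive, select_parameters) are assumptions for exposition, not the paper's actual API.

    # Minimal sketch of a primitive-augmented action space, in the spirit
    # of MAPLE. All identifiers below are illustrative assumptions.

    # Library of behavior primitives: each maps a small parameter vector
    # (e.g., a target pose) to a closed-loop sequence of low-level actions.
    PRIMITIVES = {
        0: ("reach", 3),   # (name, parameter dimension)
        1: ("grasp", 4),
        2: ("push", 6),
        3: ("atomic", 7),  # fallback: a single low-level motor action
    }

    def execute_primitive(env, prim_id, params):
        """Run one primitive to completion; return final obs, summed reward, done."""
        name, dim = PRIMITIVES[prim_id]
        obs, total_reward, done = None, 0.0, False
        for action in env.controller_for(name, params[:dim]):  # assumed helper
            obs, reward, done, _ = env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done

    def rollout(env, policy, horizon=50):
        """Hierarchical rollout: at each temporally extended decision step,
        the policy picks a primitive and the parameters instantiating it."""
        obs = env.reset()
        for _ in range(horizon):
            prim_id = policy.select_primitive(obs)           # discrete head
            params = policy.select_parameters(obs, prim_id)  # continuous head
            obs, reward, done = execute_primitive(env, prim_id, params)
            if done:
                break

Because each decision spans many low-level control steps, the effective horizon the RL agent must explore is much shorter, which is the mechanism addressing the exploration burden noted above.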

Citations

Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives

A simple change to the action interface between the RL algorithm and the robot substantially improves both learning efficiency and task performance irrespective of the underlying RL algorithm, significantly outperforming prior methods that learn skills from offline expert data.
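The cited interface change can be sketched as an environment wrapper, so that any off-the-shelf RL algorithm trains on top unchanged. The gym wrapper below is a hypothetical illustration, not the paper's actual code; the flat action layout and the primitive signature are assumptions.

    import gym
    import numpy as np

    class PrimitiveActionWrapper(gym.Wrapper):
        """Hypothetical wrapper: the agent emits
        [one-hot primitive choice | shared parameter slots], and the wrapper
        unrolls the chosen primitive into low-level environment steps."""

        def __init__(self, env, primitives, param_dim):
            super().__init__(env)
            # primitives: callables (env, params) -> iterable of low-level actions
            self.primitives = primitives
            self.action_space = gym.spaces.Box(
                low=-1.0, high=1.0, shape=(len(primitives) + param_dim,))

        def step(self, action):
            k = len(self.primitives)
            primitive = self.primitives[int(np.argmax(action[:k]))]
            obs, total_reward, done, info = None, 0.0, False, {}
            for low_level_action in primitive(self.env, action[k:]):
                obs, reward, done, info = self.env.step(low_level_action)
                total_reward += reward
                if done:
                    break
            return obs, total_reward, done, info

Keeping the change inside the wrapper is what makes the reported gains independent of the underlying RL algorithm.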

Guided Skill Learning and Abstraction for Long-Horizon Manipulation

This work proposes an integrated task planning and skill learning framework named LEAGUE (Learning and Abstraction with Guidance), which leverages the symbolic interface of a task planner to guide RL-based skill learning and creates an abstract state space to enable skill reuse.

Towards Factory-Scale Edge Robotic Systems: Challenges and Research Directions

This work discusses technical challenges in the context of two Edge Robotics use cases, conveyor object pick-up and robot navigation, which are representative of time-critical control in IoT applications.

Active Task Randomization: Learning Visuomotor Skills for Sequential Manipulation by Proposing Feasible and Novel Tasks

This work introduces Active Task Randomization (ATR), an approach that learns visuomotor skills for sequential manipulation by automatically creating feasible and novel tasks in simulation, using a relational neural network that maps each task parameter into a compact embedding.

Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation

This paper proposes PlAnning with Spatial and Temporal Abstraction (PASTA), which incorporates both spatial abstraction (reasoning about objects and their relations to each other) and temporal abstraction (reasoning over skills instead of low-level actions).

TAPS: Task-Agnostic Policy Sequencing

This work presents Task-Agnostic Policy Sequencing (TAPS), a scalable framework for training manipulation primitives and coordinating their geometric dependencies at plan-time to solve long-horizon tasks never seen by any primitive during training.

Learning to Walk by Steering: Perceptive Quadrupedal Locomotion in Dynamic Environments

This work presents PRELUDE, a hierarchical learning framework that decomposes the problem of perceptive locomotion into high-level decision-making, which predicts navigation commands, and low-level gait generation, which realizes the target commands.

Sampling Through the Lens of Sequential Decision Making

This work applies Adaptive Sample with Reward (ASR), a reward-guided sampling framework, to long-standing sampling problems in similarity-based loss functions, and explores geographical relationships among samples through distance-based sampling to maximize the overall cumulative reward.

Causal Dynamics Learning for Task-Independent State Abstraction

This paper introduces Causal Dynamics Learning for Task-Independent State Abstraction (CDL), which first learns a theoretically proven causal dynamics model that removes unnecessary dependencies between state variables and the action, thus generalizing well to unseen states.
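One way to picture the mechanism is a per-variable dynamics model gated by a learned causal mask. The sketch below is an assumption-laden illustration; the mask-learning procedure and CDL's actual architecture are not shown.

    import numpy as np

    def predict_next_state(state, action, mask, per_variable_models):
        """mask[i, j] = 1 iff input j (a state variable or action dimension)
        is a causal parent of predicted state variable i."""
        inputs = np.concatenate([state, action])
        next_state = np.empty(len(state))
        for i, model in enumerate(per_variable_models):
            # Zero out non-parents so spurious dependencies cannot
            # influence the prediction for variable i.
            next_state[i] = model(inputs * mask[i])
        return next_state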

Exploring with Sticky Mittens: Reinforcement Learning with Expert Interventions via Option Templates

This work proposes a framework for leveraging expert interventions by allowing the agent to execute option templates before learning their implementations, and evaluates the approach on three challenging reinforcement learning problems, showing that it outperforms state-of-the-art approaches by two orders of magnitude.
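An option template can be pictured as an expert-backed placeholder the agent may invoke as a single action before it has learned the option's own implementation. The sketch below is purely illustrative; every name in it (apply_expert_option, is_option_template, and so on) is hypothetical.

    def step_with_templates(env, agent, obs):
        choice = agent.act(obs)
        if choice.is_option_template:
            # The expert executes the option's effect directly (the "sticky
            # mittens"), giving the agent high-level reward signal without
            # first solving the hard low-level exploration problem.
            obs, reward, done = env.apply_expert_option(choice.option_id)
        else:
            obs, reward, done, _ = env.step(choice.low_level_action)
        return obs, reward, done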

References

Showing 1-10 of 81 references

Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives

A simple change to the action interface between the RL algorithm and the robot substantially improves both learning efficiency and task performance irrespective of the underlying RL algorithm, significantly outperforming prior methods that learn skills from offline expert data.

Learning compositional models of robot skills for task and motion planning

This work uses Gaussian process methods for learning the constraints on skill effectiveness from small numbers of expensive-to-collect training examples and develops efficient adaptive sampling methods for generating a comprehensive and diverse sequence of continuous candidate control parameter values during planning.

Learning to combine primitive skills: A step towards versatile robotic manipulation

This work aims to overcome previous limitations by proposing a reinforcement learning (RL) approach to task planning that learns to combine primitive skills, together with an efficient training scheme that learns basic skills from a few synthetic demonstrations by exploring recent CNN architectures and data augmentation.

Overcoming Exploration in Reinforcement Learning with Demonstrations

This work uses demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm.

Efficient Bimanual Manipulation Using Learned Task Schemas

It is shown that explicitly modeling the schema's state independence can yield significant improvements in sample efficiency for model-free reinforcement learning algorithms, and that learned schemas can be transferred to solve related tasks by simply re-learning the parameterizations with which the skills are invoked.

MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale

A scalable and intuitive framework for specifying new tasks through user-provided examples of desired outcomes, a multi-robot collective learning system that simultaneously collects experience for multiple tasks, and a scalable and generalizable multi-task deep reinforcement learning method called MT-Opt are developed.

ReLMoGen: Integrating Motion Generation in Reinforcement Learning for Mobile Manipulation

It is argued that, by lifting the action space and leveraging sampling-based motion planners, RL can efficiently solve complex, long-horizon tasks that could not be solved with existing RL methods in the original action space.

QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation

QT-Opt is introduced, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects.

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations

This work shows that model-free DRL with natural policy gradients can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments.

Parrot: Data-Driven Behavioral Priors for Reinforcement Learning

This paper proposes a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials from a wide range of previously seen tasks, and shows how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
...