RMP2: A Structured Composable Policy Class for Robot Learning

@article{Li2021RMP2AS,
  title={RMP2: A Structured Composable Policy Class for Robot Learning},
  author={Anqi Li and Ching-An Cheng and Muhammad Asif Rana and Mandy Xie and Karl Van Wyk and Nathan D. Ratliff and Byron Boots},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.05922}
}
We consider the problem of learning motion policies for acceleration-based robotics systems with a structured policy class. We leverage a multi-task control framework called RMPflow, which has been successfully applied to many robotics problems. Using RMPflow as a structured policy class in learning has several benefits, such as sufficient expressiveness, the flexibility to inject different levels of prior knowledge, and the ability to transfer policies between robots. However…
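
The composition at the heart of RMPflow can be illustrated with a minimal sketch. Each subtask supplies an acceleration a_i together with a Riemannian metric M_i, and policies sharing one space resolve to a metric-weighted average; the function and the 2-D example below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def combine_rmps(accels, metrics):
    """Metric-weighted resolution of RMPs (a_i, M_i) defined in a common
    space: a = (sum_i M_i)^+ (sum_i M_i a_i). A subtask whose metric is
    large in some direction dominates the motion in that direction."""
    M_sum = sum(metrics)
    f_sum = sum(M @ a for M, a in zip(metrics, accels))
    return np.linalg.pinv(M_sum) @ f_sum

# Hypothetical 2-D example: one subtask cares only about x, one only about y.
a1, M1 = np.array([1.0, 0.0]), np.diag([4.0, 0.0])
a2, M2 = np.array([0.0, 2.0]), np.diag([0.0, 1.0])
a = combine_rmps([a1, a2], [M1, M2])  # -> [1.0, 2.0]
```

Because each subtask's metric vanishes on the axis it does not care about, the combined acceleration simply stitches the two subtask commands together.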


Imitation Learning via Simultaneous Optimization of Policies and Auxiliary Trajectories

TLDR
A novel imitation learning technique called Collocation for Demonstration Encoding (CoDE) that operates on only a fixed set of trajectory demonstrations, which generalizes well and more accurately reproduces the demonstrated behavior with fewer guiding trajectories when compared to standard behavioral cloning methods.

RMPs for Safe Impedance Control in Contact-Rich Manipulation

TLDR
This work shows how to combine Riemannian Motion Policies, a class of policies that dynamically generate motion in the presence of safety and collision constraints, with variable impedance operation-space control to learn safer contact-rich manipulation behaviors.

Geometric Fabrics: Generalizing Classical Mechanics to Capture the Physics of Behavior

TLDR
This work develops the theory of fabrics and presents both a collection of controlled experiments examining their theoretical properties and a set of robot system experiments showing improved performance over a well-engineered and hardened implementation of RMPs, the current state-of-the-art in controller design.

Riemannian Motion Policies for Safer Variable Impedance Control

  • Computer Science
  • 2021
TLDR
Riemannian Motion Policies, a class of policies that dynamically generate motion in the presence of safety and collision constraints, can be combined with variable impedance operation-space control to learn safer contact-rich manipulation behaviors.

References

SHOWING 1-10 OF 41 REFERENCES

Towards Coordinated Robot Motions: End-to-End Learning of Motion Policies on Transform Trees

TLDR
This paper addresses the challenge of learning motion policies to generate motions for execution of such tasks by decomposing a motion policy into multiple subtask policies, whereby each subtask policy dictates a particular subtask behavior.

RMPflow: A Geometric Framework for Generation of Multitask Motion Policies

TLDR
A novel policy synthesis algorithm, Riemannian motion policy (RMP)flow, based on geometrically consistent transformations of RMPs is developed, which can engender natural behavior that adapts instantaneously to changing surroundings with zero planning while performing manipulation tasks.

Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping

TLDR
It is proved that, under mild restrictions on the weight functions, RMPfusion always yields a globally Lyapunov-stable motion policy, which implies that RMPfusion can be treated as a structured policy class in policy optimization that is guaranteed to generate stable policies, even during the immature phase of learning.

Multi-Objective Policy Generation for Multi-Robot Systems Using Riemannian Motion Policies

TLDR
This paper focuses on multi-objective tasks that can be decomposed into a set of simple subtasks, and adopts Riemannian Motion Policies (RMPs), and proposes a collection of RMPs for common multi-robot subtasks.

Learning Reactive Motion Policies in Multiple Task Spaces from Human Demonstrations

TLDR
This work decomposes a task into multiple subtasks and learns to reproduce the subtasks by learning stable policies by leveraging the RMPflow framework for motion generation, and finds a stable global policy in the configuration space that enables simultaneous execution of various learned subtasks.

RMPflow: A Computational Graph for Automatic Motion Policy Generation

TLDR
A novel policy synthesis algorithm, RMPflow, based on geometrically consistent transformations of Riemannian Motion Policies, which can consistently combine these local policies to generate an expressive global policy, while simultaneously exploiting sparse structure for computational efficiency.
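
The geometrically consistent transformation mentioned above is, at its core, the pullback of a task-space RMP through a task map x = phi(q). A minimal sketch, assuming a fixed Jacobian and omitting the curvature term (Jdot @ qdot), which vanishes for linear maps:

```python
import numpy as np

def pullback(J, a_task, M_task):
    """Pull a task-space RMP (a, M) back to configuration space through
    a task map with Jacobian J: metric J^T M J, force J^T M a. The
    curvature correction Jdot @ qdot is omitted here for brevity."""
    M_q = J.T @ M_task @ J       # pullback metric
    f_q = J.T @ M_task @ a_task  # pullback force
    return np.linalg.pinv(M_q) @ f_q, M_q

# Hypothetical task map selecting the first of two joints.
J = np.array([[1.0, 0.0]])
a_q, M_q = pullback(J, np.array([2.0]), np.array([[1.0]]))  # a_q -> [2.0, 0.0]
```

The pseudoinverse resolve leaves the null space of the task map (here, the second joint) unconstrained, which is what lets other subtasks' pullbacks fill it in.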

Stable, Concurrent Controller Composition for Multi-Objective Robotic Tasks

TLDR
This paper adopts Riemannian Motion Policies, a recently proposed controller structure in robotics, and RMPflow, its associated computational framework for combining RMP controllers, and shows that RMPflow can stably combine individually designed subtask controllers that satisfy certain CLF constraints.

Safe Exploration in Continuous Action Spaces

TLDR
This work addresses the problem of deploying a reinforcement learning agent on a physical system such as a datacenter cooling unit or robot, where critical constraints must never be violated, and directly adds to the policy a safety layer that analytically solves an action correction formulation for each state.
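
For a single linearized constraint the analytic action correction has a closed form. A hedged sketch, assuming the constraint is modeled as c + g @ a <= 0 around the current state (the function name and example are illustrative):

```python
import numpy as np

def safe_action(action, g, c):
    """Closed-form correction for one linearized safety constraint
    c + g @ a <= 0: if the raw action violates it, subtract the minimal
    multiple of g that restores feasibility (a projection of the action
    onto the constraint half-space)."""
    violation = (g @ action + c) / (g @ g)
    return action - max(0.0, violation) * g

a = safe_action(np.array([1.0, 0.0]), g=np.array([1.0, 0.0]), c=0.5)
# a -> [-0.5, 0.0]; the constraint is now active: 0.5 + g @ a == 0
```

When the raw action is already feasible the correction is zero, so the safety layer is a no-op away from the constraint boundary.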

High-Dimensional Continuous Control Using Generalized Advantage Estimation

TLDR
This work addresses the large number of samples typically required and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias.
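
The variance-reduction estimator summarized here reduces to a compact backward recursion over TD residuals. A minimal sketch, with gamma and lam as the usual discount and GAE smoothing parameters:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation: an exponentially weighted sum
    of TD residuals delta_t = r_t + gamma*V(s_{t+1}) - V(s_t), computed
    in one backward pass. `values` carries one extra bootstrap entry
    for the state after the final reward."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

# With gamma = lam = 1 the estimate reduces to full returns minus V(s_t).
adv = gae([1.0, 1.0], [0.0, 0.0, 0.0], gamma=1.0, lam=1.0)  # -> [2.0, 1.0]
```

Setting lam = 0 recovers the one-step TD residual (low variance, high bias); lam = 1 recovers Monte Carlo returns minus the baseline (high variance, low bias).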

Residual Reinforcement Learning for Robot Control

TLDR
This paper studies how to solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and the residual which is solved with RL.