Towards Coordinated Robot Motions: End-to-End Learning of Motion Policies on Transform Trees

@article{Rana2021TowardsCR,
  title={Towards Coordinated Robot Motions: End-to-End Learning of Motion Policies on Transform Trees},
  author={Muhammad Asif Rana and Anqi Li and Dieter Fox and Sonia Chernova and Byron Boots and Nathan D. Ratliff},
  journal={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2021},
  pages={7792-7799}
}
  • Muhammad Asif Rana, Anqi Li, Dieter Fox, Sonia Chernova, Byron Boots, Nathan D. Ratliff
  • Published 24 December 2020
  • Computer Science
  • 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Generating robot motion that fulfills multiple tasks simultaneously is challenging due to the geometric constraints imposed on the robot. In this paper, we propose to solve multi-task problems through learning structured policies from human demonstrations. Our structured policy is inspired by RMPflow, a framework for combining subtask policies on different spaces. The policy structure provides the user with an interface to 1) specify the spaces that are directly relevant to the completion of the…
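
As a rough sketch of what such a structured policy computes, the toy example below pulls two subtask policies back to the configuration space through their task-map Jacobians and resolves them by metric-weighted least squares. The task maps, gains, and metrics are illustrative stand-ins, not the paper's learned policy:

```python
import numpy as np

def pullback(J, M, f, curv):
    """Pull a subtask policy (desired acceleration f, metric M) back
    through the task map's Jacobian J; curv is the term Jdot @ qdot."""
    return J.T @ (M @ (f - curv)), J.T @ M @ J

def resolve(pulled):
    """Metric-weighted least-squares combination of pulled-back policies."""
    f = sum(p[0] for p in pulled)
    M = sum(p[1] for p in pulled)
    return np.linalg.pinv(M) @ f  # configuration-space acceleration

# Toy 2-DoF point robot state.
q, qd = np.array([0.5, -0.2]), np.array([0.0, 0.1])

# Subtask 1: PD attractor toward a goal, identity task map.
J1, M1 = np.eye(2), np.eye(2)
f1 = 5.0 * (np.array([1.0, 1.0]) - q) - 2.0 * qd

# Subtask 2: damp motion along the first coordinate only.
J2, M2 = np.array([[1.0, 0.0]]), np.array([[2.0]])
f2 = np.array([-4.0 * qd[0]])

# Both task maps are linear here, so the curvature terms vanish.
qdd = resolve([pullback(J1, M1, f1, np.zeros(2)),
               pullback(J2, M2, f2, np.zeros(1))])
print(qdd)
```
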
1 Citation

RMP2: A Structured Composable Policy Class for Robot Learning

TLDR
The message-passing algorithm of RMPflow is reexamined and a more efficient alternative, RMP2, is proposed, which uses modern automatic differentiation tools (such as TensorFlow and PyTorch) to compute RMPflow policies.
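
As a minimal illustration of that idea, and not of the RMP2 algorithm itself, the sketch below uses PyTorch's autodiff to obtain a task-map Jacobian and pull a hand-picked leaf policy back to the configuration space; the polar-coordinate task map and attractor are assumptions for the example:

```python
import torch
from torch.autograd.functional import jacobian

# Illustrative nonlinear task map (polar coordinates); the point is that
# autodiff supplies its Jacobian with no hand-derived math.
def phi(q):
    return torch.stack([torch.linalg.norm(q), torch.atan2(q[1], q[0])])

q = torch.tensor([0.8, 0.3])
J = jacobian(phi, q)                 # task-map Jacobian via autodiff

M_leaf = torch.eye(2)                # leaf-space metric (illustrative)
f_leaf = -phi(q)                     # simple attractor in the leaf space
qdd = torch.linalg.solve(J.T @ M_leaf @ J, J.T @ (M_leaf @ f_leaf))
print(qdd)
```
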

References

Showing 1–10 of 38 references

Learning Reactive Motion Policies in Multiple Task Spaces from Human Demonstrations

TLDR
This work decomposes a task into multiple subtasks and learns stable policies that reproduce each subtask; leveraging the RMPflow framework for motion generation, it finds a stable global policy in the configuration space that enables simultaneous execution of the learned subtasks.

Guiding Trajectory Optimization by Demonstrated Distributions

TLDR
This letter learns a distribution over trajectories demonstrated by human experts and uses it to guide trajectory optimization, so that the optimized trajectory avoids obstacles while encoding the demonstrated behavior.
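
A minimal sketch of this guidance idea, with made-up numbers and a hypothetical point obstacle: gradient descent on a cost that combines the demonstrated distribution's negative log-likelihood with an obstacle-clearance penalty:

```python
import numpy as np

# Demonstrated distribution: per-waypoint mean and variance, as if fit
# from expert demonstrations (values here are invented).
mu = np.linspace([0.0, 0.0], [1.0, 1.0], 20)    # 20 waypoints in R^2
var = 0.05
obstacle, radius = np.array([0.5, 0.45]), 0.15

def grad(xi):
    g = (xi - mu) / var                      # pull toward the demo distribution
    d = xi - obstacle
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    g -= np.where(dist < radius, 10.0 * d / (dist + 1e-9), 0.0)  # push off obstacle
    return g

xi = mu.copy()
for _ in range(500):                         # plain gradient descent
    xi -= 0.01 * grad(xi)
```
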

Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable Dynamical Systems

TLDR
This work presents an approach to learn such motions from a limited number of human demonstrations by exploiting the regularity properties of human motion, e.g. stability, smoothness, and boundedness, through a composition of simple parameterized diffeomorphisms.
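
A minimal sketch of the underlying construction, substituting a hand-picked diffeomorphism for the learned composition: globally stable latent dynamics are pulled back through the map, so the resulting dynamics inherit stability with equilibrium at psi^{-1}(0):

```python
import numpy as np

# Hand-picked diffeomorphism standing in for the learned composition.
def psi(x):   return np.array([x[0], x[1] + np.tanh(x[0])])
def J_psi(x): return np.array([[1.0, 0.0],
                               [1.0 - np.tanh(x[0])**2, 1.0]])

def xdot(x):
    """Latent dynamics ydot = -y (globally stable), pulled back through
    psi: xdot = J_psi(x)^{-1} ydot, so stability carries over."""
    return np.linalg.solve(J_psi(x), -psi(x))

x = np.array([1.5, -0.7])
for _ in range(200):          # Euler rollout converges to psi^{-1}(0) = 0
    x = x + 0.05 * xdot(x)
print(x)                      # approximately [0, 0]
```
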

Probabilistic Prioritization of Movement Primitives

TLDR
This letter combines Bayesian task prioritization with probabilistic movement primitives (ProMPs) to prioritize full motion sequences learned from demonstrations, and demonstrates how task priorities can be obtained from imitation learning and how different primitives can be combined to solve even unseen task combinations.
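
The probabilistic combination at the heart of such prioritization can be sketched as a product of priority-scaled Gaussians; all numbers below are illustrative, not from the paper:

```python
import numpy as np

# Per-primitive waypoint distributions at one time step (illustrative).
mu1, S1 = np.array([0.9, 0.1]), np.diag([0.01, 0.40])  # confident in x
mu2, S2 = np.array([0.2, 0.8]), np.diag([0.50, 0.02])  # confident in y

# Priorities scale each primitive's precision.
a1, a2 = 1.0, 1.0
P1, P2 = a1 * np.linalg.inv(S1), a2 * np.linalg.inv(S2)

# Product of scaled Gaussians: precision-weighted combination.
mu = np.linalg.solve(P1 + P2, P1 @ mu1 + P2 @ mu2)
print(mu)   # ~[0.89, 0.77]: each primitive dominates where it is confident
```
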

Towards Robust Skill Generalization: Unifying Learning from Demonstration and Motion Planning

TLDR
This paper provides a new probabilistic skill model based on a stochastic dynamical system that requires minimal parameter tuning to learn, is suitable for encoding skill constraints, and allows efficient inference.

RMPflow: A Computational Graph for Automatic Motion Policy Generation

TLDR
A novel policy synthesis algorithm, RMPflow, based on geometrically consistent transformations of Riemannian Motion Policies, is proposed; it consistently combines these local policies to generate an expressive global policy while exploiting sparse structure for computational efficiency.
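
The tree-structured combination can be sketched as a recursive backward pass. The fragment below is only an illustration of the message-passing structure, not RMPflow itself: Jacobians are constant and leaves store hand-picked (force, metric) pairs:

```python
import numpy as np

class Node:
    """One node of a transform tree; J is the Jacobian of the edge task
    map from the parent (constant here for simplicity)."""
    def __init__(self, J=None, leaf=None, children=()):
        self.J, self.leaf, self.children = J, leaf, children

def pullback(node):
    """Backward pass: recursively combine the children's (force, metric)."""
    if node.leaf is not None:
        return node.leaf
    f, M = 0, 0
    for c in node.children:
        fc, Mc = pullback(c)
        f = f + c.J.T @ fc
        M = M + c.J.T @ Mc @ c.J
    return f, M

# Two leaves hanging off the root of a 2-DoF configuration space.
leaf1 = Node(J=np.eye(2), leaf=(np.array([1.0, 0.5]), np.eye(2)))
leaf2 = Node(J=np.array([[0.0, 1.0]]),
             leaf=(np.array([-0.3]), np.array([[2.0]])))
root = Node(children=(leaf1, leaf2))

f, M = pullback(root)
qdd = np.linalg.pinv(M) @ f      # resolve at the root
print(qdd)
```
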

A survey of robot learning from demonstration

Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping

TLDR
It is proved that, under mild restrictions on the weight functions, RMPfusion always yields a globally Lyapunov-stable motion policy, which implies that RMPfusion can be treated as a structured policy class in policy optimization that is guaranteed to generate stable policies, even during the immature phase of learning.
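
The weighting scheme can be sketched as follows; note that RMPfusion's actual construction adds correction terms tied to the Lyapunov analysis, which this illustration omits, and the example policies and weights are invented:

```python
import numpy as np

def fuse(policies, weights):
    """Weight each child policy (f_i, M_i) before the metric-weighted
    resolve: a = (sum_i w_i M_i)^+ sum_i w_i M_i f_i."""
    M = sum(w * Mi for w, (fi, Mi) in zip(weights, policies))
    f = sum(w * (Mi @ fi) for w, (fi, Mi) in zip(weights, policies))
    return np.linalg.pinv(M) @ f

policies = [(np.array([1.0, 0.0]), np.eye(2)),
            (np.array([0.0, -1.0]), 2.0 * np.eye(2))]
print(fuse(policies, [0.9, 0.1]))   # weights could come from a learned network
```
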

Movement primitives via optimization

We formalize the problem of adapting a demonstrated trajectory to a new start and goal configuration as an optimization problem over a Hilbert space of trajectories: minimize the distance between the…
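
For the special case of a velocity (first-difference) norm, the minimizer has a simple closed form, sketched below with an invented demonstration trajectory:

```python
import numpy as np

# Demonstrated trajectory (illustrative): 50 waypoints in R^2.
t = np.linspace(0.0, 1.0, 50)[:, None]
demo = np.hstack([t, np.sin(np.pi * t)])

def adapt(demo, new_start, new_goal):
    """Under the velocity norm, the minimum-distance adaptation with fixed
    endpoints adds a linear blend of the start and goal offsets."""
    s = np.linspace(0.0, 1.0, len(demo))[:, None]
    return demo + (1 - s) * (new_start - demo[0]) + s * (new_goal - demo[-1])

xi = adapt(demo, np.array([0.2, 0.0]), np.array([1.0, 0.5]))
```
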