Robot Motion Planning in Learned Latent Spaces

Brian Ichter and Marco Pavone
IEEE Robotics and Automation Letters

This letter presents latent sampling-based motion planning (L-SBMP), a methodology toward computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this letter, we combine these recent advances with techniques from sampling-based motion planning (SBMP) in order to design a methodology capable of planning for high… 
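The planning loop the abstract describes, embed the system, plan in the latent space, then decode, can be sketched as below. The "encoder"/"decoder" (a fixed linear map and its pseudoinverse) and the latent collision check (a disc obstacle) are toy placeholders for illustration, not the learned networks of L-SBMP.

```python
import numpy as np

# Toy stand-ins for the learned maps in L-SBMP (assumptions for illustration,
# not the paper's networks): a fixed linear projection A as "encoder" and its
# pseudoinverse as "decoder", plus a disc obstacle checked in latent space.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 10))                 # 10-D state -> 2-D latent

def decode(z):
    return np.linalg.pinv(A) @ z

def latent_collision_free(z, center=np.array([0.5, 0.5]), radius=0.2):
    """Placeholder collision check operating directly on latent points."""
    return np.linalg.norm(z - center) > radius

def rrt_latent(z_start, z_goal, iters=2000, step=0.05, goal_tol=0.05):
    """Minimal RRT grown entirely in the latent space (node checks only,
    no edge checks, to keep the sketch short)."""
    nodes, parents = [z_start], [-1]
    for _ in range(iters):
        z_rand = z_goal if rng.random() < 0.1 else rng.random(2)   # goal bias
        i_near = int(np.argmin([np.linalg.norm(z - z_rand) for z in nodes]))
        d = z_rand - nodes[i_near]
        z_new = nodes[i_near] + step * d / (np.linalg.norm(d) + 1e-9)
        if latent_collision_free(z_new):
            nodes.append(z_new)
            parents.append(i_near)
            if np.linalg.norm(z_new - z_goal) < goal_tol:
                path, i = [], len(nodes) - 1
                while i != -1:               # walk back to the root
                    path.append(nodes[i])
                    i = parents[i]
                return path[::-1]
    return None

path = rrt_latent(np.array([0.1, 0.1]), np.array([0.9, 0.9]))
states = [decode(z) for z in path]           # map the latent plan back to full states
```

The point of the pattern is that the tree, nearest-neighbor queries, and collision checks all live in the low-dimensional latent space; only the final path is decoded back to the full state space.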
Learned Critical Probabilistic Roadmaps for Robotic Motion Planning

This work proposes a general method to identify critical states via graph-theoretic techniques and to learn to predict criticality from only local environment features; sampling these critical states is combined with global connections within a hierarchical graph, termed a Critical Probabilistic Roadmap.

Constrained Motion Planning Networks X

This work presents Constrained Motion Planning Networks X, a neural planning approach comprising a conditional deep neural generator and discriminator with a neural-gradient-based fast projection operator; it finds path solutions with high success rates and lower computation times than state-of-the-art traditional path-finding tools on various challenging scenarios.

Learning an Optimal Sampling Distribution for Efficient Motion Planning

A learning-based approach with policy improvement to compute an optimal sampling distribution for use in sampling-based motion planners, motivated by the challenge of whole-body planning for a 31 degree-of-freedom mobile robot.

Neural Manipulation Planning on Constraint Manifolds

It is shown that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems, and generalizes with high success rates to object locations not seen during training in the given environments.

Reaching Through Latent Space: From Joint Statistics to Path Planning in Manipulation

A novel approach to path planning for robotic manipulators is presented in which paths are produced via iterative optimisation in the latent space of a generative model of robot poses; gradients through the learned models provide a simple way to combine goal-reaching objectives with constraint satisfaction, even in the presence of otherwise non-differentiable constraints.
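The optimisation pattern described above, differentiating a goal-reaching objective through a generative model of poses, can be sketched with a toy linear "decoder" standing in for the learned model; the map W, the regulariser weight, and the step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy stand-in for a generative pose model: the paper's decoder maps a latent
# code to a robot pose; a fixed linear map W suffices to show the pattern.
rng = np.random.default_rng(4)
W = rng.normal(size=(6, 2))                  # "decoder": 2-D latent -> 6-D pose

def decode(z):
    return W @ z

def loss_and_grad(z, pose_goal, reg=0.01):
    """Goal-reaching objective plus a soft prior keeping the latent code small;
    both terms are differentiable, so the gradient flows through decode()."""
    err = decode(z) - pose_goal
    loss = err @ err + reg * (z @ z)
    grad = 2.0 * W.T @ err + 2.0 * reg * z
    return loss, grad

pose_goal = decode(np.array([1.0, -0.5]))    # a pose known to be reachable
z = np.zeros(2)                              # start from the latent prior mean
for _ in range(3000):                        # iterative optimisation in latent space
    loss, g = loss_and_grad(z, pose_goal)
    z -= 0.01 * g
```

With a learned, nonlinear decoder the analytic gradient would be replaced by automatic differentiation, and non-differentiable constraints would enter as additional penalty terms on the decoded pose, as the summary notes.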

Motion Planning Networks: Bridging the Gap Between Learning-Based and Classical Motion Planners

This article describes Motion Planning Networks (MPNet), a computationally efficient, learning-based neural planner for solving motion planning problems, and shows that worst-case theoretical guarantees can be proven when this neural network strategy is merged with classical sampling-based planners in a hybrid approach.

Robot Motion Planning as Video Prediction: A Spatio-Temporal Neural Network-based Motion Planner

STP-Net is proposed, an end-to-end learning framework that can fully extract and leverage important spatio-temporal information to form an efficient neural motion planner and can quickly and simultaneously compute multiple near-optimal paths in multi-robot motion planning tasks.

Learning Equality Constraints for Motion Planning on Manifolds

This work considers the problem of learning representations of constraints from demonstrations with a deep neural network, which it calls Equality Constraint Manifold Neural Network (ECoMaNN), to learn a level-set function of the constraint suitable for integration into a constrained sampling-based motion planner.

MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning Under Kinodynamic Constraints

This work presents a scalable, imitation-learning-based Model-Predictive Motion Planning Networks framework that quickly finds near-optimal path solutions with worst-case theoretical guarantees under kinodynamic constraints for practical underactuated systems.

Harnessing Reinforcement Learning for Neural Motion Planning

This work proposes a modification of the popular DDPG RL algorithm that is tailored to motion planning domains, by exploiting the known model in the problem and the set of solved plans in the data, and shows that the algorithm can plan significantly faster on novel domains than off-the-shelf sampling-based motion planners.

Learning Sampling Distributions for Robot Motion Planning

This paper proposes a methodology for nonuniform sampling, whereby a sampling distribution is learned from demonstrations and then used to bias sampling, resulting in an order-of-magnitude improvement in success rate and convergence to the optimal cost.
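The biasing scheme described above is typically a mixture: most samples come from the learned distribution, while a uniform fraction preserves the completeness guarantees of the underlying planner. In the sketch below the learned distribution is replaced by a fixed Gaussian near a hypothetical narrow passage; in the paper it would be a generative model trained on demonstrations.

```python
import numpy as np

rng = np.random.default_rng(1)

def learned_sampler():
    """Stand-in for a learned sampling distribution; here simply a Gaussian
    concentrated near a hypothetical narrow passage at (0.5, 0.5)."""
    return rng.normal(loc=[0.5, 0.5], scale=0.05, size=2)

def uniform_sampler():
    """Baseline uniform sampling over the unit-square configuration space."""
    return rng.random(2)

def biased_sample(bias=0.9):
    """Draw from the learned distribution with probability `bias`; the
    remaining uniform fraction keeps the planner probabilistically complete."""
    return learned_sampler() if rng.random() < bias else uniform_sampler()

samples = np.array([biased_sample() for _ in range(1000)])
```

A sampling-based planner would consume `biased_sample()` wherever it currently calls its uniform sampler; no other part of the algorithm changes.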

Approximate Inference-Based Motion Planning by Learning and Exploiting Low-Dimensional Latent Variable Models

A fully probabilistic generative model is constructed with which a high-dimensional motion planning problem is transformed into a tractable inference problem and the motion trajectory is computed via an approximate inference algorithm based on a variant of the particle filter.

High-dimensional Motion Planning using Latent Variable Models via Approximate Inference

A fully probabilistic generative model is constructed with which to transform a high-dimensional motion planning problem into a tractable inference problem and compute the optimal motion trajectory via an approximate inference algorithm based on a variant of the particle filter.

Universal Planning Networks

This work finds that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images.

Deep visual foresight for planning robot motion

  • Chelsea Finn, Sergey Levine
  • 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation — pushing objects — and can handle novel objects not seen during training.

Motion Planning Networks

This work presents Motion Planning Networks (MPNet), a neural network-based novel planning algorithm that encodes the given workspaces directly from a point cloud measurement and generates the end-to-end collision-free paths for the given start and goal configurations.

Multimodal Probabilistic Model-Based Planning for Human-Robot Interaction

The approach is to learn multimodal probability distributions over future human actions from a dataset of human-human exemplars and perform real-time robot policy construction in the resulting environment model through massively parallel sampling of human responses to candidate robot action sequences.

Fastron : A Learning-Based Configuration Space Model for Rapid Collision Detection for Gross Motion Planning in Changing Environments

Collision detection is a necessary but costly step for sampling-based motion planners such as Rapidly-exploring Random Trees [7]; motion planning is typically performed in configuration space, where this work learns a configuration-space model (Fastron) for rapid collision detection that adapts to changing environments.
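In the spirit of Fastron, a learned configuration-space proxy for collision checking can be sketched as a kernel perceptron trained on labelled configurations; the disc obstacle, RBF kernel width, and training schedule below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def in_collision(q, center=np.array([0.5, 0.5]), radius=0.25):
    """Ground-truth check (in reality an expensive geometric query):
    a toy disc obstacle in a 2-D configuration space."""
    return np.linalg.norm(q - center) < radius

# Label a batch of random configurations with the expensive checker.
X = rng.random((400, 2))
y = np.array([1.0 if in_collision(q) else -1.0 for q in X])

def kernel(q, B, gamma=30.0):
    """RBF kernel between one query configuration and a batch of points."""
    return np.exp(-gamma * np.sum((B - q) ** 2, axis=1))

# Kernel-perceptron training: bump a point's weight whenever it is misclassified.
alpha = np.zeros(len(X))
for _ in range(50):
    for i in range(len(X)):
        if y[i] * ((alpha * y) @ kernel(X[i], X)) <= 0:
            alpha[i] += 1.0

def predicted_in_collision(q):
    """Cheap learned proxy, usable inside a planner's node/edge checks."""
    return (alpha * y) @ kernel(q, X) > 0
```

The planner calls `predicted_in_collision` for the bulk of its checks and falls back to the exact checker only where precision matters; when obstacles move, only the labels and weights need updating, not the planner.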

Deep spatial autoencoders for visuomotor learning

This work presents an approach that automates state-space construction by learning a state representation directly from camera images by using a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects.

Learning visual representations for perception-action systems

This work argues in favor of task-specific, learnable representations for vision as a sensory modality for systems that interact flexibly with uncontrolled environments, and develops a grasp density for object detection in a novel scene.