Corpus ID: 8051413

Learning Robotic Manipulation of Granular Media

@inproceedings{Schenck2017LearningRM,
  title={Learning Robotic Manipulation of Granular Media},
  author={Connor Schenck and Jonathan Tompson and Sergey Levine and Dieter Fox},
  booktitle={Conference on Robot Learning},
  year={2017}
}
In this paper, we examine the problem of robotic manipulation of granular media. […] Our best performing model is based on a highly-tailored convolutional network architecture with domain-specific optimizations, which we show accurately models the physical interaction of the robotic scoop with the underlying media. We empirically demonstrate that explicitly predicting physical mechanics results in a policy that outperforms both a hand-crafted dynamics baseline and a "value-network", which must…
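
The control loop the abstract describes can be sketched as follows. This is a toy stand-in, not the paper's implementation: the learned convolutional dynamics model is replaced by a hand-written `predict_next` (hypothetical), and the media state is a small 1-D height map rather than a depth image.

```python
# Toy sketch of model-based action selection: a dynamics model predicts
# the next media state for each candidate action, and the action whose
# predicted outcome is closest to the goal state is executed.

def predict_next(height_map, action):
    """Stand-in for the learned dynamics model (hypothetical): a scoop
    at cell `action` removes up to one unit of material and deposits it
    in the last cell."""
    nxt = list(height_map)
    moved = min(nxt[action], 1.0)
    nxt[action] -= moved
    nxt[-1] += moved
    return nxt

def score(pred, goal):
    # Sum of absolute per-cell height differences (L1 error to the goal).
    return sum(abs(p - g) for p, g in zip(pred, goal))

def best_action(height_map, goal, actions):
    # Evaluate every candidate action under the model; keep the best.
    return min(actions, key=lambda a: score(predict_next(height_map, a), goal))

state = [2.0, 1.0, 0.0, 0.0]
goal  = [2.0, 0.0, 0.0, 1.0]
a = best_action(state, goal, actions=range(3))
```

The paper's point survives even in this caricature: predicting the physical outcome of each action and comparing it to the goal replaces a hand-crafted policy.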

Manipulation of Granular Materials by Learning Particle Interactions

This letter proposes a graph-based representation that models, via message-passing, the interaction dynamics between the granular material and the rigid bodies manipulating it, and proposes to minimise the Wasserstein distance between the predicted distribution of granular particles and their desired configuration.
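
The Wasserstein objective mentioned above has a particularly simple closed form in one dimension, which the sketch below illustrates; the actual method works on 3-D particle sets, so this is only an intuition aid. For equal-weight 1-D empirical distributions, the Wasserstein-1 distance is the mean absolute difference of the sorted samples.

```python
def wasserstein_1d(xs, ys):
    """Wasserstein-1 distance between two equal-size 1-D empirical
    distributions: optimal transport pairs the i-th smallest sample of
    one set with the i-th smallest of the other."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

Shifting every particle by a constant `d` yields a distance of exactly `d`, which is why the metric behaves well as a cost for "move this pile over there".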

Deep Scoop

The strength of deep reinforcement learning techniques in the domain of robotic manipulation of granular media is investigated; the best performing policies sample actions from distributions derived from a critic network rather than selecting actions directly with an actor network.

Learning to manipulate amorphous materials

A method is presented for training character manipulation of amorphous materials, such as those often used in cooking, by using inverse kinematics to guide a character's arm and hand to match the motion of a manipulation tool such as a knife or a frying pan.

Learning compositional models of robot skills for task and motion planning

This work uses Gaussian process methods for learning the constraints on skill effectiveness from small numbers of expensive-to-collect training examples and develops efficient adaptive sampling methods for generating a comprehensive and diverse sequence of continuous candidate control parameter values during planning.

A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms

A formalization of the robot manipulation learning problem is described that synthesizes existing research into a single coherent framework and highlights the many remaining research opportunities and challenges.

ToolFlowNet: Robotic Manipulation with Tools via Predicting Tool Flow from Point Clouds

This paper proposes a novel framework for learning tool-based manipulation policies from point clouds, using a novel neural network, ToolFlowNet, which predicts dense per-point flow on the tool that the robot controls and then uses the flow to derive the transformation that the robot should execute.

Few-shot Adaptation for Manipulating Granular Materials Under Domain Shift

This paper proposes an adaptive scooping strategy that uses a deep Gaussian process trained with meta-learning to learn online from very limited experience on the target terrains, significantly outperforming non-adaptive methods proposed in the excavation literature as well as other state-of-the-art meta-learning methods.

Active Model Learning and Diverse Action Sampling for Task and Motion Planning

This work uses Gaussian process methods for learning the conditions of operator effectiveness from small numbers of expensive training examples collected by experimentation on a robot, and develops adaptive sampling methods for generating diverse elements of continuous sets during planning for solving a new task, so that planning is as efficient as possible.
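
One ingredient named above, generating diverse elements of a continuous set during planning, can be illustrated with a simple greedy scheme. Farthest-point selection is an assumption here, not necessarily the paper's exact sampler: each new candidate is chosen to maximise its distance to those already picked.

```python
def diverse_sequence(candidates, k):
    """Greedily pick k candidates so each new one is as far as possible
    (in absolute distance) from everything already chosen."""
    chosen = [candidates[0]]  # seed with the first candidate
    while len(chosen) < k:
        nxt = max(candidates,
                  key=lambda c: min(abs(c - s) for s in chosen))
        chosen.append(nxt)
    return chosen
```

The effect is that early planning attempts probe very different regions of the parameter space instead of clustering around one mode.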

Model-free vision-based shaping of deformable plastic materials

It is shown that simple shapes can be obtained with kinetic sand without explicitly modeling the material, and that a richer set of action types and multi-step reasoning would be needed to achieve more sophisticated shapes.

Inferring the Material Properties of Granular Media for Robotic Tasks

This work presents a software and hardware framework that automatically calibrates a fast physics simulator to accurately simulate granular materials by inferring material properties from real-world depth images of granular formations (i.e., piles and rings).

References

Showing 1-10 of 33 references

Deep visual foresight for planning robot motion

  • Chelsea Finn, S. Levine
  • Computer Science
    2017 IEEE International Conference on Robotics and Automation (ICRA)
  • 2017
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation — pushing objects — and can handle novel objects not seen during training.
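
The model-predictive control scheme this summary refers to is often implemented by random shooting, sketched below. The learned video-prediction network is replaced by a hypothetical one-dimensional "push" model, so this shows the planning loop, not the paper's vision component.

```python
import random

def rollout(pos, actions):
    # Stand-in predictive model (hypothetical): each action displaces
    # the pushed object by that amount along one axis.
    for a in actions:
        pos += a
    return pos

def plan(pos, goal, horizon=3, samples=200, seed=0):
    """Random shooting: sample action sequences, score each by the
    predicted final distance to the goal, return the best sequence."""
    rng = random.Random(seed)
    best_cost, best_seq = float("inf"), None
    for _ in range(samples):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        cost = abs(rollout(pos, seq) - goal)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

# MPC executes only the first action of the best plan, then replans.
first_action = plan(0.0, 2.0)[0]
```

Replanning after every executed action is what lets this scheme absorb model error, which matters when the predictor is a learned network rather than a simulator.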

A Compositional Object-Based Approach to Learning Physical Dynamics

The NPE's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass.

Towards Learning to Perceive and Reason About Liquids

This paper applies fully-convolutional deep neural networks to the tasks of detecting and tracking liquids and shows that the best liquid detection results are achieved when aggregating data over multiple frames and that the LSTM network outperforms the other two in both tasks.

Learning to Poke by Poking: Experiential Learning of Intuitive Physics

A novel approach based on deep neural networks is proposed for modeling the dynamics of a robot's interactions directly from images, by jointly estimating forward and inverse models of dynamics.

Accelerating Eulerian Fluid Simulation With Convolutional Networks

This work proposes a data-driven approach that leverages the approximation power of deep-learning with the precision of standard solvers to obtain fast and highly realistic simulations of the Navier-Stokes equations.

End-to-End Training of Deep Visuomotor Policies

This paper develops a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors, trained using a partially observed guided policy search method, with supervision provided by a simple trajectory-centric reinforcement learning method.

Towards Adapting Deep Visuomotor Representations from Simulated to Real Environments

This work proposes a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search.

Neural networks and differential dynamic programming for reinforcement learning problems

This paper extends neural networks for modeling prediction error and output noise, computing an output probability distribution for a given input distribution, and computing gradients of output expectation with respect to an input, and provides an analytic solution for these extensions.

Learning to Navigate in Complex Environments

This work considers jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks and shows that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs.

Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates

It is demonstrated that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots.