VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation

@article{Hoque2020VisuoSpatialFF,
  title={VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation},
  author={Ryan Hoque and Daniel Seita and Ashwin Balakrishna and Aditya Ganapathi and Ajay Kumar Tanwani and Nawid Jamali and Katsu Yamane and Soshi Iba and Ken Goldberg},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.09044}
}
Robotic fabric manipulation has applications in home robotics, textiles, senior care and surgery. Existing fabric manipulation techniques, however, are designed for specific tasks, making it difficult to generalize across different but related tasks. We extend the Visual Foresight framework to learn fabric dynamics that can be efficiently reused to accomplish different fabric manipulation tasks with a single goal-conditioned policy. We introduce VisuoSpatial Foresight (VSF), which builds on… 
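
The approach the abstract describes, goal-conditioned planning through a learned visual dynamics model, follows a standard visual MPC loop. Below is a minimal sketch in Python, where predict stands in for a trained video prediction model; all names are illustrative rather than taken from the authors' code.

import numpy as np

def plan_action(predict, obs, goal, horizon=5, samples=200,
                elites=20, iters=3, act_dim=4):
    # Visual MPC: sample action sequences, roll them through a learned
    # video prediction model, score predicted frames against the goal
    # image, and refine the sampling distribution with CEM.
    mu = np.zeros((horizon, act_dim))
    sigma = np.ones((horizon, act_dim))
    for _ in range(iters):
        acts = mu + sigma * np.random.randn(samples, horizon, act_dim)
        # VSF scores RGB and depth channels; plain pixel distance is
        # used here only to keep the sketch short.
        costs = np.array([np.linalg.norm(predict(obs, a)[-1] - goal)
                          for a in acts])
        elite = acts[np.argsort(costs)[:elites]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)
    return mu[0]  # execute the first action, then replan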

Learning Visible Connectivity Dynamics for Cloth Smoothing

This work proposes to learn a particle-based dynamics model from a partial point cloud observation to overcome the challenges of partial observability, and shows that the method greatly outperforms previous state-of-the-art model-based and model-free reinforcement learning methods in simulation.

VisuoSpatial Foresight for physical sequential fabric manipulation

Results suggest that training visual dynamics models using longer, corner-based actions can improve the efficiency of fabric folding by 76% and enable, with 90% reliability, a physical sequential fabric folding task that VSF could not previously perform.

Foldsformer: Learning Sequential Multi-Step Cloth Manipulation With Space-Time Attention

Foldsformer can complete multi-step cloth manipulation tasks even when cloth configurations vary from those in the general demonstrations, and can be transferred from simulation to the real world without additional training or domain randomization.

Knowledge Representation to Enable High-level Planning in Cloth Manipulation Tasks

A generic, compact and simplified representation of the states of cloth manipulation that allows for representing tasks as sequences of states and transitions semantically, and defines a Cloth Manipulation Graph that encodes all the strategies to accomplish a task.

Randomized-to-Canonical Model Predictive Control for Real-World Visual Robotic Manipulation

KRC-MPC is evaluated through a valve rotation task performed by a robot hand in both simulation and the real world, and the experimental results show that it can be applied to various real domains and tasks in a zero-shot manner.

Mesh-based Dynamics with Occlusion Reasoning for Cloth Manipulation

This work builds a system that uses explicit occlusion reasoning to unfold a crumpled cloth: it first learns a model to reconstruct the mesh of the cloth, which allows planning with a mesh-based dynamics model while reasoning about occlusions.

Transporters with Visual Foresight for Solving Unseen Rearrangement Tasks

A visual foresight model for pick-and-place rearrangement manipulation that learns efficiently, paired with a multi-modal action proposal module that builds on the Goal-Conditioned Transporter Network, a state-of-the-art imitation learning method.

FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy

This work introduces FabricFlowNet (FFN), a cloth manipulation policy that leverages optical flow as both an input and as an action representation to improve performance and shows that it outperforms state-of-the-art model-free and model-based cloth manipulation policies.
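
The flow-as-action idea can be sketched schematically. Assuming a flow field predicted from the current observation to the goal (an (H, W, 2) array), one illustrative way to read off a pick-and-place action; this is a simplification, not the released FabricFlowNet code.

import numpy as np

def flow_to_pick_place(flow, cloth_mask):
    # Flow as an action representation: pick the cloth pixel with the
    # largest predicted displacement toward the goal, and place it
    # where its flow vector points.
    # flow: (H, W, 2) float array; cloth_mask: boolean (H, W) array.
    magnitude = np.linalg.norm(flow, axis=-1) * cloth_mask
    pick = np.unravel_index(np.argmax(magnitude), magnitude.shape)
    dy, dx = flow[pick]
    place = (int(pick[0] + dy), int(pick[1] + dx))
    return pick, place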

SpeedFolding: Learning Efficient Bimanual Folding of Garments

SpeedFolding is developed, a reliable and efficient bimanual system which, given user-defined instructions as folding lines, manipulates an initially crumpled garment to (1) a smoothed and (2) a folded configuration.
...

References

Showing 1–10 of 70 references

Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control

It is demonstrated that visual MPC can generalize to never-before-seen objects, both rigid and deformable, and solve a range of user-defined object manipulation tasks using the same model.

Stochastic Variational Video Prediction

This paper develops a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables, and is the first to provide effective stochastic multi-frame prediction for real-world video.

Deep visual foresight for planning robot motion

Chelsea Finn, S. Levine · 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017

This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation (pushing objects) and to handle novel objects not seen during training.

Continuous control with deep reinforcement learning

This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.

The Cross-Entropy Method for Combinatorial and Continuous Optimization

The mode of a unimodal importance sampling distribution, such as the mode of a beta distribution, is used as an estimate of the optimal solution for continuous optimization, and a Markov chain approach is used for combinatorial optimization.
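
For the continuous case, a minimal cross-entropy method sketch; the Rosenbrock objective is an arbitrary test function chosen here, not an example from the paper.

import numpy as np

def cem_minimize(f, dim, samples=500, elites=50, iters=40):
    # Cross-entropy method: repeatedly fit a Gaussian importance
    # sampling distribution to the lowest-cost (elite) samples; its
    # mean is the running estimate of the optimal solution.
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    for _ in range(iters):
        x = mu + sigma * np.random.randn(samples, dim)
        elite = x[np.argsort(f(x))[:elites]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# Example: the 2-D Rosenbrock function, minimized at (1, 1).
rosen = lambda x: (1 - x[:, 0])**2 + 100 * (x[:, 1] - x[:, 0]**2)**2
print(cem_minimize(rosen, dim=2))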

RoboNet: Large-Scale Multi-Robot Learning

This paper proposes RoboNet, an open database for sharing robotic experience, which provides an initial pool of 15 million video frames, from 7 different robot platforms, and studies how it can be used to learn generalizable models for vision-based robotic manipulation.

MuJoCo: A physics engine for model-based control

A new physics engine tailored to model-based control, based on the modern velocity-stepping approach that avoids the difficulties of spring-dampers and can compute both forward and inverse dynamics.
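
Both directions are exposed by the official mujoco Python bindings; the one-hinge pendulum below is a minimal assumed model, used only to show the two calls.

import mujoco

XML = """
<mujoco>
  <worldbody>
    <body>
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.5" size="0.02" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Forward dynamics: integrate the state one timestep.
mujoco.mj_step(model, data)

# Inverse dynamics: given positions, velocities, and accelerations,
# recover the generalized forces that would produce them.
data.qacc[:] = 0.0
mujoco.mj_inverse(model, data)
print("required generalized force:", data.qfrc_inverse)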

Adaptive anisotropic remeshing for cloth simulation

This work presents a technique for cloth simulation that dynamically refines and coarsens triangle meshes so that they automatically conform to the geometric and dynamic detail of the simulated cloth, and introduces a novel technique for strain limiting by posing it as a nonlinear optimization problem.

PyBullet, a Python module for physics simulation for games, robotics and machine learning

2019
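
A minimal usage sketch of the module (an illustration, not code from any cited work).

import pybullet as p
import pybullet_data

client = p.connect(p.DIRECT)  # headless; use p.GUI for a visualizer
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")
cube = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 1])

for _ in range(240):  # one simulated second at the default 240 Hz
    p.stepSimulation()

print(p.getBasePositionAndOrientation(cube))
p.disconnect()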
...