VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation

@article{Hoque2020VisuoSpatialFF,
  title={VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation},
  author={Ryan Hoque and Daniel Seita and Ashwin Balakrishna and Aditya Ganapathi and Ajay Kumar Tanwani and Nawid Jamali and Katsu Yamane and Soshi Iba and Ken Goldberg},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.09044}
}
Robotic fabric manipulation has applications in home robotics, textiles, senior care and surgery. Existing fabric manipulation techniques, however, are designed for specific tasks, making it difficult to generalize across different but related tasks. We extend the Visual Foresight framework to learn fabric dynamics that can be efficiently reused to accomplish different fabric manipulation tasks with a single goal-conditioned policy. We introduce VisuoSpatial Foresight (VSF), which builds on…
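As a rough, hypothetical sketch of the goal-conditioned visual-foresight planning loop the abstract describes (sample candidate action sequences, roll them through a learned visual dynamics model, score the predicted final frame against a goal image, execute the best first action, then replan), here is a minimal NumPy version. The `predict_frames` stub stands in for the learned RGBD video-prediction model, and the L2 image cost and random-shooting search are simplifications (the paper optimizes the action distribution, e.g. with the cross-entropy method); all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_frames(obs, actions):
    """Stub for a learned action-conditioned RGBD video-prediction model.
    A real model would return one predicted frame per planned action."""
    return np.repeat(obs[None], len(actions), axis=0)

def plan_first_action(obs, goal, horizon=5, num_samples=200, act_dim=4):
    """Random-shooting planner: score sampled action sequences by how close
    their predicted final frame lands to the goal image, return the best first action."""
    seqs = rng.uniform(-1.0, 1.0, size=(num_samples, horizon, act_dim))
    costs = [np.sum((predict_frames(obs, seq)[-1] - goal) ** 2) for seq in seqs]
    return seqs[int(np.argmin(costs))][0]   # execute this action, observe, then replan

obs = np.zeros((56, 56, 4))    # current RGBD observation (toy resolution)
goal = np.ones((56, 56, 4))    # goal image specifying the desired fabric configuration
print(plan_first_action(obs, goal))
```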

Citations

FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy
FabricFlowNet, a cloth manipulation policy that leverages flow as both an input and an action representation to improve performance, is introduced; trained on a single square cloth, it generalizes to other cloth shapes such as T-shirts and rectangular cloths.
Robotic Untangling and Disentangling of Cables via Learned Manipulation and Recovery Strategies
Accelerating Surgical Robotics Research: A Review of 10 Years With the da Vinci Research Kit
Robotic-assisted surgery is now well-established in clinical practice and has become the gold standard clinical treatment option for several clinical indications. The field of robotic-assisted…
Accelerating Surgical Robotics Research: Reviewing 10 Years of Research with the dVRK
An extensive review of the publications that have been facilitated by the da Vinci Research Kit over the past decade is presented, and some of the major challenges and needs for the robotics community to maintain this initiative and build upon it are outlined.
Bodies Uncovered: Learning to Manipulate Real Blankets Around People via Physics Simulations
This work introduces a formulation for robotic bedding manipulation around people in which a robot uncovers a blanket from a target body part while ensuring the rest of the human body remains covered.
Closing the Sim2Real Gap in Dynamic Cloth Manipulation
A novel approach to dynamic cloth manipulation that trains policies with reinforcement learning (RL) in simulation and transfers them to the real world in a zero-shot manner; the results show that visual feedback alone is enough for the policies to learn the dynamic manipulation task.
Cloth Manipulation Planning on Basis of Mesh Representations with Incomplete Domain Knowledge and Voxel-to-Mesh Estimation
Comparative experiments confirm that planning on the basis of estimated meshes improves accuracy compared to voxel-based planning, and that epistemic uncertainty avoidance improves performance under conditions of incomplete domain knowledge.
Comparing Reconstruction- and Contrastive-based Models for Visual Task Planning
This work defines relevant evaluation metrics and performs a thorough study of different loss functions for state representation learning, showing that models exploiting task priors, such as Siamese networks with a simple contrastive loss, outperform reconstruction-based representations in visual task planning.
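As a loose illustration of the kind of contrastive objective the entry above refers to, here is a minimal NumPy sketch of a Siamese-style margin loss on embedding pairs; the embedding dimensions, margin value, and labels are placeholder choices for illustration, not the paper's setup.

```python
import numpy as np

def contrastive_loss(z1, z2, same, margin=1.0):
    """Siamese contrastive loss on a batch of embedding pairs.
    same[i] = 1 pulls pair i together; same[i] = 0 pushes it at least `margin` apart."""
    d = np.linalg.norm(z1 - z2, axis=1)                       # pairwise Euclidean distances
    pull = same * d ** 2                                      # positive pairs: minimize distance
    push = (1 - same) * np.maximum(0.0, margin - d) ** 2      # negative pairs: hinge on margin
    return np.mean(pull + push)

rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal((8, 16)), rng.standard_normal((8, 16))
same = rng.integers(0, 2, size=8)                             # 1 = same underlying state, 0 = different
print(contrastive_loss(z1, z2, same))
```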
Disentangling Dense Multi-Cable Knots
An algorithm, Iterative Reduction Of Non-planar Multiple cAble kNots (IRON-MAN), is presented that outputs robot actions to remove crossings from multi-cable knotted structures; it is effective in disentangling knots involving up to three cables and generalizes to knot types not present during training.
Disruption-Resistant Deformable Object Manipulation on Basis of Online Shape Estimation and Prediction-Driven Trajectory Correction
This work proposes an approach that integrates online shape estimation, prediction of shape transitions, and mid-manipulation trajectory correction, applied to the problem of cloth folding, and demonstrates that the system can closely approximate given goal states even when the manipulation process is disrupted by cloth slipping or external interference.

References

Showing 1-10 of 73 references.
Stochastic Variational Video Prediction
This paper develops a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables, and is the first to provide effective stochastic multi-frame prediction for real-world video.
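To make the "different future per latent sample" idea above concrete, here is a minimal NumPy sketch: latents are drawn from a standard normal prior and passed to a placeholder decoder (a stand-in for the learned network), and a closed-form Gaussian KL term of the kind used in variational training objectives is included. All of this is illustrative and heavily simplified, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_future(context_frames, z, horizon=10):
    """Placeholder decoder: a learned network would map (context, z) to future frames."""
    return np.repeat(context_frames[-1:], horizon, axis=0) + 0.01 * z.sum()

def sample_futures(context_frames, latent_dim=8, num_samples=5):
    """Draw one latent per sample from the prior N(0, I); each latent yields a different future."""
    return [decode_future(context_frames, rng.standard_normal(latent_dim))
            for _ in range(num_samples)]

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), the regularizer in a variational bound."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

context = np.zeros((2, 64, 64, 3))        # two conditioning frames
futures = sample_futures(context)          # five distinct sampled futures
print(len(futures), gaussian_kl(np.zeros(8), np.zeros(8)))
```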
Deep visual foresight for planning robot motion
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation (pushing objects), including novel objects not seen during training.
Continuous control with deep reinforcement learning
This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end, directly from raw pixel inputs.
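The entry above describes DDPG; the following is a minimal PyTorch sketch of its two core updates under simplified assumptions (tiny fully connected networks, a single random batch in place of a replay buffer, and no target-network soft updates), so it illustrates the update rules rather than a full agent.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, gamma = 8, 2, 0.99
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# A random transition batch stands in for samples from a replay buffer.
s, a = torch.randn(32, obs_dim), torch.rand(32, act_dim) * 2 - 1
r, s2, done = torch.randn(32, 1), torch.randn(32, obs_dim), torch.zeros(32, 1)

# Critic update: regress Q(s, a) toward the bootstrapped target
# r + gamma * Q(s', pi(s')) (a full agent uses slowly updated target networks here).
with torch.no_grad():
    target = r + gamma * (1 - done) * critic(torch.cat([s2, actor(s2)], dim=1))
critic_loss = ((critic(torch.cat([s, a], dim=1)) - target) ** 2).mean()
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor update: deterministic policy gradient, i.e. ascend Q(s, pi(s)).
actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
print(float(critic_loss), float(actor_loss))
```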
The Cross-Entropy Method for Combinatorial and Continuous Optimization
We present a new and fast method, called the cross-entropy method, for finding the optimal solution of combinatorial and continuous nonconvex optimization problems with convex bounded domains. To…
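For reference, the cross-entropy method in its simplest continuous form iterates three steps: sample from a Gaussian, keep an elite fraction of low-cost samples, and refit the Gaussian to the elites. The sketch below applies it to a toy quadratic objective; the objective, population size, and elite fraction are illustrative choices, not from the cited reference.

```python
import numpy as np

def cem_minimize(f, dim, iters=20, pop=100, elite_frac=0.1, seed=0):
    """Minimize f over R^dim by iteratively refitting a Gaussian to elite samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        xs = mu + sigma * rng.standard_normal((pop, dim))      # sample a population
        elite = xs[np.argsort([f(x) for x in xs])[:n_elite]]   # keep the lowest-cost samples
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# Toy objective with its minimum at (1, 2, 3).
print(cem_minimize(lambda x: np.sum((x - np.array([1.0, 2.0, 3.0])) ** 2), dim=3))
```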
PyBullet, a Python module for physics simulation for games, robotics and machine learning (2019)
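Since PyBullet is cited here as a simulation backend, a minimal usage example may help. This sketch only exercises the basic connect / load / step API and assumes pybullet and pybullet_data are installed; deformables such as cloth can be loaded with p.loadSoftBody, which is omitted here to keep the example small.

```python
import pybullet as p
import pybullet_data

# Headless physics server; use p.GUI instead for a visualizer window.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # bundled example assets
p.setGravity(0, 0, -9.81)

plane = p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 1])

# Step the simulation forward at the default timestep.
for _ in range(240):
    p.stepSimulation()

print(p.getBasePositionAndOrientation(robot))
p.disconnect()
```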
RoboNet: Large-Scale Multi-Robot Learning
This paper proposes RoboNet, an open database for sharing robotic experience, which provides an initial pool of 15 million video frames from 7 different robot platforms, and studies how it can be used to learn generalizable models for vision-based robotic manipulation.
Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control
It is demonstrated that visual MPC can generalize to never-before-seen objects, both rigid and deformable, and solve a range of user-defined object manipulation tasks using the same model.
Adaptive anisotropic remeshing for cloth simulation
This work presents a technique for cloth simulation that dynamically refines and coarsens triangle meshes so that they automatically conform to the geometric and dynamic detail of the simulated cloth, and introduces a novel technique for strain limiting by posing it as a nonlinear optimization problem.
MuJoCo: A physics engine for model-based control
A new physics engine tailored to model-based control, based on the modern velocity-stepping approach, which avoids the difficulties with spring-dampers and can compute both forward and inverse dynamics.
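As with PyBullet above, a minimal example of driving MuJoCo from Python may be useful; this sketch assumes the official `mujoco` Python bindings (not part of the cited paper) and uses a trivial inline MJCF model.

```python
import mujoco

# A trivial MJCF model: a single free-falling box above a ground plane.
XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Advance the simulation and watch the box fall under gravity.
for _ in range(500):
    mujoco.mj_step(model, data)
print(data.qpos)  # free-joint pose: 3D position followed by a quaternion
```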