FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy

@inproceedings{Weng2021FabricFlowNetBC,
  title={FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy},
  author={Thomas Weng and Sujay Bajracharya and Yufei Wang and Khush Agrawal and David Held},
  booktitle={CoRL},
  year={2021}
}
We address the problem of goal-directed cloth manipulation, a challenging task due to the deformability of cloth. Our insight is that optical flow, a technique normally used for motion estimation in video, can also provide an effective representation for corresponding cloth poses across observation and goal images. We introduce FabricFlowNet (FFN), a cloth manipulation policy that leverages flow as both an input and as an action representation to improve performance. FabricFlowNet also… 
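
To make "flow as an action representation" concrete, below is a minimal sketch in Python/NumPy of one way a predicted flow field between the current and goal cloth images could be converted into a bimanual pick-and-place action: pick where the cloth must move, then place each pick point at its flow-displaced location. The function name, the max-flow-magnitude pick heuristic, and the array conventions are illustrative assumptions, not the paper's method; FabricFlowNet itself uses a learned pick network rather than this heuristic.

import numpy as np

def flow_to_bimanual_action(flow, cloth_mask, min_separation=10):
    """Turn a dense flow field into a two-arm pick-and-place action.

    Illustrative sketch only. `flow` is assumed to hold per-pixel
    (row, col) displacements from the current observation to the
    goal image; `cloth_mask` marks cloth pixels.
    """
    h, w = cloth_mask.shape
    magnitude = np.linalg.norm(flow, axis=-1) * cloth_mask

    # First pick point: the cloth pixel that must move farthest.
    p1 = np.unravel_index(np.argmax(magnitude), (h, w))

    # Second pick point: best pixel outside a suppression radius,
    # so the two arms grasp distinct regions of the cloth.
    rows, cols = np.mgrid[:h, :w]
    far = (rows - p1[0]) ** 2 + (cols - p1[1]) ** 2 >= min_separation ** 2
    p2 = np.unravel_index(np.argmax(magnitude * far), (h, w))

    # Flow as the action representation: place each pick point at
    # its flow-displaced location in the image.
    place1 = (p1[0] + flow[p1][0], p1[1] + flow[p1][1])
    place2 = (p2[0] + flow[p2][0], p2[1] + flow[p2][1])
    return (p1, place1), (p2, place2)

Any off-the-shelf or learned flow estimator could supply `flow` here; the appeal of the representation is that the same field serves both as the policy input and as the target for the place locations.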

References

Showing 1-10 of 47 references
EMD Net: An Encode–Manipulate–Decode Network for Cloth Manipulation
The method directly connects cloth manipulations with shape changes of the cloth by means of a deep neural network; its effectiveness was confirmed in simulation and real-robot experiments.
Dynamic Cloth Manipulation with Deep Reinforcement Learning
A deep reinforcement learning approach to dynamic cloth manipulation tasks, stressing that the trajectory followed has a decisive influence on the final state of the cloth, which can vary greatly even when the grasped points reach the same positions.
Bimanual robotic cloth manipulation for laundry folding
A system capable of fully autonomously transforming a clothing item from a random crumpled configuration into a folded state is presented, along with a method for computing valid grasp poses on the cloth that accounts for deformability.
Benchmarking Bimanual Cloth Manipulation
This letter provides three benchmarks for evaluating and comparing approaches to three basic cloth manipulation tasks: spreading a tablecloth over a table, folding a towel, and dressing.
Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor
In 180 physical experiments with the da Vinci Research Kit (dVRK) surgical robot, RGBD policies trained in simulation attain coverage of 83% to 95% depending on difficulty tier, suggesting that effective fabric smoothing policies can be learned from an algorithmic supervisor and that depth sensing is a valuable addition to color alone.
A geometric approach to robotic laundry folding
An algorithm is presented which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, termed g-folds, using a minimal number of robot grippers.
Learning Arbitrary-Goal Fabric Folding with One Hour of Real Robot Experience
This paper shows that fabric folding skills can be learned in only an hour of self-supervised real-robot experience, without human supervision or simulation, yielding an expressive goal-conditioned pick-and-place policy trained efficiently on real-world robot data alone.
Combining self-supervised learning and imitation for vision-based rope manipulation
It is shown that by combining the high- and low-level plans, the robot can successfully manipulate a rope into a variety of target shapes using only a sequence of human-provided images for direction.
A Grasping-Centered Analysis for Cloth Manipulation
A novel definition of textile object grasps is proposed that abstracts away from robot embodiment and hand shape, recovers concepts from the early neuroscience literature on hand prehension skills, and provides a classification of cloth manipulation primitives.
Learning Latent Graph Dynamics for Deformable Object Manipulation
This work learns latent Graph dynamics for DefOrmable Object Manipulation (G-DOOM), approximating a deformable object as a sparse set of interacting keypoints and learning a graph neural network that abstractly captures the geometry and interaction dynamics of the keypoints.