VisuoSpatial Foresight for physical sequential fabric manipulation

@article{Hoque2022VisuoSpatialFF,
  title={VisuoSpatial Foresight for physical sequential fabric manipulation},
  author={Ryan Hoque and Daniel Seita and Ashwin Balakrishna and Aditya Ganapathi and Ajay Kumar Tanwani and Nawid Jamali and Katsu Yamane and Soshi Iba and Ken Goldberg},
  journal={Autonomous Robots},
  year={2022},
  volume={46},
  pages={175--199}
}
Robotic fabric manipulation has applications in home robotics, textiles, senior care and surgery. Existing fabric manipulation techniques, however, are designed for specific tasks, making it difficult to generalize across different but related tasks. We build upon the Visual Foresight framework to learn fabric dynamics that can be efficiently reused to accomplish different sequential fabric manipulation tasks with a single goal-conditioned policy. We extend our earlier work on VisuoSpatial… 
Bodies Uncovered: Learning to Manipulate Real Blankets Around People via Physics Simulations
TLDR
This work introduces a formulation for robotic bedding manipulation around people in which a robot uncovers a blanket from a target body part while ensuring the rest of the human body remains covered.
Safe Deep RL in 3D Environments using Human Feedback
TLDR
This paper uses ReQueST to train an agent to perform a 3D first-person object collection task using data entirely from human contractors, and shows that the resulting agent exhibits an order of magnitude reduction in unsafe behaviour compared to standard reinforcement learning.
Augment-Connect-Explore: a Paradigm for Visual Action Planning with Data Scarcity
TLDR
This work builds upon the Latent Space Roadmap (LSR) framework which performs planning with a graph built in a low dimensional latent space and proposes the Augment-Connect-Explore (ACE) paradigm to enable visual action planning in cases of data scarcity.
Contact Points Discovery for Soft-Body Manipulations with Differentiable Physics
TLDR
A contact point discovery approach (CPDeform) that guides the stand-alone differentiable physics solver to deform various soft-body plasticines to overcome the local minima from initial contact points or contact switching.
ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation
TLDR
ACID, an action-conditional visual dynamics model for volumetric deformable objects based on structured implicit neural representations, achieves the best performance in geometry, correspondence, and dynamics predictions over existing approaches.
RoboCraft: Learning to See, Simulate, and Shape Elasto-Plastic Objects with Graph Networks
TLDR
This work shows through experiments that with just 10 minutes of real-world robotic interaction data, the RoboCraft robot can learn a dynamics model that can be used to synthesize control signals to deform elasto-plastic objects into various target shapes, including shapes that the robot has never encountered before.
Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning
Humans are capable of completing a range of challenging manipulation tasks that require reasoning jointly over modalities such as vision, touch, and sound. Moreover, many such tasks are…

References

Showing 1–10 of 94 references
Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor
TLDR
In 180 physical experiments with the da Vinci Research Kit (dVRK) surgical robot, RGBD policies trained in simulation attain coverage of 83% to 95% depending on difficulty tier, suggesting that effective fabric smoothing policies can be learned from an algorithmic supervisor and that depth sensing is a valuable addition to color alone.
Learning Arbitrary-Goal Fabric Folding with One Hour of Real Robot Experience
TLDR
This paper shows that it is possible to learn fabric folding skills in only an hour of self-supervised real robot experience, without human supervision or simulation, and creates an expressive goal-conditioned pick and place policy that can be trained efficiently with real world robot data only.
Deep Transfer Learning of Pick Points on Fabric for Robot Bed-Making
TLDR
This work considers the task of bed-making, where a robot sequentially grasps and pulls at pick points to increase blanket coverage, and suggests that transfer-invariant robot pick points on fabric can be effectively learned.
A geometric approach to robotic laundry folding
TLDR
An algorithm is presented which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, termed g-folds, with a minimal number of robot grippers.
Cloth Manipulation Using Random-Forest-Based Imitation Learning
TLDR
This work uses a random-forest-based controller that maps the observed visual features of the cloth to an optimal control action of the manipulator and exhibits superior robustness to observation noise compared with other techniques such as convolutional neural networks and nearest neighbor searches.
Learning dexterous in-hand manipulation
TLDR
This work uses reinforcement learning (RL) to learn dexterous in-hand manipulation policies that can perform vision-based object reorientation on a physical Shadow Dexterous Hand, and these policies transfer to the physical robot despite being trained entirely in simulation.
Domain randomization for transferring deep neural networks from simulation to the real world
TLDR
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
Dynamic Cloth Manipulation with Deep Reinforcement Learning
TLDR
A Deep Reinforcement Learning approach to dynamic cloth manipulation tasks, showing that the trajectory followed has a decisive influence on the final state of the cloth, which can vary greatly even when the grasped points reach the same positions.
Learning Predictive Representations for Deformable Objects Using Contrastive Estimation
TLDR
This work proposes a new learning framework that jointly optimizes both the visual representation model and the dynamics model using contrastive estimation and transfers its visual manipulation policies trained on data purely collected in simulation to a real PR2 robot through domain randomization.
Combining imitation and reinforcement learning to fold deformable planar objects
TLDR
This paper proposes a new learning algorithm that combines imitation and reinforcement learning, and executes what it calls a momentum fold - a swinging motion that exploits the dynamics of the object being manipulated.