Transferring Experience from Simulation to the Real World for Precise Pick-And-Place Tasks in Highly Cluttered Scenes

@article{Kleeberger2020TransferringEF,
  title={Transferring Experience from Simulation to the Real World for Precise Pick-And-Place Tasks in Highly Cluttered Scenes},
  author={Kilian Kleeberger and Markus V{\"o}lk and Marius Moosmann and Erik Thiessenhusen and Florian Roth and Richard Bormann and Marco F. Huber},
  journal={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2020},
  pages={9681-9688}
}
In this paper, we introduce a novel learning-based approach for grasping known rigid objects in highly cluttered scenes and precisely placing them based on depth images. Our Placement Quality Network (PQ-Net) estimates the object pose and a quality score for each automatically generated grasp pose, for multiple objects simultaneously, at 92 fps in a single forward pass of a neural network. All grasping and placement trials are executed in a physics simulation and the experience gained is transferred…
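The abstract describes a single-shot design: one forward pass over a depth image yields object poses and a quality score for every precomputed grasp pose. Below is a minimal sketch of that idea, assuming a PyTorch-style model; the class name PQNetSketch, the head layout, and the pose parameterization are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a PQ-Net-style single-shot network (illustrative only).
# Assumptions: a depth image input, a fixed set of K precomputed grasp poses
# per object, and object pose parameterized as translation + quaternion.
import torch
import torch.nn as nn

class PQNetSketch(nn.Module):
    def __init__(self, num_grasps: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(          # shared depth-image encoder
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Per-cell heads: 7 pose values (3 translation + 4 quaternion)
        # and one quality score per precomputed grasp pose.
        self.pose_head = nn.Conv2d(128, 7, 1)
        self.quality_head = nn.Conv2d(128, num_grasps, 1)

    def forward(self, depth: torch.Tensor):
        feat = self.backbone(depth)                        # (B, 128, H/8, W/8)
        pose = self.pose_head(feat)                        # object pose per grid cell
        quality = torch.sigmoid(self.quality_head(feat))   # grasp quality in [0, 1]
        return pose, quality

# A single forward pass yields poses and grasp qualities for all objects at once.
model = PQNetSketch()
pose, quality = model(torch.randn(1, 1, 480, 640))
```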

Citations

Precise Object Placement with Pose Distance Estimations for Different Objects and Grippers
TLDR
By incorporating model knowledge into the system, this approach achieves higher grasp success rates than state-of-the-art model-free approaches and chooses grasps that result in significantly more precise object placements than prior model-based work.
Automatic Grasp Pose Generation for Parallel Jaw Grippers
TLDR
A novel approach for automatic offline grasp pose synthesis on known rigid objects for parallel jaw grippers, which applies criteria such as gripper stroke, surface friction, and a collision check to determine suitable 6D grasp poses on an object.
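The criteria named in this summary map naturally onto a candidate-filtering loop. The sketch below is a hypothetical illustration of such offline filtering for antipodal parallel-jaw candidates, with assumed constants (MAX_STROKE, FRICTION_COEFF) and a stubbed collision check; it is not the paper's pipeline.

```python
# Illustrative filter for antipodal parallel-jaw grasp candidates.
# Assumption: candidates are (p1, n1, p2, n2) contact point / outward
# normal pairs sampled from the object surface.
import numpy as np

MAX_STROKE = 0.085      # gripper opening width in meters (assumed)
FRICTION_COEFF = 0.5    # assumed surface friction coefficient

def inside_friction_cone(p1, n1, p2, n2, mu=FRICTION_COEFF):
    """Antipodal check: the grasp axis must lie inside both friction cones."""
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)
    half_angle = np.arctan(mu)
    ok1 = np.arccos(np.clip(np.dot(-axis, n1), -1.0, 1.0)) <= half_angle
    ok2 = np.arccos(np.clip(np.dot(axis, n2), -1.0, 1.0)) <= half_angle
    return ok1 and ok2

def collision_free(p1, p2):
    """Placeholder: a real pipeline would test the gripper mesh here."""
    return True

def filter_grasps(candidates):
    valid = []
    for p1, n1, p2, n2 in candidates:
        if np.linalg.norm(p2 - p1) > MAX_STROKE:      # gripper stroke limit
            continue
        if not inside_friction_cone(p1, n1, p2, n2):  # surface friction
            continue
        if not collision_free(p1, p2):                # collision check
            continue
        valid.append((p1, n1, p2, n2))
    return valid
```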
Investigations on Output Parameterizations of Neural Networks for Single Shot 6D Object Pose Estimation
TLDR
This work proposes different novel parameterizations for the output of the neural network for single shot 6D object pose estimation, achieves state-of-the-art performance on two public benchmark datasets, and demonstrates that the pose estimates can be used for real-world robotic grasping tasks without additional ICP refinement.
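As an example of what an output parameterization for rotation can look like, the sketch below decodes the well-known continuous 6D rotation representation of Zhou et al. (2019) via Gram-Schmidt. Whether this particular parameterization is among those investigated in the paper is not stated here, so treat it purely as illustration.

```python
# Decode a raw 6-vector network output into a valid rotation matrix
# (the continuous 6D representation of Zhou et al., 2019).
import numpy as np

def rotation_from_6d(x: np.ndarray) -> np.ndarray:
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)        # first basis vector
    a2 = a2 - np.dot(b1, a2) * b1       # remove component along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)               # complete a right-handed frame
    return np.stack([b1, b2, b3], axis=1)   # columns are the basis vectors

R = rotation_from_6d(np.random.randn(6))
assert np.allclose(R.T @ R, np.eye(3), atol=1e-8)   # R is orthonormal
```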
Cluttered Food Grasping with Adaptive Fingers and Synthetic-Data Trained Object Detection
TLDR
This work proposes a method that trains purely on synthetic data and successfully transfers to the real world using sim2real methods, by creating datasets of filled food trays from high-quality 3D models of real pieces of food for training instance segmentation models.

References

SHOWING 1-10 OF 46 REFERENCES
Learning robust, real-time, reactive robotic grasping
TLDR
A novel approach performs object-independent grasp synthesis from depth images via deep neural networks, overcoming shortcomings in existing techniques, namely discrete sampling of grasp candidates and long computation times, and achieves better performance, particularly in clutter.
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
TLDR
This work studies how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images, including a novel extension of pixel-level domain adaptation termed GraspGAN.
Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
TLDR
The proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel and overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times.
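A per-pixel grasp map of this kind can be sketched as a small fully convolutional network that outputs quality, angle (encoded as sin/cos of twice the angle, since grasps are symmetric under 180° rotation), and width maps. The architecture below is an illustrative stand-in, not the published GG-CNN.

```python
# Sketch of per-pixel grasp prediction in the spirit of GG-CNN
# (illustrative architecture and layer sizes, not the published model).
import torch
import torch.nn as nn

class PixelwiseGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 9, padding=4), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 4, 3, padding=1),   # quality, cos 2a, sin 2a, width
        )

    def forward(self, depth):
        q, cos2a, sin2a, width = self.net(depth).split(1, dim=1)
        angle = 0.5 * torch.atan2(sin2a, cos2a)   # grasp angle per pixel
        return torch.sigmoid(q), angle, width

net = PixelwiseGraspNet()
q, angle, width = net(torch.randn(1, 1, 300, 300))
best = torch.nonzero(q[0, 0] == q[0, 0].max())[0]   # pixel of the best grasp
```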
Domain randomization for transferring deep neural networks from simulation to the real world
TLDR
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
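Domain randomization boils down to sampling fresh rendering parameters for every simulated training image, so the real world looks like just another variation. The following sketch shows the idea; the parameter names and ranges are assumptions, not values from the paper.

```python
# Minimal sketch of domain randomization: each simulated training image
# is rendered with freshly sampled scene parameters (assumed names/ranges).
import random

def sample_render_params():
    return {
        "light_count": random.randint(1, 4),
        "light_intensity": random.uniform(0.3, 1.5),
        "camera_jitter_m": [random.uniform(-0.05, 0.05) for _ in range(3)],
        "object_texture": random.choice(["noise", "checker", "flat", "photo"]),
        "distractor_objects": random.randint(0, 10),
    }

params = sample_render_params()   # new randomization per rendered scene
```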
A Survey on Learning-Based Robotic Grasping
TLDR
This review provides a comprehensive overview of machine learning approaches for vision-based robotic grasping and manipulation, and surveys techniques and achievements in transferring from simulation to the real world.
Learning Based Robotic Bin-picking for Potentially Tangled Objects
TLDR
This research proposes a method for avoiding situations where a robot picks multiple objects at once, using a Convolutional Neural Network to predict whether or not the robot can pick one and only one object from a bin of tangled objects.
Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
TLDR
This Multi-View Picking (MVP) controller uses an active perception approach to choose informative viewpoints based directly on a distribution of grasp pose estimates in real time, reducing uncertainty in the grasp poses caused by clutter and occlusions.
Real-time grasp detection using convolutional neural networks
TLDR
An accurate, real-time approach to robotic grasp detection based on convolutional neural networks that outperforms state-of-the-art approaches by 14 percentage points and runs at 13 frames per second on a GPU.
Deep learning for detecting robotic grasps
TLDR
This work presents a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second, and shows that this method improves performance on an RGB-D robotic grasping dataset and can be used to successfully execute grasps on two different robotic platforms.
Improving Data Efficiency of Self-supervised Learning for Robotic Grasping
TLDR
A learning algorithm is derived from an applied point of view to significantly reduce the amount of required training data and to predict grasp and gripper parameters, with great advantages in training as well as inference performance.