Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics

@article{Mahler2017DexNet2D,
  title={Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics},
  author={Jeffrey Mahler and Jacky Liang and Sherdil Niyaz and Michael Laskey and Richard Doan and Xinyu Liu and Juan Aparicio Ojea and Ken Goldberg},
  journal={ArXiv},
  year={2017},
  volume={abs/1703.09312}
}
To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table. We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the… 
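The GQ-CNN at the core of this work maps a depth image of a grasp candidate, together with the gripper's depth relative to the camera, to a predicted probability of grasp success. As a rough illustration of that input/output contract, here is a minimal PyTorch sketch; the layer sizes, the 32x32 crop resolution, and the way the scalar gripper depth is fused with the image features are illustrative assumptions, not the released Dex-Net 2.0 architecture.

# Minimal GQ-CNN-style grasp quality classifier (sketch, not the released Dex-Net model).
# Input: a depth image crop centered on a grasp candidate, plus the gripper depth z;
# output: predicted probability that the grasp succeeds.
import torch
import torch.nn as nn

class GraspQualityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional stream over the 32x32 depth crop (sizes are illustrative).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected stream for the scalar gripper depth.
        self.depth_fc = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
        # Merge both streams and predict a success logit.
        self.head = nn.Sequential(
            nn.Linear(32 * 8 * 8 + 16, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, depth_crop, gripper_depth):
        x = self.conv(depth_crop).flatten(1)         # (B, 32*8*8) image features
        z = self.depth_fc(gripper_depth)             # (B, 16) depth features
        logit = self.head(torch.cat([x, z], dim=1))  # (B, 1)
        return torch.sigmoid(logit).squeeze(1)       # grasp success probability

# Example: score a batch of 4 candidate grasps.
model = GraspQualityCNN()
crops = torch.randn(4, 1, 32, 32)  # depth image crops, one per candidate
z = torch.randn(4, 1)              # gripper depth relative to the camera
p_success = model(crops, z)

In the paper, a network with this role is trained on the 6.7 million synthetic (point cloud, grasp, metric) examples and used at planning time to rank sampled grasp candidates.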
Dex-Net 3.0: Computing Robust Vacuum Suction Grasp Targets in Point Clouds Using a New Analytic Model and Deep Learning
TLDR
A compliant suction contact model is proposed that computes the quality of the seal between the suction cup and the local target surface, together with a measure of the ability of the suction grasp to resist an external gravity wrench.
On-Policy Dataset Synthesis for Learning Robot Grasping Policies Using Fully Convolutional Deep Networks
TLDR
A synthetic data sampling distribution is proposed that combines grasps sampled from the policy action set with guiding samples from a robust grasping supervisor that has full state knowledge, improving the rate and reliability of the learned robot policy.
Learning robust, real-time, reactive robotic grasping
TLDR
A novel approach to perform object-independent grasp synthesis from depth images via deep neural networks overcomes shortcomings in existing techniques, namely discrete sampling of grasp candidates and long computation times, and achieves better performance, particularly in clutter.
6-DoF Grasp Planning using Fast 3D Reconstruction and Grasp Quality CNN
TLDR
LSM is modified to handle graspable objects, the resulting grasps are evaluated, and a 6-DoF grasp planner based on the Grasp Quality CNN (GQ-CNN) is developed that exploits multiple camera views to plan a robust grasp, even in the absence of a feasible top-down grasp.
Grasping of Unknown Objects Using Deep Convolutional Neural Networks Based on Depth Images
TLDR
The approach is able to handle full end-effector poses, and therefore approach directions other than the camera's view direction, and is not limited by design to a particular grasping setup (e.g., a parallel-jaw gripper).
GraspFusionNet: a two-stage multi-parameter grasp detection network based on RGB–XYZ fusion in dense clutter
TLDR
The results indicate that the proposed two-stage method achieves the best performance among the compared grasp detection algorithms, with an average grasp success rate of 82.4% that also exceeds other state-of-the-art methods.
Grasp Planning by Optimizing a Deep Learning Scoring Function
Learning deep networks from large simulation datasets is a promising approach for robot grasping, but previous work has so far been limited to the simplified problem of overhead, parallel-jaw grasps.
Supplementary File for “Dex-Net 3.0: Computing Robust Robot Suction Grasp Targets in Point Clouds using a New Analytic Model and Deep Learning”
Our primary numeric metrics of performance were: 1) Average Precision (AP), the area under the precision-recall curve, which measures precision over possible thresholds on the predicted probability of success.
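Concretely, AP can be computed from the predicted success probabilities and the binary grasp outcomes by sweeping the classification threshold; a minimal scikit-learn sketch, with toy labels and scores that are purely illustrative:

# Average Precision (AP): area under the precision-recall curve, obtained by
# sweeping the decision threshold on the predicted probability of grasp success.
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

y_true = np.array([1, 0, 1, 1, 0, 0, 1])                  # ground-truth grasp outcomes
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2])   # predicted P(success)

ap = average_precision_score(y_true, y_score)              # scalar AP summary
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(f"AP = {ap:.3f}")

Here average_precision_score summarizes the precision-recall curve as the weighted mean of precisions attained at each threshold, with the increase in recall from the previous threshold as the weight.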
Learn to Grasp with Less Supervision: A Data-Efficient Maximum Likelihood Grasp Sampling Loss
TLDR
A Maximum Likelihood Grasp Sampling Loss (MLGSL) is proposed to tackle the data sparsity issue; results suggest that models trained with MLGSL can learn to grasp from datasets containing only 2 labels per image, making them 8× more data-efficient than current state-of-the-art techniques.
REGNet: REgion-based Grasp Network for Single-shot Grasp Detection in Point Clouds
TLDR
An end-to-end single-shot grasp detection network for parallel grippers that takes a single-view point cloud as input and significantly outperforms several successful point-cloud-based grasp detection methods, including GPD, PointNetGPD, and S4G.

References

Showing 1-10 of 76 references.
Dex-Net 1.0: A cloud-based network of 3D objects for robust grasp planning using a Multi-Armed Bandit model with correlated rewards
TLDR
The Dexterity Network (Dex-Net) 1.0 is presented: a dataset of 3D object models and a sampling-based planning algorithm for exploring how Cloud Robotics can be used for robust grasp planning, with a report on the system's sensitivity to variations in similarity metrics and to uncertainty in pose and friction.
High precision grasp pose detection in dense clutter
TLDR
This paper proposes two new representations of grasp candidates, and quantifies the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models.
Deep learning a grasp function for grasping under gripper pose uncertainty
TLDR
A new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty, which trains a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image.
Leveraging big data for grasp planning
TLDR
A deep learning method is applied and shown to better leverage the large-scale database for predicting grasp success than logistic regression; the results suggest that labels based on the physics-metric are less noisy than those from the ε-metrics and therefore lead to better classification performance.
Shape completion enabled robotic grasping
TLDR
This work provides an architecture to enable robotic grasp planning via shape completion through the use of a 3D convolutional neural network trained on a new open source dataset of over 440,000 3D exemplars captured from varying viewpoints.
Deep learning for detecting robotic grasps
TLDR
This work presents a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second, and shows that this method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
Using Geometry to Detect Grasp Poses in 3D Point Clouds
TLDR
A set of necessary conditions on the geometry of a grasp is identified and used to generate grasp hypotheses, which helps focus grasp detection away from regions where no grasp can exist.
Large-scale supervised learning of the grasp robustness of surface patch pairs
TLDR
This work uses the BIDMach machine learning toolkit to compare the performance of two supervised learning methods: Random Forests and Deep Learning and finds that both learn to estimate grasp robustness fairly reliably in terms of Mean Absolute Error (MAE) and ROC Area Under Curve (AUC) on a held-out test set.
Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours
  • Lerrel Pinto, A. Gupta
  • 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016
TLDR
This paper takes the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts, which allows us to train a Convolutional Neural Network for the task of predicting grasp locations without severe overfitting.
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
TLDR
The approach achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing, and illustrates that data from different robots can be combined to learn more reliable and effective grasping.