6-DOF GraspNet: Variational Grasp Generation for Object Manipulation

@inproceedings{Mousavian20196DOFGV,
  title={6-DOF GraspNet: Variational Grasp Generation for Object Manipulation},
  author={Arsalan Mousavian and Clemens Eppner and Dieter Fox},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={2901-2910}
}
Generating grasp poses is a crucial component for any robot object manipulation task. [...] Key Method: Both the Grasp Sampler and Grasp Refinement networks take 3D point clouds observed by a depth camera as input. We evaluate our approach in simulation and real-world robot experiments. Our approach achieves an 88% success rate on various commonly used objects with diverse appearances, scales, and weights. Our model is trained purely in simulation and works in the real world without any extra steps. The video of our…
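As a rough illustration of the variational sampling idea described above, the sketch below shows a minimal conditional-VAE-style grasp sampler over point clouds. The PointNet-style encoder, layer sizes, latent dimension, and the translation-plus-quaternion grasp encoding are assumptions for illustration, not the paper's exact architecture, and the separate grasp evaluator/refinement network is omitted.

# Minimal sketch of a conditional-VAE grasp sampler (illustrative, not the paper's exact model).
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: shared MLP over points followed by max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points):          # points: (B, N, 3)
        feats = self.mlp(points)        # per-point features (B, N, feat_dim)
        return feats.max(dim=1).values  # global shape feature (B, feat_dim)

class GraspVAE(nn.Module):
    """Conditional VAE over 6-DoF grasps, conditioned on the observed point cloud."""
    def __init__(self, feat_dim=256, latent_dim=4, grasp_dim=7):
        super().__init__()
        self.latent_dim = latent_dim
        self.pc_encoder = PointCloudEncoder(feat_dim)
        self.grasp_encoder = nn.Sequential(      # q(z | grasp, point cloud)
            nn.Linear(feat_dim + grasp_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),      # -> (mu, logvar)
        )
        self.decoder = nn.Sequential(            # p(grasp | z, point cloud)
            nn.Linear(feat_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, grasp_dim),           # -> translation (3) + quaternion (4)
        )

    def forward(self, points, grasps):
        cond = self.pc_encoder(points)
        mu, logvar = self.grasp_encoder(torch.cat([cond, grasps], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decoder(torch.cat([cond, z], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample(self, points, num_grasps=10):
        """Draw diverse grasp candidates by decoding samples from the latent prior."""
        cond = self.pc_encoder(points).repeat_interleave(num_grasps, dim=0)
        z = torch.randn(cond.shape[0], self.latent_dim)
        return self.decoder(torch.cat([cond, z], dim=-1))          # (B*num_grasps, 7)

At inference time such a sampler is queried with the observed point cloud and random latent draws; in the paper the sampled grasps are then scored and iteratively refined by a separate evaluator network, which this sketch leaves out.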
Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes
TLDR
This work proposes an end-to-end network that efficiently generates a distribution of 6-DoF parallel-jaw grasps directly from a depth recording of a scene, treats 3D points of the recorded point cloud as potential grasp contacts, and reduces the dimensionality of the grasp representation to 4-DoF, which greatly facilitates the learning process.
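As a sketch of the reduced, contact-anchored grasp representation this entry refers to, the snippet below maps a predicted contact point, baseline and approach directions, and opening width back to a full 6-DoF gripper pose. The axis conventions and the base-depth offset are illustrative assumptions, not the paper's exact definitions.

import numpy as np

def grasp_pose_from_contact(contact, baseline, approach, width, base_depth=0.10):
    """Reconstruct a full 6-DoF gripper pose (4x4 matrix) from a contact-anchored
    grasp: contact point, finger-to-finger baseline direction, approach direction,
    and gripper opening width. Conventions here are illustrative assumptions."""
    b = baseline / np.linalg.norm(baseline)   # direction from one finger contact toward the other
    a = approach / np.linalg.norm(approach)   # direction along which the gripper approaches
    a = a - np.dot(a, b) * b                  # re-orthogonalize approach against the baseline
    a /= np.linalg.norm(a)
    R = np.stack([b, np.cross(a, b), a], axis=1)    # columns: gripper x, y, z axes in camera frame
    t = contact + 0.5 * width * b - base_depth * a  # shift from the contact to the gripper root point
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Example: one predicted contact with its directions and opening width.
T = grasp_pose_from_contact(
    contact=np.array([0.05, 0.00, 0.60]),
    baseline=np.array([1.0, 0.0, 0.0]),
    approach=np.array([0.0, 0.0, 1.0]),
    width=0.04,
)

Anchoring the grasp at an observed contact point is what removes two degrees of freedom from the learning problem: the network only has to predict the directions and the width, while the translation follows from the contact itself.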
GraspVDN: scene-oriented grasp estimation by learning vector representations of grasps
  • Zhipeng Dong, Hongkun Tian, Xuefeng Bao, Yunhui Yan, Fei Chen
  • Complex & Intelligent Systems
  • 2021
TLDR
This work presents a scene-oriented grasp estimation scheme that takes constraints imposed on the grasp pose by the environment into consideration and trains on samples satisfying those constraints, achieving performance comparable to the state of the art while being efficient at runtime.
Human Initiated Grasp Space Exploration Algorithm for an Underactuated Robot Gripper Using Variational Autoencoder
TLDR
This article presents an efficient procedure for exploring the grasp space of a multifingered adaptive gripper to generate reliable grasps given a known object pose, reaching a grasp success rate of 99.91% over 7000 trials.
Learning an end-to-end spatial grasp generation and refinement algorithm from simulation
TLDR
The algorithm takes the whole sparse point cloud as input and, based on an SPH3D-GCN network, predicts grasp poses, categories, and scores (qualities) without any sampling or search process.
6-DoF Contrastive Grasp Proposal Network
TLDR
A 6-DoF contrastive grasp proposal network (CGPN) infers 6-DoF grasps from a single-view depth image; it is trained offline with synthetic grasp data to improve robustness in the real world and bridge the simulation-to-real gap.
A Geometric Approach for Grasping Unknown Objects With Multifingered Hands
TLDR
This article proposes a method for grasping unknown objects in cluttered scenes from a noisy input point cloud, based on a shape complementarity metric, followed by an optimization-based refinement of the hand poses and finger configurations to achieve a power grasp of the target object.
Robot Learning of 6 DoF Grasping using Model-based Adaptive Primitives
TLDR
This work parametrizes the two remaining lateral degrees of freedom of the primitives and applies them to the task of 6-DoF bin picking, introducing a model-based controller that calculates angles which avoid collisions and maximize grasp quality while keeping uncertainty small.
Automatic Grasp Pose Generation for Parallel Jaw Grippers
TLDR
A novel approach for automatic offline grasp pose synthesis on known rigid objects for parallel-jaw grippers, using criteria such as gripper stroke, surface friction, and a collision check to determine suitable 6D grasp poses on an object.
6-DoF Grasp Planning using Fast 3D Reconstruction and Grasp Quality CNN
TLDR
This work adapts LSM to graspable objects, evaluates the resulting grasps, and develops a 6-DoF grasp planner based on a Grasp-Quality CNN (GQ-CNN) that exploits multiple camera views to plan a robust grasp, even in the absence of a feasible top-down grasp.
Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps
TLDR
This work proposes a novel, end-to-end grasp proposal network, GPNet, to predict a diverse set of 6-DOF grasps for an unseen object observed from a single and unknown camera view; results show the advantage of the method over existing ones.

References

SHOWING 1-10 OF 44 REFERENCES
Grasp Planning by Optimizing a Deep Learning Scoring Function
Learning deep networks from large simulation datasets is a promising approach for robot grasping, but previous work has so far been limited to the simplified problem of overhead, parallel-jaw grasps.
Template-based learning of grasp selection
TLDR
A new grasp selection algorithm that finds object grasp poses based on previously demonstrated grasps and improves over time, using information from previous grasp attempts to adapt the ranking of the templates.
PointNetGPD: Detecting Grasp Configurations from Point Sets
TLDR
Experiments on object grasping and clutter removal show that the proposed PointNetGPD model generalizes well to novel objects and outperforms state-of-the-art methods.
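A minimal sketch of the geometric step this entry builds on: transforming the scene point cloud into a candidate grasp frame and keeping only the points that fall inside the gripper's closing region, which are then fed to a point-set quality network. The axis conventions and gripper dimensions below are illustrative assumptions.

import numpy as np

def points_in_closing_region(points, grasp_T, width=0.08, height=0.02, depth=0.06):
    """Return scene points inside the gripper closing volume of a candidate grasp,
    expressed in the grasp frame. The closing volume is modeled as a box; the
    dimensions and axis conventions are illustrative assumptions."""
    R, t = grasp_T[:3, :3], grasp_T[:3, 3]
    local = (points - t) @ R   # rotate into the grasp frame (equivalent to applying R^T)
    # Assume: x spans the finger opening (width), y the finger thickness (height),
    # z the approach direction (depth).
    mask = (
        (np.abs(local[:, 0]) <= width / 2)
        & (np.abs(local[:, 1]) <= height / 2)
        & (local[:, 2] >= 0.0) & (local[:, 2] <= depth)
    )
    return local[mask]

# Example: crop a random cloud around an identity grasp pose at the origin.
cloud = np.random.uniform(-0.1, 0.1, size=(2048, 3))
cropped = points_in_closing_region(cloud, np.eye(4))

The cropped, grasp-frame point set is what makes the quality network invariant to where the grasp sits in the camera frame; only the local geometry between the fingers is scored.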
Generating Grasp Poses for a High-DOF Gripper Using Neural Networks
TLDR
The method is robust and can handle noisy object models, such as those constructed from multi-view depth images, allowing it to be implemented on a 25-DOF Shadow Hand hardware platform.
Learning Grasp Strategies with Partial Shape Information
TLDR
An approach to grasping is proposed that estimates the stability of different grasps given only noisy estimates of the shape of the visible portions of an object, such as those obtained from a depth sensor.
Learning 6-DOF Grasping Interaction via Deep Geometry-Aware 3D Representations
TLDR
A deep geometry-aware grasping network (DGGN) decomposes the learning into two steps, constraining and regularizing grasp interaction learning through 3D geometry prediction; the model is shown to generalize to novel viewpoints and object instances.
Learning Object Grasping for Soft Robot Hands
TLDR
The power of a 3D CNN model is exploited to estimate suitable grasp poses from multiple grasping directions (top and side directions) and wrist orientations, which has great potential for geometry-related robotic tasks.
Grasp Pose Detection in Point Clouds
TLDR
A series of robotic experiments is reported, averaging a 93% end-to-end grasp success rate for novel objects presented in dense clutter, an improvement in grasp detection performance.
Grasping of Unknown Objects Using Deep Convolutional Neural Networks Based on Depth Images
TLDR
The approach is able to handle full end-effector poses, and therefore approach directions other than the view direction of the camera, and is not limited to a certain grasping setup (e.g., a parallel-jaw gripper) by design.
Planning Multi-Fingered Grasps as Probabilistic Inference in a Learned Deep Network
TLDR
This work is the first to directly plan high-quality multifingered grasps in configuration space using a deep neural network without the need for an external planner, and shows that the planning method outperforms existing neural-network-based planning approaches.