GPR: Grasp Pose Refinement Network for Cluttered Scenes

@inproceedings{Wei2021GPRGP,
  title={GPR: Grasp Pose Refinement Network for Cluttered Scenes},
  author={Wei Wei and Yongkang Luo and Fuyu Li and Guangyun Xu and Jun Zhong and Wanyi Li and Peng Wang},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={4295--4302}
}
  • Published 18 May 2021
  • Computer Science
Object grasping in cluttered scenes is a widely investigated field of robot manipulation. Most current works estimate grasp poses from point clouds with an efficient single-shot grasp detection network. However, because such networks lack geometric awareness of the local grasping area, they can produce severe collisions and unstable grasp configurations. In this paper, we propose a two-stage grasp pose refinement network that detects grasps globally while fine-tuning low-quality grasps…

Simultaneous Semantic and Collision Learning for 6-DoF Grasp Pose Estimation
This work formalizes 6-DoF grasp pose estimation as a simultaneous multi-task learning problem and proposes a unified framework that jointly predicts feasible 6-DoF grasp poses, instance semantic segmentation, and collision information.

References

Showing 1-10 of 46 references
Grasp Pose Detection in Point Clouds
A series of robotic experiments is reported, averaging a 93% end-to-end grasp success rate for novel objects presented in dense clutter, an improvement in grasp detection performance.
GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping
This work contributes a large-scale grasp pose detection dataset with a unified evaluation system and proposes an end-to-end grasp pose prediction network that takes point clouds as input and learns the approaching direction and operation parameters in a decoupled manner.
PointNetGPD: Detecting Grasp Configurations from Point Sets
Experiments on object grasping and clutter removal show that the proposed PointNetGPD model generalizes well to novel objects and outperforms state-of-the-art methods.
PointNet++ Grasping: Learning An End-to-end Spatial Grasp Generation Algorithm from Sparse Point Clouds
This paper proposes an end-to-end approach that directly predicts the poses, categories, and scores (qualities) of all grasps, taking whole sparse point clouds as input and requiring no sampling or search process.
ROI-based Robotic Grasp Detection for Object Overlapping Scenes
A robotic grasp detection algorithm named ROI-GD is proposed, providing a feasible solution for object-overlapping scenes based on Regions of Interest (ROIs), i.e., region proposals for objects.
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
This work formulates grasp generation as sampling a set of grasps with a variational autoencoder and assesses and refines the sampled grasps with a grasp evaluator model; trained purely in simulation, the method works in the real world without any extra steps.
S4G: Amodal Single-view Single-Shot SE(3) Grasp Detection in Cluttered Scenes
This paper studies the problem of 6-DoF grasping by a parallel gripper in a cluttered scene captured using a commodity depth sensor from a single viewpoint and proposes a single-shot grasp proposal network, trained with synthetic data and tested in real-world scenarios.
Deep learning for detecting robotic grasps
This work presents a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second, and shows that this method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained only with synthetic data from Dex-Net 2.0 can plan grasps in 0.8 s with a 93% success rate on eight known objects with adversarial geometry.
Efficient grasping from RGBD images: Learning using a new rectangle representation
This work proposes a new "grasping rectangle" representation, an oriented rectangle in the image plane that captures the location, the orientation, and the gripper opening width, and shows that the resulting algorithm successfully picks up a variety of novel objects.
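The grasping-rectangle representation is concrete enough to sketch as a small data structure. This is an illustrative sketch only; the field names, pixel conventions, and corner ordering below are assumptions, not the paper's actual code.

```python
from dataclasses import dataclass
import math

@dataclass
class GraspRectangle:
    """Hypothetical sketch of an oriented grasping rectangle in the
    image plane: a center, an in-plane rotation, and a width that
    corresponds to the gripper opening."""
    cx: float      # rectangle center, x (pixels)
    cy: float      # rectangle center, y (pixels)
    theta: float   # orientation in radians, from the image x-axis
    width: float   # gripper opening width (pixels)
    height: float  # gripper plate extent along the closing axis (pixels)

    def corners(self):
        """Return the four corners of the oriented rectangle,
        counter-clockwise starting from the (-w/2, -h/2) corner."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        pts = []
        for dx, dy in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
            x = dx * self.width / 2
            y = dy * self.height / 2
            # rotate the local offset by theta, then translate to center
            pts.append((self.cx + x * c - y * s,
                        self.cy + x * s + y * c))
        return pts
```

A detector in this family would regress these five numbers per candidate grasp and score each rectangle from the image patch it covers.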
...