GPR: Grasp Pose Refinement Network for Cluttered Scenes

@inproceedings{wei2021gpr,
  title={GPR: Grasp Pose Refinement Network for Cluttered Scenes},
  author={Wei Wei and Yongkang Luo and Fuyu Li and Guangyun Xu and Jun Zhong and Wanyi Li and Peng Wang},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021}
}
  • Published 18 May 2021
Object grasping in cluttered scenes is a widely investigated field of robot manipulation. Most current works estimate grasp poses from point clouds with an efficient single-shot grasp detection network. However, because such networks lack geometry awareness of the local grasping area, they can produce severe collisions and unstable grasp configurations. In this paper, we propose a two-stage grasp pose refinement network that detects grasps globally while fine-tuning low-quality grasps… 
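The detect-then-refine idea from the abstract can be sketched as a simple control flow. All names below are hypothetical, and the detector, scorer, and refiner are toy stand-ins for the learned networks, not the paper's actual method:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Grasp:
    translation: tuple   # gripper position (x, y, z)
    score: float         # predicted grasp quality in [0, 1]

def detect_grasps(point_cloud):
    """Stage 1 stand-in: one coarse grasp proposal per cluster centroid."""
    return [Grasp(translation=p, score=q) for p, q in point_cloud]

def refine(grasp):
    """Stage 2 stand-in: local fine-tuning improves the quality estimate."""
    return replace(grasp, score=min(1.0, grasp.score + 0.3))

def two_stage_pipeline(point_cloud, quality_threshold=0.5):
    grasps = detect_grasps(point_cloud)
    # Only low-quality proposals are sent to the refinement stage.
    return [g if g.score >= quality_threshold else refine(g) for g in grasps]

# Toy "scene": (centroid, initial quality) pairs standing in for real clusters.
scene = [((0.1, 0.2, 0.3), 0.9), ((0.4, 0.1, 0.2), 0.2)]
refined = two_stage_pipeline(scene)
```

The point of the gating step is that only proposals below the quality threshold pay the extra cost of the second stage, so most of the scene is still handled by the fast single-shot pass.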


Robotic Grasping from Classical to Modern: A Survey
This paper surveys advances in robotic grasping, from classical formulations and solutions to modern ones, and discusses open problems and future research directions that may be important for human-level robustness, autonomy, and intelligence of robots.
Dexterous Manipulation for Multi-Fingered Robotic Hands With Reinforcement Learning: A Review
Presents a comprehensive review of techniques for dexterous manipulation with multi-fingered robotic hands, from the early model-based approaches without learning to the latest research on reinforcement learning methods and their variations.
Simultaneous Semantic and Collision Learning for 6-DoF Grasp Pose Estimation
This work formalizes 6-DoF grasp pose estimation as a simultaneous multi-task learning problem and proposes a unified framework that jointly predicts feasible 6-DoF grasp poses, instance semantic segmentation, and collision information.
Grasp Pose Detection in Point Clouds
Reports a series of robotic experiments averaging a 93% end-to-end grasp success rate on novel objects presented in dense clutter, an improvement driven by better grasp detection performance.
GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping
This work contributes a large-scale grasp pose detection dataset with a unified evaluation system and proposes an end-to-end grasp pose prediction network given point cloud inputs, where the network learns approaching direction and operation parameters in a decoupled manner.
PointNetGPD: Detecting Grasp Configurations from Point Sets
Experiments on object grasping and clutter removal show that the proposed PointNetGPD model generalizes well to novel objects and outperforms state-of-the-art methods.
PointNet++ Grasping: Learning An End-to-end Spatial Grasp Generation Algorithm from Sparse Point Clouds
This paper proposes an end-to-end approach that directly predicts the poses, categories, and scores (qualities) of all grasps, taking the whole sparse point cloud as input and requiring no sampling or search process.
ROI-based Robotic Grasp Detection for Object Overlapping Scenes
A robotic grasp detection algorithm named ROI-GD is proposed to provide a feasible solution to this problem based on Region of Interest (ROI), which is the region proposal for objects.
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
This work formulates grasp generation as sampling a set of grasps with a variational autoencoder, then assesses and refines the sampled grasps with a grasp evaluator model; the system is trained purely in simulation and works in the real world without any extra steps.
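The sample-then-evaluate loop described above can be illustrated with a toy sketch. The sampler and evaluator here are hypothetical analytic stand-ins for the paper's VAE and learned evaluator, and the hill-climbing step substitutes for gradient-based refinement:

```python
def evaluator(grasp):
    # Toy quality function peaking at grasp position x = 1.0
    # (stand-in for a learned grasp evaluator network).
    x = grasp[0]
    return max(0.0, 1.0 - abs(x - 1.0))

def sample_grasps(n):
    # Stand-in for drawing latent codes and decoding them with a VAE.
    return [(0.25 * i,) for i in range(n)]

def refine_grasp(grasp, step=0.05, iters=20):
    # Hill-climb on the evaluator score: a crude substitute for
    # refining samples along the evaluator's gradient.
    best = grasp
    for _ in range(iters):
        left, right = (best[0] - step,), (best[0] + step,)
        best = max((best, left, right), key=evaluator)
    return best

candidates = [refine_grasp(g) for g in sample_grasps(5)]
best = max(candidates, key=evaluator)
```

The design idea this mirrors is the split of responsibilities: the generative sampler only needs to cover plausible grasp modes, while the evaluator both ranks the samples and supplies the signal for refining each one locally.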
S4G: Amodal Single-view Single-Shot SE(3) Grasp Detection in Cluttered Scenes
This paper studies the problem of 6-DoF grasping by a parallel gripper in a cluttered scene captured using a commodity depth sensor from a single viewpoint and proposes a single-shot grasp proposal network, trained with synthetic data and tested in real-world scenarios.
Deep learning for detecting robotic grasps
This work presents a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second, and shows that this method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8sec with a success rate of 93% on eight known objects with adversarial geometry.
Efficient grasping from RGBD images: Learning using a new rectangle representation
This work proposes a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane that encodes the location, the orientation, and the gripper opening width, and shows that the resulting algorithm successfully picks up a variety of novel objects.
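The oriented-rectangle representation can be captured in a small data structure. The field names and the corner ordering below are assumptions for illustration, not the paper's exact parameterization:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class GraspRectangle:
    cx: float      # center x in image coordinates (pixels)
    cy: float      # center y
    theta: float   # in-plane rotation (radians)
    width: float   # gripper opening width (pixels)
    height: float  # jaw / finger extent (pixels)

    def corners(self):
        """Return the four corners, rotated by theta about the center."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        hw, hh = self.width / 2, self.height / 2
        out = []
        for dx, dy in ((-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh)):
            out.append((self.cx + dx * c - dy * s,
                        self.cy + dx * s + dy * c))
        return out

# Axis-aligned example (theta = 0): a 40 px opening centered at (100, 50).
rect = GraspRectangle(cx=100.0, cy=50.0, theta=0.0, width=40.0, height=20.0)
```

Compared with a single grasp point, the rectangle makes the required gripper opening and approach orientation explicit, which is what lets a 2D detector output directly drive a parallel-jaw gripper.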