Collision-Aware Target-Driven Object Grasping in Constrained Environments

@inproceedings{lou2021collision,
  title={Collision-Aware Target-Driven Object Grasping in Constrained Environments},
  author={Xibai Lou and Yang Yang and Changhyun Choi},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021}
}
Grasping a novel target object in constrained environments (e.g., walls, bins, and shelves) requires intensive reasoning about grasp pose reachability to avoid collisions with surrounding structures. Typical 6-DoF robotic grasping systems rely on prior knowledge of the environment and intensive planning computation, which generalizes poorly and is inefficient. In contrast, we propose a novel Collision-Aware Reachability Predictor (CARP) for 6-DoF grasping systems. The CARP learns to…
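The core idea of collision-aware reachability can be illustrated with a geometric labeling routine: given a candidate 6-DoF grasp pose and a point cloud of the surrounding structure, mark the pose unreachable if the gripper's swept approach volume comes too close to any obstacle point. This is a minimal illustrative sketch, not the paper's learned CARP model; the gripper dimensions, clearance threshold, and function names are assumptions chosen for the example.

```python
import numpy as np

def gripper_points(pose, depth=0.10, width=0.08, n=64):
    """Sample points along a crude swept volume of the gripper approach.
    pose: (4, 4) homogeneous transform of the grasp frame, with the
    z-axis as the approach direction. Sizes are illustrative only."""
    # points along the approach axis in the gripper frame
    zs = np.linspace(-depth, 0.0, n)
    pts = np.stack([np.zeros(n), np.zeros(n), zs], axis=1)
    # widen into three parallel rakes to approximate the finger span
    offsets = np.array([[-width / 2, 0, 0], [0, 0, 0], [width / 2, 0, 0]])
    pts = (pts[:, None, :] + offsets[None, :, :]).reshape(-1, 3)
    # transform into the world frame
    return pts @ pose[:3, :3].T + pose[:3, 3]

def label_reachability(pose, obstacle_pts, clearance=0.01):
    """Return 1 (reachable) if every sampled gripper point stays at
    least `clearance` metres from every obstacle point, else 0.
    Labels like these could supervise a learned reachability model."""
    g = gripper_points(pose)
    dists = np.linalg.norm(g[:, None, :] - obstacle_pts[None, :, :], axis=2)
    return int(dists.min() > clearance)
```

A pose approaching straight through a wall of obstacle points is labeled 0, while the same pose in free space is labeled 1; a learned predictor would amortize this check so that no explicit environment model is needed at test time.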


Learning Object Relations with Graph Neural Networks for Target-Driven Grasping in Dense Clutter
A target-driven grasping system is proposed that simultaneously considers object relations and predicts 6-DoF grasp poses, together with a shape-completion-assisted grasp pose sampling method that improves sample quality and, consequently, grasping efficiency.
Learning Pick to Place Objects using Self-supervised Learning with Minimal Training Resources
A deep Q-network, a model-free deep reinforcement learning method for robotic grasping, is employed in this paper, and the experimental outcomes indicate that the approach successfully grasps objects while consuming minimal time and computing resources.
Learning Suction Graspability Considering Grasp Quality and Robot Reachability for Bin-Picking
This study annotates pixel-wise grasp quality and reachability with the proposed evaluation metric on synthesized images in a simulator to train an auto-encoder-decoder called suction graspability U-Net++ (SG-U-Net++).
Interactive Robotic Grasping with Attribute-Guided Disambiguation
This paper investigates the use of object attributes in disambiguation and develops an interactive grasping system capable of effectively resolving ambiguities via dialogues, and proposes an attribute-guided formulation of the partially observable Markov decision process (Attr-POMDP) for disambiguation.
Cluttered Food Grasping with Adaptive Fingers and Synthetic-Data Trained Object Detection
This work proposes a method that trains purely on synthetic data and successfully transfers to the real world using sim2real techniques, creating datasets of filled food trays from high-quality 3D models of real pieces of food to train instance segmentation models.


Knowledge Induced Deep Q-Network for a Slide-to-Wall Object Grasping
This paper formulates the Slide-to-Wall grasping problem as a Markov Decision Process and proposes a Knowledge Induced DQN (KI-DQN) that not only trains more effectively but also significantly outperforms the standard DQN in testing cases with unseen walls, and can be deployed directly on real robots without fine-tuning, whereas the standard DQN cannot.
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
This work formulates grasp generation as sampling a set of grasps with a variational autoencoder, then assesses and refines the sampled grasps with a grasp evaluator model; the system is trained purely in simulation and works in the real world without any extra steps.
Learning Object Grasping for Soft Robot Hands
The power of a 3D CNN model is exploited to estimate suitable grasp poses from multiple grasping directions (top and side directions) and wrist orientations, which has great potential for geometry-related robotic tasks.
A Deep Learning Approach to Grasping the Invisible
The target-oriented motion critic, which maps both visual observations and target information to the expected future rewards of pushing and grasping motion primitives, is learned via deep Q-learning and the motion critic and the classifier are trained in a self-supervised manner through robot-environment interactions.
Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision
The Task-Oriented Grasping Network (TOG-Net) is proposed to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool.
Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning
This work demonstrates that it is possible to discover and learn complex synergies between non-prehensile and prehensile actions from scratch through model-free deep reinforcement learning, and achieves better grasping success rates and picking efficiencies than baseline alternatives after a few hours of training.
High precision grasp pose detection in dense clutter
This paper proposes two new representations of grasp candidates, and quantifies the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models.
Workspace Aware Online Grasp Planning
This work provides a framework for a workspace-aware online grasp planner, which greatly improves the performance of standard online grasp planning algorithms by incorporating a notion of the robot's reachable workspace.
An overview of 3D object grasp synthesis algorithms
Grasp Pose Detection in Point Clouds
A series of robotic experiments is reported, averaging a 93% end-to-end grasp success rate for novel objects presented in dense clutter, an improvement in grasp detection performance.