Collision-Aware Target-Driven Object Grasping in Constrained Environments

@inproceedings{Lou2021CollisionAwareTO,
  title={Collision-Aware Target-Driven Object Grasping in Constrained Environments},
  author={Xibai Lou and Yang Yang and Changhyun Choi},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={6364-6370}
}
  • Published 1 April 2021
Grasping a novel target object in constrained environments (e.g., walls, bins, and shelves) requires intensive reasoning about grasp pose reachability to avoid collisions with the surrounding structures. Typical 6-DoF robotic grasping systems rely on prior knowledge of the environment and intensive planning computation, which generalizes poorly and is inefficient. In contrast, we propose a novel Collision-Aware Reachability Predictor (CARP) for 6-DoF grasping systems. The CARP learns to…
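The abstract describes scoring 6-DoF grasp candidates for collision-aware reachability before execution. Below is a minimal toy sketch of that general idea, not the paper's implementation: the random pose sampler, the single-wall clearance heuristic, and the 0.5 threshold are all illustrative assumptions, whereas CARP itself is a learned neural predictor.

```python
# Toy sketch: filter sampled 6-DoF grasp candidates by a reachability score.
# The scoring function here is a hand-written stand-in for a learned predictor.
import numpy as np

def sample_grasp_candidates(n, rng):
    """Random 6-DoF grasp poses: position (x, y, z) + orientation (roll, pitch, yaw)."""
    positions = rng.uniform(-0.5, 0.5, size=(n, 3))
    orientations = rng.uniform(-np.pi, np.pi, size=(n, 3))
    return np.hstack([positions, orientations])

def reachability_score(pose, wall_x=0.4):
    """Stand-in predictor: penalize poses whose x-position approaches a wall at x = wall_x."""
    clearance = wall_x - pose[0]
    return float(np.clip(clearance / wall_x, 0.0, 1.0))

def filter_reachable(candidates, threshold=0.5):
    """Keep only candidates whose reachability score clears the threshold."""
    scores = np.array([reachability_score(p) for p in candidates])
    return candidates[scores >= threshold]

rng = np.random.default_rng(0)
cands = sample_grasp_candidates(100, rng)
reachable = filter_reachable(cands)
print(f"{len(reachable)}/{len(cands)} candidates pass the reachability filter")
```

In this sketch the filter simply thresholds a per-pose score; a learned predictor would replace `reachability_score` with a network evaluated on the observed scene, letting the downstream grasp planner consider only poses deemed collision-free and reachable.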

Citations

Learning Pick to Place Objects using Self-supervised Learning with Minimal Training Resources
Grasping objects is a critical but challenging aspect of robotic manipulation. Recent studies have concentrated on complex architectures and large, well-labeled data sets that need extensive…
Learning suction graspability considering grasp quality and robot reachability for bin-picking
TLDR: This study annotates pixel-wise grasp quality and reachability with a proposed evaluation metric on synthesized images in a simulator to train an auto-encoder-decoder called suction graspability U-Net++ (SG-U-Net++).

References

Showing 1-10 of 38 references
Knowledge Induced Deep Q-Network for a Slide-to-Wall Object Grasping
TLDR: This paper formulates the Slide-to-Wall grasping problem as a Markov Decision Process and proposes a Knowledge Induced DQN (KI-DQN) that not only trains more effectively but also significantly outperforms the standard DQN in test cases with unseen walls, and can be deployed directly on real robots without fine-tuning, while the standard DQN cannot.
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
TLDR: This work formulates grasp generation as sampling a set of grasps with a variational autoencoder, then assesses and refines the sampled grasps with a grasp evaluator model; both are trained purely in simulation and work in the real world without any extra steps.
Learning Object Grasping for Soft Robot Hands
TLDR: The power of a 3D CNN model is exploited to estimate suitable grasp poses from multiple grasping directions (top and side) and wrist orientations, which has great potential for geometry-related robotic tasks.
A Deep Learning Approach to Grasping the Invisible
TLDR: A target-oriented motion critic, which maps visual observations and target information to the expected future rewards of pushing and grasping motion primitives, is learned via deep Q-learning; the motion critic and a classifier are trained in a self-supervised manner through robot-environment interactions.
Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision
TLDR: The Task-Oriented Grasping Network (TOG-Net) is proposed to jointly optimize task-oriented grasping of a tool and the manipulation policy for that tool.
Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning
TLDR: This work demonstrates that complex synergies between non-prehensile and prehensile actions can be discovered and learned from scratch through model-free deep reinforcement learning, achieving better grasping success rates and picking efficiency than baseline alternatives after a few hours of training.
High precision grasp pose detection in dense clutter
TLDR: This paper proposes two new representations of grasp candidates and quantifies the effect of two forms of prior knowledge: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models.
Workspace Aware Online Grasp Planning
This work provides a framework for a workspace aware online grasp planner. This framework greatly improves the performance of standard online grasp planning algorithms by incorporating a notion of…
An overview of 3D object grasp synthesis algorithms
TLDR: This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands, covering both analytical and empirical grasp synthesis approaches.
Grasp Pose Detection in Point Clouds
TLDR: A series of robotic experiments is reported that averages a 93% end-to-end grasp success rate for novel objects presented in dense clutter, reflecting improved grasp detection performance.