Using Synthetic Data and Deep Networks to Recognize Primitive Shapes for Object Grasping

@inproceedings{Lin2019UsingSD,
  title={Using Synthetic Data and Deep Networks to Recognize Primitive Shapes for Object Grasping},
  author={Yunzhi Lin and Chao Tang and Fu-Jen Chu and Patricio A. Vela},
  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2020},
  pages={10494-10501}
}
  • Yunzhi Lin, Chao Tang, Fu-Jen Chu, Patricio A. Vela
  • Published 12 September 2019
  • Computer Science
  • 2020 IEEE International Conference on Robotics and Automation (ICRA)
A segmentation-based architecture is proposed to decompose objects into multiple primitive shapes from monocular depth input for robotic manipulation. The backbone deep network is trained on synthetic data with 6 classes of primitive shapes generated by a simulation engine. Each primitive shape is designed with parametrized grasp families, permitting the pipeline to identify multiple grasp candidates per shape primitive region. The grasps are priority ordered via a proposed ranking algorithm… 
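
To make the described flow concrete, here is a minimal Python sketch of a segment-then-grasp pipeline. The primitive class names, the toy network, the grasp-family sampler, and the scoring function are all illustrative assumptions, not the paper's actual components:

```python
import numpy as np

# Hypothetical primitive classes; the paper uses six classes, but these
# particular names are assumptions for exposition.
PRIMITIVES = ("cuboid", "cylinder", "ring", "sphere", "semi-sphere", "stick")

def segment_primitives(depth, network):
    """Per-pixel primitive labels from a monocular depth image."""
    logits = network(depth)                  # (H, W, len(PRIMITIVES))
    return logits.argmax(axis=-1)

def grasp_family(label, center):
    """Enumerate a few grasp candidates for one primitive region.
    Real grasp families are parametrized continuously per shape."""
    angles = np.linspace(0.0, np.pi, 4, endpoint=False)
    return [{"center": center, "angle": a, "label": label} for a in angles]

def rank_grasps(cands, score_fn):
    """Priority-order candidates; the paper's ranking criteria are not
    reproduced here, so an externally supplied score stands in."""
    return sorted(cands, key=score_fn, reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_net = lambda d: rng.random(d.shape + (len(PRIMITIVES),))  # stand-in backbone
    depth = rng.random((48, 64))             # fake depth image
    labels = segment_primitives(depth, toy_net)
    cands = []
    for k in np.unique(labels):              # one candidate set per region
        ys, xs = np.nonzero(labels == k)
        cands += grasp_family(PRIMITIVES[k], (ys.mean(), xs.mean()))
    best = rank_grasps(cands, score_fn=lambda g: -abs(g["angle"] - np.pi / 2))
    print(best[0])
```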

Primitive Shape Recognition for Object Grasping

The outcomes support the hypothesis that explicitly encoding shape primitives within a grasping pipeline should boost grasping performance, including task-free and task-relevant grasp prediction.

GKNet: Grasp keypoint network for grasp candidates detection

GKNet outperforms reference baselines in static and dynamic grasping experiments while showing robustness to varied camera viewpoints and moderate clutter, confirming the hypothesis that grasp keypoints are an effective output representation for deep grasp networks, providing robustness to expected nuisance factors.
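
A detected keypoint pair can be decoded into the familiar center/angle/width grasp representation. The sketch below is a simplified reading of that idea, assuming the keypoints mark the two fingertip locations; GKNet's actual heads, grouping, and post-processing are more involved:

```python
import math

def keypoints_to_grasp(p_left, p_right):
    """Convert a keypoint pair (fingertip locations in image coordinates)
    into a center/angle/width grasp. A minimal sketch, not GKNet's code."""
    (x1, y1), (x2, y2) = p_left, p_right
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    width = math.hypot(x2 - x1, y2 - y1)     # gripper opening in pixels
    angle = math.atan2(y2 - y1, x2 - x1)     # grasp axis orientation
    return center, angle, width

# Example: two keypoints straddling an object.
print(keypoints_to_grasp((100, 120), (140, 120)))  # ((120.0, 120.0), 0.0, 40.0)
```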

Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis

An eye-in-hand, learning-based approach for hand pre-shape classification from RGB sequences is presented, showing that models trained with the synthetic dataset achieve better generalization performance than models trained on real data.

Recognizing Object Affordances to Support Scene Reasoning for Manipulation Tasks.

An affordance recognition pipeline is proposed, based on a category-agnostic region proposal network that proposes instance regions of an image across categories, and a self-attention mechanism trained to interpret each proposal, which learns to capture rich contextual dependencies through the region.

Learning-based Fast Path Planning in Complex Environments

A novel algorithm achieves fast path planning in complex environments by combining a learning-based prediction module with a sampling-based path planning module, delivering much better performance in terms of planning time, success rate, and path length.

Multi-view Fusion for Multi-level Robotic Scene Understanding

By developing and fusing recent techniques in these domains, this work provides a rich scene representation for robot awareness and demonstrates the importance of each of these modules, their complementary nature, and the potential benefits of the system in the context of robotic manipulation.

Where Shall I Touch? Vision-Guided Tactile Poking for Transparent Object Grasping

A novel framework of vision-guided tactile poking for transparent object grasping is proposed, which could be adopted by other force or tactile sensors and could be used for grasping other challenging objects.

Real-time synthetic-to-real human detection for robotics applications

This work assesses how well a model trained on synthetic data generalizes to real data, and explores the effect of using a few real images in the training phase to improve the performance of the real-time synthetic-to-real model.

Active Scene Understanding from Image Sequences for Next-Generation Computer Vision

This work creates a prototype computer vision system for acquiring a deeper scene understanding and shows that this approach is capable of solving many common shortcomings of traditional object recognition approaches, including the understanding of 3D occlusion, the ability to aggregate information over many frames, the understanding of object permanence, and the active control of the camera in a beneficial way.

Review of Deep Reinforcement Learning-Based Object Grasping: Techniques, Open Challenges, and Recommendations

This comprehensive review of deep reinforcement learning in the manipulation field may be valuable for researchers and practitioners, as it can expedite the establishment of important guidelines.

References

SHOWING 1-10 OF 47 REFERENCES

Domain Randomization and Generative Models for Robotic Grasping

A novel data generation pipeline for training a deep neural network to perform grasp planning applies the idea of domain randomization to object synthesis, and achieves a >90% success rate on previously unseen realistic objects at test time in simulation despite having been trained only on random objects.
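
The core of domain randomization is to resample every nuisance factor of the synthetic scene for each training example. A toy sketch, with all fields and ranges assumed purely for illustration:

```python
import random

def randomize_scene():
    """One draw from a hypothetical domain-randomized scene distribution:
    random composite objects, textures, lighting, and camera pose. The
    specific fields and ranges here are illustrative assumptions."""
    return {
        "parts": [random.choice(["box", "cylinder", "wedge"])
                  for _ in range(random.randint(3, 8))],  # random composite object
        "texture_seed": random.getrandbits(32),
        "light_intensity": random.uniform(0.3, 1.5),
        "camera_height_m": random.uniform(0.4, 0.9),
    }

# Training data is generated by sampling many such scenes, so the grasp
# network rarely sees the same (unrealistic) object twice.
dataset = [randomize_scene() for _ in range(4)]
```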

Real-World Multiobject, Multigrasp Detection

A deep learning architecture is proposed to predict graspable locations for robotic manipulation. By framing the learning problem as classification with null-hypothesis competition instead of regression, the deep neural network with red, green, blue, and depth (RGB-D) image input predicts multiple grasp candidates for a single object or multiple objects, in a single shot.
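
The null-hypothesis competition can be pictured as one extra class that competes with the orientation bins at decode time. The following sketch assumes that encoding and is not the paper's released code:

```python
import numpy as np

def decode_grasp_bins(scores):
    """Pick a grasp orientation by classification with a null class.
    `scores` holds one logit per orientation bin plus a final
    'no grasp' null-hypothesis logit that competes with them."""
    null_logit = scores[-1]
    bin_logits = scores[:-1]
    best = int(np.argmax(bin_logits))
    if bin_logits[best] <= null_logit:       # null hypothesis wins
        return None
    num_bins = len(bin_logits)
    return best * (np.pi / num_bins)         # bin index -> angle (rad)

print(decode_grasp_bins(np.array([0.1, 2.3, 0.4, 0.2])))  # ~1.047 rad (bin 1 of 3)
```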

Selection of robot pre-grasps using box-based shape approximation

  • K. Huebner, D. Kragic
  • Computer Science
    2008 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • 2008
The authors motivate how boxes, as one of the simplest representations, can be applied in a more sophisticated manner to generate task-dependent grasps.
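
A fitted box yields a small, enumerable set of parallel-jaw pre-grasps by pairing its opposite faces. A minimal sketch under that assumption; the actual box decomposition and task scoring are not reproduced:

```python
def box_pregrasps(extents):
    """Enumerate parallel-jaw pre-grasps for a fitted bounding box by
    closing across each axis and approaching along either remaining axis;
    the opening width follows from the box extent on the closing axis."""
    pregrasps = []
    for axis, width in enumerate(extents):   # close the jaws across this axis
        for approach in (a for a in range(3) if a != axis):
            pregrasps.append({"close_axis": axis,
                              "approach_axis": approach,
                              "opening_width": width})
    return pregrasps

# A 5 x 10 x 20 cm box yields six face-pair pre-grasps to rank.
print(len(box_pregrasps((0.05, 0.10, 0.20))))  # 6
```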

Grasp planning for everyday objects based on primitive shape representation for parallel jaw grippers

  • N. Yamanobe, K. Nagata
  • Computer Science
    2010 IEEE International Conference on Robotics and Biomimetics
  • 2010
Seven kinds of shape primitives for abstracting objects to be grasped are proposed for efficient grasp planning, and an experimental result of applying this primitive-shape-based grasp planning method to a mobile manipulator is shown.

Planning Multi-Fingered Grasps as Probabilistic Inference in a Learned Deep Network

This work is the first to directly plan high-quality multi-fingered grasps in configuration space using a deep neural network without the need for an external planner, and shows that the planning method outperforms existing neural-network-based planning methods.

A New Approach Based on Two-stream CNNs for Novel Objects Grasping in Clutter

A deep learning approach is applied to the problem of grasping novel objects in clutter, proposing a ‘grasp circle’ method, parameterized by the size of the gripper, to find more potential grasps at each sampling point at lower cost.
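
One plausible reading of the ‘grasp circle’ construction is antipodal contact pairs sampled on a circle whose radius is fixed by the gripper opening, so each sampled point yields several candidates cheaply. The sketch below is an assumption-laden illustration, not the authors' implementation:

```python
import numpy as np

def grasp_circle(point, gripper_width, n=8):
    """Generate antipodal contact pairs on a circle around one sampled
    point; the radius is set by the gripper size, so each sample point
    yields n candidate grasps at little extra cost."""
    r = gripper_width / 2.0
    angles = np.linspace(0.0, np.pi, n, endpoint=False)
    cx, cy = point
    left = np.stack([cx + r * np.cos(angles), cy + r * np.sin(angles)], axis=1)
    right = np.stack([cx - r * np.cos(angles), cy - r * np.sin(angles)], axis=1)
    return list(zip(left, right))            # n candidate contact pairs

pairs = grasp_circle((0.0, 0.0), gripper_width=0.08)  # 8 candidates per point
```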

On-Policy Dataset Synthesis for Learning Robot Grasping Policies Using Fully Convolutional Deep Networks

A synthetic data sampling distribution is proposed that combines grasps sampled from the policy action set with guiding samples from a robust grasping supervisor that has full state knowledge, improving the rate and reliability of the learned robot policy.
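
The proposed sampling distribution can be mimicked by blending the two sample sources in each training batch. The blending fraction and batch size below are assumed knobs, not the paper's reported values:

```python
import random

def mixed_batch(policy_grasps, supervisor_grasps, supervisor_frac=0.5, k=64):
    """Draw one training batch mixing grasps sampled from the policy's own
    action set with guiding samples from a full-state grasping supervisor."""
    n_sup = int(k * supervisor_frac)
    batch = (random.sample(supervisor_grasps, n_sup) +
             random.sample(policy_grasps, k - n_sup))
    random.shuffle(batch)
    return batch

batch = mixed_batch(list(range(100)), list(range(100, 200)))
print(len(batch))  # 64
```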

Jacquard: A Large Scale Dataset for Robotic Grasp Detection

The results show that Jacquard enables much better generalization skills than a human-labeled dataset, thanks to its diversity of objects and grasping positions.

Data-Driven Grasp Synthesis—A Survey

A review of the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps is provided, drawing a parallel to the classical approaches that rely on analytic formulations.

High precision grasp pose detection in dense clutter

This paper proposes two new representations of grasp candidates, and quantifies the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models.