Corpus ID: 216562390

Event-based Robotic Grasping Detection with Neuromorphic Vision Sensor and Event-Stream Dataset

@article{Li2020EventbasedRG,
  title={Event-based Robotic Grasping Detection with Neuromorphic Vision Sensor and Event-Stream Dataset},
  author={Bin Li and Hu Cao and Zhongnan Qu and Yingbai Hu and Zhenke Wang and Zichen Liang},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.13652}
}
Robotic grasping plays an important role in the field of robotics. Current state-of-the-art robotic grasping detection systems are usually built on conventional vision sensors, such as RGB-D cameras. Compared to traditional frame-based computer vision, neuromorphic vision is a small and young research community. Event-based datasets are currently scarce because annotating the asynchronous event stream is troublesome, and annotating a large-scale vision dataset often takes lots of…
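
As context for the event streams discussed above, here is a minimal sketch of the standard address-event representation used by neuromorphic sensors, with a simple accumulation of events into a frame. The field names, the 346x260 resolution, and the synthetic data are illustrative assumptions, not the paper's actual dataset format.

```python
import numpy as np

# Standard address-event representation (AER): each event is a tuple
# (x, y, timestamp, polarity). Field names and the 346x260 resolution
# (a DAVIS346-style sensor) are illustrative assumptions.
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.float64), ("p", np.int8)])

def events_to_frame(events, width=346, height=260):
    """Accumulate a slice of the event stream into a signed count image,
    e.g. for visualization or for drawing grasp annotations."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame

# Usage with 1000 synthetic events over a 10 ms window.
rng = np.random.default_rng(0)
ev = np.zeros(1000, dtype=event_dtype)
ev["x"] = rng.integers(0, 346, size=1000)
ev["y"] = rng.integers(0, 260, size=1000)
ev["t"] = np.sort(rng.uniform(0.0, 0.01, size=1000))
ev["p"] = rng.choice(np.array([-1, 1], dtype=np.int8), size=1000)
print(events_to_frame(ev).shape)  # (260, 346)
```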

Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes

TLDR
A keypoint-based grasp detection scheme that helps robots grasp the target in single-object and multi-object scenes, with overall success rates of 94% and 87%, respectively.
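
The TLDR does not specify the keypoint parameterization; a minimal sketch, assuming the detector outputs two fingertip keypoints (a hypothetical choice), of how such keypoints could be decoded into a grasp center, opening width, and angle:

```python
import math

def grasp_from_keypoints(p_left, p_right):
    """Hypothetical decoding: assume the detector outputs two fingertip
    keypoints and recover grasp center, opening width, and rotation angle.
    The two-keypoint parameterization is an assumption for illustration."""
    (x1, y1), (x2, y2) = p_left, p_right
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    width = math.hypot(x2 - x1, y2 - y1)
    angle = math.atan2(y2 - y1, x2 - x1)  # radians, image coordinates
    return center, width, angle

print(grasp_from_keypoints((100, 120), (140, 150)))
```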

Failure Handling of Robotic Pick and Place Tasks With Multimodal Cues Under Partial Object Occlusion

TLDR
This work proposes a hybrid policy that combines visual cues with gripper proprioception for effective failure detection and recovery in grasping, using a self-developed soft robotic gripper capable of contact sensing.
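
A toy sketch of the hybrid failure-detection idea, assuming a scalar vision-based success score and a binary contact signal (both hypothetical names and thresholds); the paper's actual policy is more involved:

```python
def grasp_failed(visual_score, contact_detected, visual_threshold=0.5):
    """Toy hybrid check combining a vision-based success score with a
    binary contact signal from the gripper. Signal names and threshold
    are assumptions for illustration."""
    # Declare failure if either cue is missing: no contact felt, or the
    # visual verdict falls below the confidence threshold.
    return (not contact_detected) or (visual_score < visual_threshold)

# Example: vision is fairly confident but the soft gripper feels nothing,
# so the pick is flagged for a recovery attempt.
print(grasp_failed(visual_score=0.7, contact_detected=False))  # True
```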

References

Showing 1–10 of 26 references

Multi-Object Grasping Detection With Hierarchical Feature Fusion

TLDR
A novel grasp detection algorithm, termed the multi-object grasping detection network, which utilizes hierarchical features to learn an object detector and a grasping pose estimator simultaneously.
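
A minimal sketch of what hierarchical feature fusion can look like, written here as a generic FPN-style top-down pathway in PyTorch; the channel sizes and fusion design are assumptions and may differ from the paper's network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalFusion(nn.Module):
    """Minimal FPN-style fusion of multi-scale backbone features. The
    channel sizes and the top-down pathway are generic assumptions."""
    def __init__(self, in_channels=(256, 512, 1024), out_channels=128):
        super().__init__()
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)

    def forward(self, feats):  # feats ordered high-res -> low-res
        laterals = [l(f) for l, f in zip(self.laterals, feats)]
        # Top-down pathway: upsample coarse maps and add to finer ones.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        return laterals  # fused maps usable by detection and grasp heads

fused = HierarchicalFusion()([torch.randn(1, 256, 80, 80),
                              torch.randn(1, 512, 40, 40),
                              torch.randn(1, 1024, 20, 20)])
print([f.shape for f in fused])
```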

Robotic grasp detection using deep convolutional neural networks

TLDR
A novel robotic grasp detection system that predicts the best grasping pose of a parallel-plate robotic gripper for novel objects using the RGB-D image of the scene, and then uses a shallow convolutional neural network to predict the grasp configuration for the object of interest.
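
A minimal sketch of the second stage under the common five-parameter grasp convention: a shallow CNN regressing (x, y, theta, width, height) from an RGB-D crop of the detected object. All layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class ShallowGraspNet(nn.Module):
    """Sketch of a shallow CNN that regresses a grasp configuration
    (x, y, theta, width, height) from an RGB-D crop. Layer sizes and the
    5-value output follow the common rectangle parameterization and are
    assumptions, not the paper's exact architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),   # 4 = RGB + depth
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 5)

    def forward(self, rgbd_crop):  # (N, 4, H, W)
        return self.head(self.features(rgbd_crop).flatten(1))

print(ShallowGraspNet()(torch.randn(1, 4, 96, 96)).shape)  # (1, 5)
```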

Low-latency visual odometry using event-based feature tracks

TLDR
This paper presents a low-latency visual odometry algorithm for the DAVIS sensor using event-based feature tracks that tightly interleaves robust pose optimization and probabilistic mapping and shows that the method successfully tracks the 6-DOF motion of the sensor in natural scenes.

Combined frame- and event-based detection and tracking

This paper reports an object tracking algorithm for a moving platform using the dynamic and active-pixel vision sensor (DAVIS). It takes advantage of both the active pixel sensor (APS) frames and the asynchronous events from the dynamic vision sensor (DVS).
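
A crude stand-in for the frame/event combination, reusing the structured event array from the sketch after the abstract: the frame-based detector supplies a bounding box, and between frames the box is shifted toward the centroid of recent events near it. The margin and update rule are assumptions, not the paper's tracker:

```python
import numpy as np

def update_bbox_with_events(bbox, events):
    """Between APS frames, shift the last detected bounding box toward the
    centroid of DVS events inside a margin around it. A crude stand-in for
    the combined tracker, for illustration only."""
    x, y, w, h = bbox
    m = 0.2  # search margin around the box, an arbitrary assumption
    sel = ((events["x"] >= x - m * w) & (events["x"] <= x + (1 + m) * w) &
           (events["y"] >= y - m * h) & (events["y"] <= y + (1 + m) * h))
    if not sel.any():
        return bbox  # no supporting events: keep the frame-based estimate
    cx, cy = events["x"][sel].mean(), events["y"][sel].mean()
    return (cx - w / 2.0, cy - h / 2.0, w, h)
```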

Vision meets robotics: The KITTI dataset

TLDR
A novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research, using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras and a high-precision GPS/IMU inertial navigation system.

Low-latency localization by active LED markers tracking using a dynamic vision sensor

TLDR
A method for low-latency pose tracking using a DVS and active LED markers, which are LEDs blinking at high frequency (>1 kHz), compared against traditional pose tracking based on a CMOS camera.
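
A simplified sketch of the blink-frequency idea: flag pixels whose inter-event intervals match the LED period. The median test, tolerance, and minimum event count are assumptions; the actual method also handles event polarity and marker identification:

```python
import numpy as np
from collections import defaultdict

def led_pixels(xs, ys, ts, blink_hz=1000.0, rel_tol=0.2, min_events=5):
    """Return pixels whose median inter-event interval matches the LED
    blink period. Parameters are simplifying assumptions."""
    period = 1.0 / blink_hz
    per_pixel = defaultdict(list)
    for x, y, t in zip(xs, ys, ts):
        per_pixel[(x, y)].append(t)
    hits = []
    for pix, times in per_pixel.items():
        if len(times) < min_events:
            continue  # too few events to estimate a blink period
        dt = np.median(np.diff(np.sort(times)))
        if abs(dt - period) < rel_tol * period:
            hits.append(pix)
    return hits

# Pixel (10, 5) fires every 1 ms, matching a 1 kHz blink.
xs = [10, 10, 10, 10, 10, 20]
ys = [5, 5, 5, 5, 5, 7]
ts = [0.000, 0.001, 0.002, 0.003, 0.004, 0.010]
print(led_pixels(xs, ys, ts))  # [(10, 5)]
```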

Deep Grasp: Detection and Localization of Grasps with Deep Neural Networks

TLDR
A deep learning architecture is proposed to predict graspable locations for robotic manipulation by transforming grasp configuration regression into a classification problem with null hypothesis competition; the deep neural network with RGB-D image input predicts multiple grasp candidates on a single unseen object, as well as on multiple novel objects, in a single shot.
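
A minimal sketch of the regression-as-classification idea with a null class: discretize the grasp angle into bins and let an extra "no grasp" logit compete with them in the softmax. The bin count and head shape are assumptions:

```python
import torch
import torch.nn as nn

# Discretize the grasp angle into K bins plus one "no grasp" (null
# hypothesis) class that competes in the softmax. K and the 512-d
# feature size are illustrative assumptions.
K = 18  # e.g. 10-degree angle bins over 180 degrees
angle_head = nn.Linear(512, K + 1)  # last logit = null class

features = torch.randn(1, 512)      # stand-in for backbone features
probs = torch.softmax(angle_head(features), dim=-1)
if probs[0, -1] > probs[0, :-1].max():
    print("no graspable orientation at this location")
else:
    print("best angle bin:", int(probs[0, :-1].argmax()))
```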

Fast Event-based Corner Detection

TLDR
This work proposes a method to reduce an event stream to a corner event stream, which is capable of processing millions of events per second on a single core and reduces the event rate by a factor of 10 to 20.
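
An illustrative filter in the same spirit, not the paper's actual FAST-style arc test on the surface of active events: keep an event only when recent activity in its local timestamp-surface patch is offset from the center. The patch size, time window, and asymmetry test are all assumptions:

```python
import numpy as np

def filter_corner_events(events, width=346, height=260, patch=4,
                         recent=0.005):
    """Reduce an event stream to a 'corner event' stream via a crude
    asymmetry heuristic on the surface of active events (latest timestamp
    per pixel). A stand-in for the actual FAST-style arc check."""
    sae = np.full((height, width), -np.inf)  # surface of active events
    kept = []
    for e in events:
        x, y, t = int(e["x"]), int(e["y"]), e["t"]
        sae[y, x] = t
        x0, x1 = max(0, x - patch), min(width, x + patch + 1)
        y0, y1 = max(0, y - patch), min(height, y + patch + 1)
        fresh = sae[y0:y1, x0:x1] > t - recent  # recently active pixels
        ys, xs = np.nonzero(fresh)
        if len(xs) >= 3:
            # Corner-like if recent activity is offset from the center.
            off = np.hypot(xs.mean() - (x - x0), ys.mean() - (y - y0))
            if off > patch / 2.0:
                kept.append(e)
    return kept
```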

Efficient Fully Convolution Neural Network for Generating Pixel Wise Robotic Grasps With High Resolution Images

TLDR
An efficient neural network model to generate robotic grasps from high-resolution images that first down-samples the images to extract features, then up-samples those features to the original input size, and combines local and global features from different feature maps.
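
A tiny encoder-decoder sketch of the down-sample/up-sample idea, emitting per-pixel grasp maps at the input resolution; the depths, the three output maps (quality, angle, width), and the omission of the local/global feature combination are simplifying assumptions:

```python
import torch
import torch.nn as nn

class PixelwiseGraspFCN(nn.Module):
    """Tiny encoder-decoder sketch: downsample to features, upsample back
    to input resolution, and emit per-pixel grasp maps (quality, angle,
    width). Depths and output parameterization are assumptions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1))

    def forward(self, depth):  # (N, 1, H, W) -> (N, 3, H, W)
        return self.decoder(self.encoder(depth))

out = PixelwiseGraspFCN()(torch.randn(1, 1, 224, 224))
print(out.shape)  # torch.Size([1, 3, 224, 224])
```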

Efficient grasping from RGBD images: Learning using a new rectangle representation

TLDR
This work proposes a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane that takes into account the location, the orientation, and the gripper opening width, and shows that the algorithm can successfully pick up a variety of novel objects.
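
The grasping rectangle is commonly parameterized as (center, orientation, opening width, jaw height); a small sketch converting that parameterization into the four image-plane corners (exact storage conventions vary per dataset):

```python
import math

def rectangle_corners(x, y, theta, w, h):
    """Convert a (center, angle, opening width, jaw height) grasp
    rectangle into its four image-plane corners. The parameter order
    follows the common convention; exact conventions vary per dataset."""
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        # Rotate the offset by theta, then translate to the center.
        corners.append((x + dx * c - dy * s, y + dx * s + dy * c))
    return corners

print(rectangle_corners(160, 120, math.pi / 6, 60, 20))
```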