Corpus ID: 221970052

Learning to Detect Objects with a 1 Megapixel Event Camera

@article{Perot2020LearningTD,
  title={Learning to Detect Objects with a 1 Megapixel Event Camera},
  author={E. Perot and Pierre de Tournemire and D. Nitti and Jonathan Masci and A. Sironi},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.13436}
}
Event cameras encode visual information with high temporal precision, low data rate, and high dynamic range. Thanks to these characteristics, event cameras are particularly suited to scenarios involving high motion, challenging lighting conditions, and low-latency requirements. However, due to the novelty of the field, the performance of event-based systems on many vision tasks is still lower than that of conventional frame-based solutions. The main reasons for this performance gap are: the lower…
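As background for the detection setting described in the abstract, the sketch below shows one common way (not necessarily this paper's own pipeline) of binning an asynchronous stream of events (x, y, timestamp, polarity) into a dense per-polarity histogram that a standard convolutional detector can consume; the function name, array layout, and NumPy usage are illustrative assumptions.

    import numpy as np

    def events_to_histogram(events, height, width, num_bins, t_start, t_end):
        """Bin an event stream into a (num_bins, 2, H, W) count tensor.

        events: (N, 4) array with columns (x, y, t, polarity), polarity in {0, 1}.
        This layout is an assumption for illustration, not the paper's format.
        """
        x = events[:, 0].astype(np.int64)
        y = events[:, 1].astype(np.int64)
        t = events[:, 2]
        p = events[:, 3].astype(np.int64)

        # Assign each event to a temporal bin inside [t_start, t_end).
        bins = ((t - t_start) / (t_end - t_start) * num_bins).astype(np.int64)
        bins = np.clip(bins, 0, num_bins - 1)

        # Count events per (temporal bin, polarity, y, x) cell.
        hist = np.zeros((num_bins, 2, height, width), dtype=np.float32)
        np.add.at(hist, (bins, p, y, x), 1.0)
        return hist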

Citations

DSEC: A Stereo Event Camera Dataset for Driving Scenarios
TLDR: This work presents the first high-resolution, large-scale stereo dataset with event cameras, DSEC, which contains 53 sequences collected by driving in a variety of illumination conditions and provides ground-truth disparity for the development and evaluation of event-based stereo algorithms.
Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks
TLDR: The method enables the training of sparse spiking convolutional neural networks directly on event data, using the popular deep learning framework PyTorch, and its performance in terms of accuracy, sparsity, and training time makes it possible to use this bio-inspired approach for the future embedding of real-time applications on low-power neuromorphic hardware.
Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy
TLDR: This work approaches the intensity reconstruction problem from a self-supervised learning perspective, and combines estimated optical flow and the event-based photometric constancy to train neural networks without the need for any ground-truth or synthetic data.
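For reference, the event-based photometric constancy mentioned in this summary is commonly written as follows in the event-vision literature (background material, not quoted from the paper above): an event of polarity p fires when the log-brightness change since the last event at that pixel reaches the contrast threshold C, and under brightness constancy that change is linked to the optical flow u.

    % Event generation model (contrast threshold C, polarity p in {-1, +1}):
    \Delta L(x, y, t) \,\doteq\, L(x, y, t) - L(x, y, t - \Delta t) \,=\, p\,C
    % Linearized brightness constancy linking events to the optical flow u:
    \Delta L(x, y, t) \,\approx\, -\nabla L(x, y, t) \cdot u(x, y)\,\Delta t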
DVS-OUTLAB: A Neuromorphic Event-Based Long Time Monitoring Dataset for Real-World Outdoor Scenarios
Neuromorphic vision sensors are biologically inspired devices which differ fundamentally from well-known frame-based sensors. Even though developments in this research area are increasing, …
Hardware-Algorithm Co-Design Enabling Efficient Event-based Object Detection
TLDR: A new technique is presented to improve the object detection performance of event-based cameras, and a 1.88% to 2.17% improvement in energy efficiency is demonstrated.
Learning From Images: A Distillation Learning Framework for Event Cameras
  • Yongjian Deng, Hao Chen, Huiying Chen, Youfu Li
  • Computer Science, Medicine
  • IEEE Transactions on Image Processing
  • 2021
TLDR: This paper proposes a simple yet effective distillation learning framework, including multi-level customized knowledge distillation constraints, that can significantly boost the feature extraction process for event data and is applicable to various downstream tasks.
TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset
Event cameras are bio-inspired vision sensors which measure per-pixel brightness changes. They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic …
v2e: From Video Frames to Realistic DVS Events
  • Yuhuang Hu, Shih-Chii Liu, T. Delbrück
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2021
To help meet the increasing need for dynamic vision sensor (DVS) event camera data, this paper proposes the v2e toolbox that generates realistic synthetic DVS events from intensity frames. It also …
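The core idea behind such frame-to-event conversion can be sketched as follows; this is a deliberately simplified illustration (linear interpolation of log intensity, a single fixed contrast threshold), not v2e's actual model, which additionally simulates noise, leak events, and pixel bandwidth.

    import numpy as np

    def frames_to_events(frame0, frame1, t0, t1, threshold=0.2, eps=1e-6):
        """Emit (x, y, t, polarity) events where the log intensity between two
        consecutive grayscale frames (values in [0, 1]) crosses the threshold."""
        delta = np.log(frame1 + eps) - np.log(frame0 + eps)

        # Number of threshold crossings per pixel.
        num_crossings = np.floor(np.abs(delta) / threshold).astype(np.int64)

        events = []
        ys, xs = np.nonzero(num_crossings)
        for y, x in zip(ys, xs):
            polarity = 1 if delta[y, x] > 0 else -1
            for k in range(1, num_crossings[y, x] + 1):
                # Timestamp of the k-th crossing, assuming a linear change in time.
                frac = k * threshold / abs(delta[y, x])
                events.append((x, y, t0 + frac * (t1 - t0), polarity))
        events.sort(key=lambda e: e[2])
        return events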

References

Showing 1–10 of 82 references
Event-based Vision: A Survey
TLDR: This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.
Video to Events: Recycling Video Datasets for Event Cameras
TLDR: This paper presents a method that addresses these needs by converting any existing video dataset recorded with conventional cameras to synthetic event data, which unlocks the use of a virtually unlimited number of existing video datasets for training networks designed for real event data.
Video to Events: Bringing Modern Computer Vision Closer to Event Cameras
TLDR: This paper presents a method that addresses these needs by converting any existing video dataset recorded with conventional cameras to synthetic event data, which unlocks the use of a virtually unlimited number of existing video datasets for training networks designed for real event data.
Learning an Event Sequence Embedding for Dense Event-Based Deep Stereo
TLDR: A new module for event sequence embedding is introduced, which is the first learning-based stereo method for an event-based camera and the only method that produces dense results on the Multi Vehicle Stereo Event Camera Dataset (MVSEC).
Event-Based Vision Meets Deep Learning on Steering Prediction for Self-Driving Cars
TLDR: A deep neural network approach is presented that unlocks the potential of event cameras on a challenging motion-estimation task, prediction of a vehicle's steering angle, and outperforms state-of-the-art algorithms based on standard cameras.
EV-SegNet: Semantic Segmentation for Event-Based Cameras
  • Iñigo Alonso, A. C. Murillo
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2019
TLDR: This work builds a semantic segmentation CNN based on state-of-the-art techniques which takes event information as the only input, and proposes a novel representation for DVS data that outperforms previously used event representations for related tasks.
ESIM: an Open Event Camera Simulator
TLDR: This work presents the first event camera simulator that can generate a large amount of reliable event data, based on a theoretically sound, adaptive rendering scheme that only samples frames when necessary, and releases an open-source implementation of the simulator.
End-to-End Learning of Representations for Asynchronous Event-Based Data
TLDR: This work introduces a general framework to convert event streams into grid-based representations by means of strictly differentiable operations, and lays out a taxonomy that unifies the majority of extant event representations in the literature and identifies novel ones.
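As a rough illustration of the grid-based conversion described above, the sketch below accumulates events into a voxel grid with a bilinear temporal kernel built from differentiable PyTorch operations; it is one common instance of such representations, whereas the cited framework generalizes and learns the kernel end to end, and all names here are assumptions.

    import torch

    def events_to_voxel_grid(x, y, t, p, num_bins, height, width):
        """Accumulate events into a (num_bins, H, W) grid, splatting each event
        onto its two nearest temporal bins with bilinear weights."""
        # Normalize timestamps to the range [0, num_bins - 1].
        t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9) * (num_bins - 1)
        lower = torch.floor(t_norm)
        upper_w = t_norm - lower        # weight of the upper temporal bin
        lower_w = 1.0 - upper_w         # weight of the lower temporal bin

        polarity = p.float() * 2.0 - 1.0  # map {0, 1} -> {-1, +1}
        pixel = y.long() * width + x.long()

        grid = torch.zeros(num_bins * height * width)
        for bin_idx, weight in ((lower, lower_w),
                                (torch.clamp(lower + 1, max=num_bins - 1), upper_w)):
            flat = bin_idx.long() * (height * width) + pixel
            grid.index_add_(0, flat, polarity * weight)
        return grid.view(num_bins, height, width)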
HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification
TLDR: This paper introduces a novel event-based feature representation together with a new machine learning architecture that uses local memory units to efficiently leverage past temporal information and build a robust event-based representation, and releases the first large real-world event-based object classification dataset.
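For context, the time surface underlying this representation is often defined as an exponentially decayed map of the most recent event timestamps in a spatial neighborhood around an incoming event; the sketch below computes one such local surface and is a simplification of HATS, which additionally averages surfaces over spatial cells and a temporal window of past events (the decay constant and neighborhood size are assumed values).

    import numpy as np

    def local_time_surface(last_timestamp, x, y, t, radius=3, tau=50e3):
        """Exponentially decayed time surface around an event at (x, y, t).

        last_timestamp: (H, W) array holding the most recent event time per pixel
        (use -np.inf where no event has occurred yet). Assumes the event lies at
        least `radius` pixels away from the image border."""
        patch = last_timestamp[y - radius : y + radius + 1,
                               x - radius : x + radius + 1]
        return np.exp(-(t - patch) / tau)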
The Multivehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception
TLDR: This letter presents a large dataset with a synchronized stereo pair of event-based cameras, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments.