Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision

@inproceedings{Kugele2021HybridSE,
  title={Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision},
  author={Alexander Kugele and Thomas Pfeil and Michael Pfeiffer and Elisabetta Chicca},
  booktitle={German Conference on Pattern Recognition},
  year={2021}
}
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than full image frames and yield sparse, energy-efficient encodings of scenes, in addition to low latency, high dynamic range, and lack of motion blur. Recent progress in object recognition from event-based sensors has come from conversions of successful deep neural network architectures, which are trained with backpropagation. However, using these approaches for event streams requires a… 
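For intuition, the raw output of such a sensor is simply a stream of (x, y, timestamp, polarity) tuples. The minimal Python sketch below (an illustration, not code from the paper) accumulates such events into a two-channel polarity histogram of the kind many frame-based pipelines consume; sensor size and event values are made up for the example.

# Minimal sketch (not from the paper): events as (x, y, t, polarity) tuples,
# accumulated into an H x W x 2 count frame, one channel per polarity.
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate events into an H x W x 2 histogram.

    events: iterable of (x, y, t, p) with p in {0, 1} (OFF/ON polarity).
    """
    frame = np.zeros((height, width, 2), dtype=np.float32)
    for x, y, t, p in events:
        frame[y, x, p] += 1.0
    return frame

# Example: three events on a 4x4 sensor.
evts = [(0, 1, 0.001, 1), (0, 1, 0.002, 1), (3, 2, 0.003, 0)]
print(events_to_frame(evts, 4, 4)[1, 0])  # -> [0. 2.]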

Spike-Event Object Detection for Neuromorphic Vision

A deep SNN method is proposed, obtained by converting successful convolutional neural networks but trained on event images; it achieves higher accuracy than existing SNN methods, and better energy efficiency and lower energy consumption than existing CNN methods.
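As background on the conversion idea mentioned above, here is a very small Python sketch (an illustration under assumptions, not the authors' method) of the principle such CNN-to-SNN conversions rely on: an integrate-and-fire unit's firing rate approximates a ReLU activation. The threshold and number of steps are arbitrary choices.

# Illustrative only: the firing rate of an integrate-and-fire unit
# roughly tracks relu(input) for inputs in [0, 1].
def if_neuron_rate(input_current, steps=100, threshold=1.0):
    """Simulate an integrate-and-fire unit and return its firing rate."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current
        if v >= threshold:
            spikes += 1
            v -= threshold  # soft reset preserves the rate code
    return spikes / steps

for a in (0.0, 0.3, 0.7):
    print(a, if_neuron_rate(a))  # rate approximates the ReLU activation a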

Object Detection with Spiking Neural Networks on Automotive Event Data

This work takes advantage of the latest advances in spike-based backpropagation (surrogate gradient learning, the parametric LIF neuron, the SpikingJelly framework) and of the new voxel cube event encoding to train four different SNNs based on popular deep learning networks (SqueezeNet, VGG, MobileNet, and DenseNet), increasing the size and complexity of the SNNs usually considered in the literature.
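To make the surrogate-gradient and parametric-LIF ingredients concrete, below is a minimal PyTorch sketch in the spirit of what frameworks such as SpikingJelly provide. It is an illustration only; the sigmoid surrogate, the learnable leak parameterization, and all parameter names are assumptions rather than the paper's exact formulation.

# Illustrative parametric LIF neuron trained with a surrogate gradient.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()  # Heaviside step in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # Derivative of a sigmoid as a smooth surrogate for the step function.
        sg = torch.sigmoid(4.0 * v_minus_thresh)
        return grad_output * 4.0 * sg * (1.0 - sg)

class ParametricLIF(nn.Module):
    def __init__(self, v_threshold=1.0):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(0.0))  # learnable leak, 1/tau = sigmoid(w)
        self.v_threshold = v_threshold

    def forward(self, x_seq):
        # x_seq: (T, batch, features) input current over T time steps.
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for x in x_seq:
            v = v + torch.sigmoid(self.w) * (x - v)          # leaky integration
            s = SurrogateSpike.apply(v - self.v_threshold)   # spike above threshold
            v = v * (1.0 - s)                                # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)

out = ParametricLIF()(torch.randn(10, 2, 8))  # 10 time steps, batch 2, 8 neurons
print(out.shape)  # torch.Size([10, 2, 8])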

Neuromorphic Computing for Interactive Robotics: A Systematic Review

A systematic review of neuromorphic computing applications for socially interactive robotics is presented, and potential research topics for fully integrated, socially interactive neuromorphic robots are identified.

References


Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks

Spike-FlowNet is presented, a deep hybrid neural network architecture integrating SNNs and ANNs to efficiently estimate optical flow from sparse event camera outputs without sacrificing performance.
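A toy sketch of the hybrid idea, stated as an assumption rather than Spike-FlowNet's actual architecture: a spiking convolutional front end integrates event frames over time, and its accumulated spikes feed a conventional ANN head.

# Illustrative hybrid SNN-ANN module; leak, threshold, and layer sizes are arbitrary.
import torch
import torch.nn as nn

class TinyHybrid(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(2, 16, 3, padding=1)  # shared weights for every time step
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(16, n_classes))  # standard ANN back end

    def forward(self, event_frames):
        # event_frames: (T, batch, 2, H, W) polarity histograms per time step.
        v = torch.zeros_like(self.conv(event_frames[0]))
        acc = torch.zeros_like(v)
        for frame in event_frames:
            v = 0.9 * v + self.conv(frame)   # leaky integration of the convolved input
            spikes = (v > 1.0).float()
            v = v * (1.0 - spikes)           # reset membrane where a spike fired
            acc = acc + spikes               # accumulated spikes go to the ANN head
        return self.head(acc)

logits = TinyHybrid()(torch.rand(5, 1, 2, 32, 32))  # 5 time steps, batch 1, 32x32 input
print(logits.shape)  # torch.Size([1, 10])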

End-to-End Learning of Representations for Asynchronous Event-Based Data

This work introduces a general framework to convert event streams into grid-based representations by means of strictly differentiable operations and lays out a taxonomy that unifies the majority of extant event representations in the literature and identifies novel ones.
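One common member of this family of grid-based representations is the voxel grid with linear interpolation along time. The NumPy sketch below illustrates that fixed variant; it is an approximation of the idea and not the paper's learned, end-to-end differentiable formulation.

# Illustrative voxel-grid encoding: each event's polarity is split between
# its two nearest temporal bins.
import numpy as np

def events_to_voxel_grid(x, y, t, p, bins, height, width):
    """Distribute event polarities over a (bins, H, W) grid."""
    grid = np.zeros((bins, height, width), dtype=np.float32)
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (bins - 1)
    pol = 2.0 * p - 1.0                      # map {0, 1} polarity to {-1, +1}
    left = np.floor(t_norm).astype(int)
    frac = t_norm - left
    for b, w_b in ((left, 1.0 - frac), (np.minimum(left + 1, bins - 1), frac)):
        np.add.at(grid, (b, y, x), pol * w_b)
    return grid

x = np.array([1, 2, 3]); y = np.array([0, 0, 1])
t = np.array([0.0, 0.5, 1.0]); p = np.array([1, 0, 1])
print(events_to_voxel_grid(x, y, t, p, bins=3, height=2, width=4).shape)  # (3, 2, 4)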

HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification

This paper introduces a novel event-based feature representation together with a new machine learning architecture that uses local memory units to efficiently leverage past temporal information and build a robust event-based representation, and releases the first large real-world event-based object classification dataset.
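The underlying time-surface idea can be sketched in a few lines: each pixel stores the time of its most recent event, and the surface is an exponential decay of the elapsed time. The snippet below shows only this basic surface; the full HATS descriptor additionally averages surfaces over local cells and event histories, which is omitted here, and the time constant is an arbitrary choice.

# Illustrative time surface: exp(-(t_ref - t_last)/tau), 0 where no event occurred.
import numpy as np

def time_surface(events, height, width, t_ref, tau=50e-3):
    """Return an H x W time surface at reference time t_ref."""
    t_last = np.full((height, width), -np.inf)
    for x, y, t, p in events:
        if t <= t_ref:
            t_last[y, x] = max(t_last[y, x], t)   # keep the most recent event time
    surface = np.exp(-(t_ref - t_last) / tau)
    surface[np.isinf(t_last)] = 0.0               # pixels that never saw an event
    return surface

evts = [(0, 0, 0.01, 1), (1, 0, 0.04, 0)]
print(time_surface(evts, 2, 2, t_ref=0.05).round(3))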

Learning to Detect Objects with a 1 Megapixel Event Camera

This work publicly releases the first high-resolution large-scale dataset for object detection and introduces a novel recurrent architecture for event-based detection and a temporal consistency loss for better-behaved training.

Events-To-Video: Bringing Modern Computer Vision to Event Cameras

This work proposes a novel recurrent neural network that reconstructs videos from a stream of events and is trained on a large amount of simulated event data; it surpasses state-of-the-art reconstruction methods by a large margin and opens the door to bringing the outstanding properties of event cameras to an entirely new range of tasks.

Event-Based Vision: A Survey

This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.

Unsupervised Learning of Spatio-Temporal Receptive Fields from an Event-Based Vision Sensor

This work presents a spiking neural network that learns spatio-temporal receptive fields in an unsupervised way from the output of a neuromorphic event-based vision sensor and develops biologically plausible spatio-temporal receptive fields when trained on real-world input.

Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks

A novel method is presented to obtain highly accurate SNNs for sequence processing by modifying ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation.

A Low Power, Fully Event-Based Gesture Recognition System

A. Amir, Brian Taba, D. Modha, et al. · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real time at low power from events streamed live by a Dynamic Vision Sensor (DVS).

HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition

The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood and it is demonstrated that this concept can robustly be used at all stages of an event-based hierarchical model.
...