HyNNA: Improved Performance for Neuromorphic Vision Sensor Based Surveillance using Hybrid Neural Network Architecture

@inproceedings{Singla2020HyNNA,
  title={HyNNA: Improved Performance for Neuromorphic Vision Sensor Based Surveillance using Hybrid Neural Network Architecture},
  author={Deepak Singla and Soham Chatterjee and Lavanya Ramapantulu and Andr{\'e}s Ussa and Bharath Ramesh and Arindam Basu},
  booktitle={2020 IEEE International Symposium on Circuits and Systems (ISCAS)},
  year={2020}
}
Applications in the Internet of Video Things (IoVT) domain have very tight constraints with respect to power and area. While neuromorphic vision sensors (NVS) may offer advantages over traditional imagers in this domain, the existing NVS systems either do not meet the power constraints or have not demonstrated end-to-end system performance. To address this, we improve on a recently proposed hybrid event-frame approach by using morphological image processing algorithms for region proposal and… 
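The abstract names morphological image processing for region proposal, but the excerpt does not specify which operators are used. As an illustration only, the sketch below applies a standard 3x3 binary opening (erosion, then dilation) to an event-based binary image: isolated noise events vanish while dense object regions survive, and the connected components of the result could then serve as region proposals. All function names here are hypothetical.

```python
import numpy as np

def binary_erode(img: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: a pixel survives only if its entire
    3x3 neighborhood is set (zero-padded at the border)."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in range(3):
        for dx in range(3):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

def binary_dilate(img: np.ndarray) -> np.ndarray:
    """3x3 binary dilation: OR over the 3x3 neighborhood."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def open_ebbi(ebbi: np.ndarray) -> np.ndarray:
    """Morphological opening of an event-based binary image
    (assumed 0/1 uint8). Lone noise events are removed; dense
    regions are preserved as region-proposal candidates."""
    return binary_dilate(binary_erode(ebbi))

# A solid 3x3 block of events survives opening; a lone event does not.
ebbi = np.zeros((7, 7), dtype=np.uint8)
ebbi[2:5, 2:5] = 1   # dense object region
ebbi[0, 6] = 1       # isolated noise event
opened = open_ebbi(ebbi)
```

Erosion of the 3x3 block leaves only its center pixel, which dilation then grows back to the original block, while the single noise event is eroded away entirely.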
A Hybrid Neuromorphic Object Tracking and Classification Framework for Real-time Systems
This paper proposes a real-time hybrid neuromorphic framework for object tracking and classification using event-based cameras, which offer low power consumption (5-14 mW) and high dynamic range (120 dB); a mixed frame-and-event approach yields energy savings while maintaining high performance.
A 51.3-TOPS/W, 134.4-GOPS In-Memory Binary Image Filtering in 65-nm CMOS
In-memory filtering (IMF), a 6T-SRAM in-memory computing (IMC) based image denoiser for event-based binary image (EBBI) frames from an NVS, is presented; the denoised frames yield tracking and classification accuracy comparable to frames obtained by conventional median filtering.
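For reference, the conventional median filter that the IMF accuracy is compared against reduces, on a binary image, to a majority vote over each 3x3 window (the median of nine 0/1 values is 1 exactly when at least five are 1). A minimal NumPy sketch follows; the naming and zero-padding choice are illustrative assumptions, not the paper's hardware implementation.

```python
import numpy as np

def median_filter_binary(frame: np.ndarray) -> np.ndarray:
    """Denoise a 0/1 binary event frame with a 3x3 median filter.

    For binary pixels the median is a majority vote: the output
    pixel is 1 only if at least 5 of the 9 pixels in its window
    are 1 (the frame is zero-padded at the border).
    """
    h, w = frame.shape
    padded = np.pad(frame, 1)
    out = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]
            out[y, x] = 1 if window.sum() >= 5 else 0
    return out

# Isolated noise events are removed; dense event regions survive.
noisy = np.zeros((5, 5), dtype=np.uint8)
noisy[0:3, 0:3] = 1   # dense block of events
noisy[4, 4] = 1       # lone noise event, removed by the filter
clean = median_filter_binary(noisy)
```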
e-TLD: Event-Based Framework for Dynamic Object Tracking
This paper presents a long-term object tracking framework for a moving event camera under general tracking conditions, a first of its kind for these cameras.
EBBINNOT: A Hardware Efficient Hybrid Event-Frame Tracker for Stationary Neuromorphic Vision Sensors
To the best of our knowledge, this is the first report in which an NVS-based solution is directly compared to a simultaneously recorded frame-based method, and it shows tremendous promise.


A low-power end-to-end hybrid neuromorphic framework for surveillance applications
A low-power (5 W) end-to-end neuromorphic framework is proposed for object tracking and classification using event-based cameras, which possess desirable properties such as low power consumption (5-14 mW) and high dynamic range (120 dB).
EBBIOT: A Low-complexity Tracking Algorithm for Surveillance in IoVT using Stationary Neuromorphic Vision Sensors
A mixed approach is proposed in which event-based binary images (EBBI) admit memory-efficient noise filtering algorithms and exploit the motion-triggered nature of neuromorphic sensors to generate region proposals from event density counts, using >1000X less memory and compute than frame-based approaches.
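The event-density region proposal idea can be sketched as follows: accumulate per-tile event counts over one frame interval, then promote tiles whose count clears a threshold to candidate boxes. Only a small grid of counters is kept, rather than a full frame pipeline. The tile size and count threshold below are illustrative assumptions, not EBBIOT's published parameters.

```python
import numpy as np

def region_proposals(events, shape, cell=8, min_count=4):
    """Propose regions from event density counts.

    events : iterable of (x, y) event addresses accumulated over
             one frame interval.
    shape  : (height, width) of the sensor array.
    The frame is divided into cell x cell tiles; each tile whose
    event count reaches min_count becomes one region proposal
    (x0, y0, x1, y1). Memory is one counter per tile.
    """
    h, w = shape
    gh, gw = (h + cell - 1) // cell, (w + cell - 1) // cell
    counts = np.zeros((gh, gw), dtype=np.int32)
    for x, y in events:
        counts[y // cell, x // cell] += 1
    boxes = []
    for gy in range(gh):
        for gx in range(gw):
            if counts[gy, gx] >= min_count:
                boxes.append((gx * cell, gy * cell,
                              min((gx + 1) * cell, w),
                              min((gy + 1) * cell, h)))
    return boxes

# Five clustered events trigger a proposal; a lone event does not.
boxes = region_proposals(
    [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (12, 12)],
    shape=(16, 16))
```

A full tracker would merge adjacent qualifying tiles into one box; this sketch stops at per-tile proposals for brevity.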
Event-based Vision: A Survey
This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.
A Low Power, Fully Event-Based Gesture Recognition System
  • A. Amir, Brian Taba, +13 authors D. Modha
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real time at low power.
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases, including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification
This paper introduces a novel event-based feature representation together with a new machine learning architecture that uses local memory units to efficiently leverage past temporal information and build a robust event-based representation, and it releases the first large real-world event-based object classification dataset.
Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas
A postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices is proposed.
Estimation of Vehicle Speed Based on Asynchronous Data from a Silicon Retina Optical Sensor
This work presents an embedded optical sensory system for traffic monitoring and vehicle speed estimation based on a neuromorphic "silicon-retina" image sensor.
Live demonstration: Convolutional neural network driven by dynamic vision sensor playing RoShamBo
This demonstration presents a convolutional neural network (CNN) playing “RoShamBo” (“rock-paper-scissors”) against human opponents in real time. The network is driven by a dynamic and active-pixel vision sensor (DAVIS).
Comparative Study on Connected Component Labeling Algorithms for Embedded Video Processing Systems
A detailed analysis of the most popular connected component labeling (CCL) algorithms for binary images investigates their usability for processing streaming data and their suitability for implementation on Field-Programmable Gate Array (FPGA) devices.
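As a point of reference for the family of algorithms surveyed, a classic two-pass labeling with union-find (4-connectivity) can be sketched in a few lines. Real FPGA implementations stream row by row with hardware equivalence tables; this Python sketch only approximates that structure.

```python
import numpy as np

def label_components(img: np.ndarray) -> np.ndarray:
    """Two-pass connected component labeling, 4-connectivity.

    Pass 1 scans in raster order, assigning provisional labels
    and recording label equivalences in a union-find table;
    pass 2 resolves every pixel to its root label. The strict
    raster-order access pattern is what makes this family of
    algorithms amenable to streaming/FPGA implementations.
    """
    h, w = img.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                        # union-find table; 0 = background

    def find(a: int) -> int:
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            if not left and not up:     # new provisional label
                parent.append(len(parent))
                labels[y, x] = len(parent) - 1
            elif left and up:           # merge two equivalence classes
                ra, rb = find(left), find(up)
                labels[y, x] = min(ra, rb)
                if ra != rb:
                    parent[max(ra, rb)] = min(ra, rb)
            else:                       # copy the single neighbor label
                labels[y, x] = left or up

    for y in range(h):                  # pass 2: resolve to root labels
        for x in range(w):
            labels[y, x] = find(labels[y, x])
    return labels

# Two blobs: an L-shape and an isolated pixel get distinct labels.
img = np.array([[1, 0, 1],
                [1, 0, 0],
                [1, 1, 1]], dtype=np.uint8)
labels = label_components(img)
```

Resolved labels need not be consecutive; a final relabeling pass can compact them if dense label IDs are required.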