Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks

@inproceedings{Lee2020SpikeFlowNetEO,
  title={Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks},
  author={Chankyu Lee and Adarsh Kosta and Alex Zihao Zhu and Kenneth Chaney and Kostas Daniilidis and Kaushik Roy},
  booktitle={ECCV},
  year={2020}
}
Event-based cameras display great potential for a variety of tasks such as high-speed motion detection and navigation in low-light environments where conventional frame-based cameras suffer critically. This is attributed to their high temporal resolution, high dynamic range, and low power consumption. However, neither conventional computer vision methods nor deep Analog Neural Networks (ANNs) are well suited to the asynchronous and discrete nature of event camera outputs. Spiking…
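The asynchronous, discrete outputs mentioned above are streams of per-pixel (x, y, t, polarity) events rather than frames. A minimal sketch of one common way to bridge that gap for ANN-style networks, accumulating events into a two-channel count image (the array layout and event format here are illustrative assumptions, not the paper's exact encoding):

```python
import numpy as np

def events_to_count_image(events, height, width):
    """Accumulate DVS events into a 2-channel count image.

    `events` is an (N, 4) array of (x, y, t, p) with polarity p in {-1, +1}.
    Channel 0 counts positive events, channel 1 counts negative events.
    """
    img = np.zeros((2, height, width), dtype=np.float32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    ps = events[:, 3]
    # np.add.at performs unbuffered accumulation, so repeated pixels add up
    np.add.at(img[0], (ys[ps > 0], xs[ps > 0]), 1.0)
    np.add.at(img[1], (ys[ps < 0], xs[ps < 0]), 1.0)
    return img

# Example: three events on a 4x4 sensor
ev = np.array([[0, 0, 0.01, +1],
               [1, 2, 0.02, -1],
               [0, 0, 0.03, +1]])
frame = events_to_count_image(ev, 4, 4)
# frame[0, 0, 0] == 2.0 (two positive events at the same pixel)
```

Count images discard precise timing, which is exactly the loss that the temporal-volume and spiking approaches surveyed below try to avoid.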

Low-Latency and Scene-Robust Optical Flow Stream and Angular Velocity Estimation

This paper proposes an optical flow estimation algorithm with low latency and robustness to various scenes, exploiting the advantages of the event camera by enhancing an existing optical flow algorithm, and estimates angular velocity with low latency using the proposed optical flow stream.

3D-FlowNet: Event-based optical flow estimation with 3D representation

A 3D encoding representation for the event data that better preserves the temporal distribution of the events than 2D encodings is presented, and a novel network architecture, 3D-FlowNet, is proposed that outperforms state-of-the-art approaches with fewer training epochs.

SCFlow: Optical Flow Estimation for Spiking Camera

SCFlow, a novel deep learning pipeline for optical flow estimation for the spiking camera, is presented; it can predict optical flow from spike streams in various high-speed scenes and demonstrates superiority over existing methods on the datasets.

Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks

This article focuses on the self-supervised learning problem of optical flow estimation from event-based camera inputs, and investigates the changes that are necessary to the state-of-the-art ANN training pipeline in order to successfully tackle it with SNNs.

Adaptive-SpikeNet: Event-based Optical Flow Estimation using Spiking Neural Networks with Learnable Neuronal Dynamics

This work proposes an adaptive fully-spiking framework with learnable neuronal dynamics to alleviate the spike vanishing problem, and utilizes surrogate gradient-based backpropagation through time (BPTT) to train the deep SNNs from scratch.
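The summary above refers to surrogate-gradient BPTT: the spike function's true derivative is zero almost everywhere, so training replaces it with a smooth pseudo-derivative. A minimal sketch of leaky integrate-and-fire (LIF) dynamics and one common surrogate (the fast-sigmoid derivative); the constants `tau`, `v_th`, and `alpha` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lif_forward(inputs, tau=0.9, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron over T timesteps.

    inputs: (T,) array of input currents. The membrane potential leaks
    by factor `tau` each step, integrates the input, and emits a spike
    (with a hard reset to 0) whenever it crosses the threshold `v_th`.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = tau * v + x
        s = 1.0 if v >= v_th else 0.0
        spikes.append(s)
        v = v * (1.0 - s)  # hard reset after a spike
    return np.array(spikes)

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Surrogate derivative of the spike w.r.t. membrane potential.

    The Heaviside spike has zero gradient almost everywhere, so BPTT
    substitutes a smooth pseudo-derivative peaked at the threshold:
    alpha / (1 + alpha * |v - v_th|)^2.
    """
    return alpha / (1.0 + alpha * np.abs(v - v_th)) ** 2

spikes = lif_forward(np.array([0.6, 0.6, 0.6, 0.0]))
# potentials: 0.6, 1.14 (spike, reset), 0.6, 0.54 -> spikes [0, 1, 0, 0]
```

The "spike vanishing" problem arises because, deeper in the network, potentials sit far from threshold where both spikes and surrogate gradients shrink; making the neuronal constants learnable, as in the work above, is one way to counteract that.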

Secrets of Event-Based Optical Flow

A principled method to extend the Contrast Maximization framework to estimate optical flow from events alone is presented; it ranks first among unsupervised methods on the MVSEC benchmark and is competitive on the DSEC benchmark.

StereoSpike: Depth Learning with a Spiking Neural Network

This work proposes a novel readout paradigm to obtain a dense analog prediction (the depth of each pixel) from the spikes of the decoder, and demonstrates that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy.

SpikeMS: Deep Spiking Neural Network for Motion Segmentation

This paper proposes SpikeMS, the first deep encoder-decoder SNN architecture for the real-world large-scale problem of motion segmentation using the event-based DVS camera as input and introduces a novel spatio-temporal loss formulation that includes both spike counts and classification labels in conjunction with the use of new techniques for SNN backpropagation.

Data-Driven Technology in Event-Based Vision

The great prospects of event-based data-driven technology are revealed and a comprehensive overview of this field is presented, aiming at a more efficient and bio-inspired visual system to extract visual features from the external environment.

Fusion-FlowNet: Energy-Efficient Optical Flow Estimation using Sensor Fusion and Deep Fused Spiking-Analog Network Architectures

Fusion-FlowNet is a sensor fusion framework for energy-efficient optical flow estimation that generalizes well across distinct environments (rapid motion and challenging lighting conditions) and demonstrates state-of-the-art optical flow prediction on the Multi-Vehicle Stereo Event Camera dataset.

References

Showing 1-10 of 38 references.

Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion

A novel framework for unsupervised learning for event cameras that learns motion information from only the event stream in the form of a discretized volume that maintains the temporal distribution of the events is proposed.
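The "discretized volume that maintains the temporal distribution" described above spreads each event across neighboring time bins with bilinear interpolation, so sub-bin timing is not lost. A minimal sketch of that encoding (timestamp normalization and array shapes here are assumptions for illustration):

```python
import numpy as np

def event_volume(events, num_bins, height, width):
    """Build a discretized event volume with bilinear interpolation
    along the time axis, preserving the temporal distribution.

    events: (N, 4) array of (x, y, t, p) with polarity p in {-1, +1}.
    Returns a (num_bins, height, width) volume of signed event mass.
    """
    vol = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    # normalize timestamps into [0, num_bins - 1]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3]
    t0 = np.floor(t_norm).astype(int)
    for dt in (0, 1):
        b = t0 + dt
        # bilinear weight: each event splits between its two nearest bins
        w = np.clip(1.0 - np.abs(t_norm - b), 0.0, 1.0)
        valid = b < num_bins
        np.add.at(vol, (b[valid], y[valid], x[valid]), p[valid] * w[valid])
    return vol

ev = np.array([[1, 1, 0.0, +1.0],
               [2, 2, 1.0, -1.0]])
vol = event_volume(ev, num_bins=2, height=4, width=4)
# vol[0, 1, 1] == 1.0 and vol[1, 2, 2] == -1.0
```

An event falling between two bins contributes fractionally to both, which is what lets the network recover fine-grained motion timing from the volume.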

EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras

Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for…

The Multivehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception

This letter presents a large dataset with a synchronized stereo pair of event-based cameras, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments.

A million spiking-neuron integrated circuit with a scalable communication network and interface

Inspired by the brain’s structure, an efficient, scalable, and flexible non–von Neumann architecture is developed that leverages contemporary silicon technology and is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification.

Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures

This work proposes an approximate derivative method that accounts for the leaky behavior of LIF neurons, enabling deep convolutional SNNs to be trained directly (with input spike events) using spike-based backpropagation, and analyzes sparse event-based computations to demonstrate the efficacy of the proposed training method for inference in the spiking domain.

Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization

Novel algorithmic techniques are proposed that modify the SNN configuration with backward residual connections, stochastic softmax, and hybrid artificial-and-spiking neuronal activations to improve learning, yielding competitive accuracy along with large efficiency gains over artificial counterparts.

Live Demonstration: Unsupervised Event-Based Learning of Optical Flow, Depth and Egomotion

A CNN is proposed which takes as input events from a DAVIS-346b event camera, represented as a discretized event volume, and predicts optical flow for each pixel in the image, owing to the generalization abilities of the network.

Theoretical Neuroscience


Event-based Plane-fitting Optical Flow for Dynamic Vision Sensors in FPGA

Modification and implementation of a well known "plane-fitting" approach to event-based optical flow estimation for Dynamic Vision Sensors is presented, and the FPGA implementation is shown to perform similarly to the previously published full precision software implementation.
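Plane fitting treats a locally moving edge as a surface in (x, y, t): events from a constant-velocity edge lie on a plane t = a·x + b·y + c, and the plane's slope encodes the inverse of the normal-flow speed. A minimal least-squares sketch of the idea (the event format and the degenerate-case threshold are assumptions, and real implementations fit in small spatio-temporal neighborhoods with outlier rejection):

```python
import numpy as np

def plane_fit_flow(events):
    """Estimate local normal flow by fitting a plane t = a*x + b*y + c
    to a neighborhood of events.

    events: (N, 3) array of (x, y, t). Returns (vx, vy): the gradient
    direction of the fitted surface scaled by the inverse of its
    squared slope magnitude, i.e. the normal-flow velocity.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.stack([x, y, np.ones_like(x)], axis=1)
    (a, b, _), *_ = np.linalg.lstsq(A, t, rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:
        return 0.0, 0.0  # degenerate plane: no measurable motion
    return a / g2, b / g2

# Events from an edge moving at 2 px/s in +x, so t = x / 2
ev = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.5],
               [2.0, 1.0, 1.0],
               [3.0, 1.0, 1.5]])
vx, vy = plane_fit_flow(ev)
# vx is approximately 2.0, vy approximately 0.0
```

Because the fit is a small fixed-size least-squares problem per neighborhood, the method maps naturally onto an FPGA, which is what the work above exploits.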

Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception

This paper presents the first hierarchical spiking architecture in which motion (direction and speed) selectivity emerges in an unsupervised fashion from the raw stimuli generated with an event-based camera.