Corpus ID: 238198645

StereoSpike: Depth Learning with a Spiking Neural Network

@article{StereoSpike,
  title={StereoSpike: Depth Learning with a Spiking Neural Network},
  author={Ulysse Rançon and Javier Cuadrado-Anibarro and Benoit R. Cottereau and Timoth{\'e}e Masquelier}
}
Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles, or for object manipulation in robotics. Here, we propose to solve it using StereoSpike, an end-to-end neuromorphic approach combining two event-based cameras and a Spiking Neural Network (SNN) with a modified U-Net-like encoder-decoder architecture. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground truth, which was used to… 
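Spiking architectures like this are built from spiking units rather than continuous activations. A minimal numpy sketch of the leaky integrate-and-fire (LIF) dynamics such networks rely on (the time constant, threshold, and input values below are illustrative, not StereoSpike's actual parameters):

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One timestep of a leaky integrate-and-fire (LIF) layer.

    v: membrane potentials, x: input currents (same shape).
    Potentials leak toward zero, integrate the input, and reset
    to zero wherever a spike is emitted.
    """
    v = v + (x - v) / tau                 # leaky integration
    spikes = (v >= v_th).astype(v.dtype)  # threshold crossing
    v = v * (1.0 - spikes)                # hard reset after a spike
    return spikes, v

# Feed a constant supra-threshold input to one neuron for a few steps.
v = np.zeros(1)
out = []
for _ in range(6):
    s, v = lif_step(v, np.array([1.5]))
    out.append(int(s[0]))
print(out)  # → [0, 1, 0, 1, 0, 1]
```

The alternating output illustrates rate coding: stronger inputs cross the threshold more often, and that firing rate is the information downstream layers integrate.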

Figures and Tables from this paper

Event-based Video Reconstruction via Potential-assisted Spiking Neural Network

This paper proposes a novel event-based video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky Integrate-and-Fire (LIF) and Membrane Potential (MP) neurons, and finds that spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.

Spiking Neural Networks for Frame-based and Event-based Single Object Localization

This work proposes a spiking neural network approach for single-object localization, trained using surrogate gradient descent, for frame- and event-based sensors, and shows that the model achieves competitive or better accuracy, greater robustness against various corruptions, and lower energy consumption.
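Surrogate gradient descent trains such networks by keeping the hard threshold in the forward pass but substituting a smooth derivative in the backward pass. A minimal sketch, assuming a sigmoid-shaped surrogate (the steepness `k` and threshold are illustrative choices, not this paper's exact settings):

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    # Forward pass: the non-differentiable Heaviside spike function.
    return (v >= v_th).astype(float)

def spike_surrogate_grad(v, v_th=1.0, k=4.0):
    # Backward pass: replace the Heaviside derivative (a Dirac delta)
    # with the derivative of a steep sigmoid centered at the threshold.
    s = 1.0 / (1.0 + np.exp(-k * (v - v_th)))
    return k * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.0, 1.8])
print(spike_forward(v))         # hard spikes: [0. 0. 1. 1.]
print(spike_surrogate_grad(v))  # gradient peaks near the threshold
```

Gradients are largest for membrane potentials near the threshold, so learning signals flow mainly through neurons that were close to spiking.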

Uncertainty Guided Depth Fusion for Spike Camera

A novel Uncertainty-Guided Depth Fusion (UGDF) framework that fuses the predictions of monocular and stereo depth estimation networks for spike cameras, achieving state-of-the-art results on CitySpike20K and surpassing all monocular and stereo spike depth estimation baselines.
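The fusion idea can be illustrated with a generic per-pixel inverse-uncertainty weighting; this is a simplified sketch, not UGDF's actual formulation (the function name, epsilon, and example values are made up):

```python
import numpy as np

def fuse_depth(d_mono, d_stereo, u_mono, u_stereo, eps=1e-8):
    """Fuse two depth maps per pixel, weighting each estimate by the
    inverse of its predicted uncertainty."""
    w_m = 1.0 / (u_mono + eps)
    w_s = 1.0 / (u_stereo + eps)
    return (w_m * d_mono + w_s * d_stereo) / (w_m + w_s)

d_mono = np.array([[10.0, 20.0]])
d_stereo = np.array([[12.0, 26.0]])
u_mono = np.array([[1.0, 1.0]])    # equal confidence at pixel 0
u_stereo = np.array([[1.0, 3.0]])  # stereo 3x less certain at pixel 1
fused = fuse_depth(d_mono, d_stereo, u_mono, u_stereo)
print(fused)  # → [[11.  21.5]]  (average at pixel 0; pulled toward mono at pixel 1)
```

Where both branches are equally confident the result is a plain average; where one branch is uncertain, the fused depth leans toward the other.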

Spiking neural networks for nonlinear regression

A framework for regression using spiking neural networks is proposed, and it is shown that SNNs can accurately model materials that are stressed beyond reversibility, which is a challenging type of non-linearity.

MSS-DepthNet: Depth Prediction with Multi-Step Spiking Neural Network

This work proposes a spiking neural network architecture for depth prediction that combines a novel residual block design with multi-dimensional attention modules; a novel event-stream representation method is also proposed specifically for SNNs.


An optimization-based ego-motion estimation framework that exploits the event-based optical flow outputs of a trained SNN model, and a hybrid RNN-ViT architecture for optical flow estimation that uses a ViT to learn global context, yielding better results than the state of the art.



Instantaneous Stereo Depth Estimation of Real-World Stimuli with a Neuromorphic Stereo-Vision Setup

This work uses the Dynamic Vision Sensor 3D Human Pose Dataset (DHP19) to validate a brain-inspired event-based stereo-matching architecture implemented on a mixed-signal neuromorphic processor with real-world data and shows that this SNN architecture is able to provide a coarse estimate of the input disparity instantaneously.

SpikeMS: Deep Spiking Neural Network for Motion Segmentation

This paper proposes SpikeMS, the first deep encoder-decoder SNN architecture for the real-world large-scale problem of motion segmentation using the event-based DVS camera as input and introduces a novel spatio-temporal loss formulation that includes both spike counts and classification labels in conjunction with the use of new techniques for SNN backpropagation.
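A loss that supervises both the labels and the network's firing activity can be sketched generically; the weighting, target rate, and function name below are illustrative, not SpikeMS's actual spatio-temporal formulation:

```python
import numpy as np

def spatio_temporal_loss(spikes, label, target_rate=0.1, alpha=0.5):
    """Toy loss mixing a classification term on accumulated output
    spike counts with a regularizer on the mean firing rate.

    spikes: (T, C) binary output spikes; label: target class index.
    """
    counts = spikes.sum(axis=0)            # per-class spike counts
    p = np.exp(counts - counts.max())
    p /= p.sum()                           # softmax over counts
    ce = -np.log(p[label] + 1e-12)         # classification term
    rate = spikes.mean()                   # overall firing rate
    reg = (rate - target_rate) ** 2        # spike-count term
    return ce + alpha * reg

spikes = np.zeros((10, 3))
spikes[::2, 0] = 1.0                       # class 0 fires the most
print(spatio_temporal_loss(spikes, label=0)
      < spatio_temporal_loss(spikes, label=1))  # → True
```

The spike-count term keeps activity sparse, while the count-based cross-entropy rewards the correct class for firing most over the window.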

Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks

Spike-FlowNet is presented, a deep hybrid neural network architecture integrating SNNs and ANNs for efficiently estimating optical flow from sparse event camera outputs without sacrificing the performance.

Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks

This article focuses on the self-supervised learning problem of optical flow estimation from event-based camera inputs, and investigates the changes that are necessary to the state-of-the-art ANN training pipeline in order to successfully tackle it with SNNs.

A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision

A low-power, compact, and computationally inexpensive setup is presented that estimates depth in a 3D scene in real time at high rates, and that can be directly implemented with massively parallel, compact, low-latency, and low-power neuromorphic engineering devices.

Learning Monocular Dense Depth from Events

This work proposes a recurrent architecture to solve the problem of dense depth prediction from a monocular event camera and shows significant improvement over standard feed-forward methods.

Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection

This study investigates the performance degradation of SNNs in a more challenging regression problem (i.e., object detection), and introduces two novel methods: channel-wise normalization and signed neuron with imbalanced threshold, both of which provide fast and accurate information transmission for deep SNNs.

Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion

A novel framework for unsupervised learning for event cameras that learns motion information from only the event stream in the form of a discretized volume that maintains the temporal distribution of the events is proposed.
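The discretized event volume can be sketched as a time-binned voxel grid in which each event's polarity is bilinearly split between the two nearest temporal bins, preserving the temporal distribution of the events. A simplified numpy version assuming integer pixel coordinates (names and shapes are illustrative):

```python
import numpy as np

def events_to_voxel(xs, ys, ts, ps, bins, H, W):
    """Discretize an event stream into a (bins, H, W) volume.

    xs, ys: integer pixel coordinates; ts: timestamps; ps: polarities.
    Each event is splatted onto its two nearest temporal bins with
    bilinear weights, so sub-bin timing is not thrown away.
    """
    vol = np.zeros((bins, H, W))
    # Normalize timestamps into [0, bins - 1].
    t = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9) * (bins - 1)
    t0 = np.floor(t).astype(int)
    for b, x, y, p, f in zip(t0, xs, ys, ps, t - t0):
        vol[b, y, x] += p * (1.0 - f)      # weight to the lower bin
        if b + 1 < bins:
            vol[b + 1, y, x] += p * f      # remainder to the upper bin
    return vol

xs = np.array([0, 1, 1]); ys = np.array([0, 0, 1])
ts = np.array([0.0, 0.5, 1.0]); ps = np.array([1, -1, 1])
vol = events_to_voxel(xs, ys, ts, ps, bins=3, H=2, W=2)
print(vol.shape)  # → (3, 2, 2)
```

The resulting tensor can be fed to a conventional convolutional network while still encoding when, where, and with which polarity each event occurred.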

The Multivehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception

This letter presents a large dataset with a synchronized stereo pair of event-based cameras, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments.

Event-Based Angular Velocity Regression with Spiking Networks

This work proposes, for the first time, a temporal regression problem of numerical values given events from an event camera, and investigates the prediction of the 3-DOF angular velocity of a rotating event camera with an SNN.
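One common way to obtain a continuous regression output from a spiking network is a non-firing leaky-integrator readout whose membrane potential is the prediction. A generic sketch (the weights, rates, and time constant are arbitrary illustrations, not this paper's architecture):

```python
import numpy as np

def leaky_readout(spike_trains, w, tau=5.0):
    """Decode a continuous value from spike trains over time.

    spike_trains: (T, N) binary spikes; w: (N,) readout weights.
    The readout neuron never fires: its membrane potential itself is
    the output, turning the spiking network into a temporal regressor.
    """
    v = 0.0
    trace = []
    for s in spike_trains:
        v = v + (w @ s - v) / tau   # leaky integration of weighted spikes
        trace.append(v)
    return np.array(trace)

rng = np.random.default_rng(0)
spikes = (rng.random((20, 4)) < 0.5).astype(float)  # toy spike trains
w = np.array([0.5, -0.2, 0.3, 0.1])
pred = leaky_readout(spikes, w)
print(pred.shape)  # one prediction per timestep: (20,)
```

Because the potential integrates recent spikes with an exponential memory, the prediction varies smoothly in time, which suits targets like angular velocity.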