Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets

@article{PrezCarrasco2013MappingFF,
  title={Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets},
  author={Jos{\'e} Antonio P{\'e}rez-Carrasco and Bo Zhao and Carmen Serrano and Bego{\~n}a Acha and Teresa Serrano-Gotarredona and Shoushun Chen and Bernab{\'e} Linares-Barranco},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2013},
  volume={35},
  pages={2706--2719}
}
Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called… 
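To make the contrast with frame-based video concrete, the sketch below simulates a single DVS-style pixel that emits a signed event whenever its log-intensity changes by more than a fixed contrast threshold, which is the general operating principle the abstract describes. It is an illustrative Python model, not code from the paper; the Event type, the pixel_events function, and the 0.15 threshold are assumptions chosen for the example.

```python
from dataclasses import dataclass
import math

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def pixel_events(x, y, samples, threshold=0.15):
    """Emit an event whenever the log-intensity at (x, y) has changed
    by more than `threshold` since the last emitted event (DVS-style)."""
    events = []
    ref = math.log(samples[0][1])       # reference log-intensity
    for t, intensity in samples[1:]:
        delta = math.log(intensity) - ref
        while abs(delta) >= threshold:  # a big change yields several events
            pol = 1 if delta > 0 else -1
            events.append(Event(x, y, t, pol))
            ref += pol * threshold      # move the reference one step
            delta = math.log(intensity) - ref
    return events

# A brightening pixel produces a sparse stream of positive events:
samples = [(0.000, 100), (0.001, 120), (0.002, 180), (0.003, 180)]
print(pixel_events(3, 7, samples))
```

The output is a sparse, asynchronous stream of (x, y, t, polarity) tuples rather than a dense frame; a static scene produces no events at all, which is what enables the frame-free, low-data-rate processing studied in the paper.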
Citations

Retinomorphic Event-Based Vision Sensors: Bioinspired Cameras With Spiking Output
TLDR
It is suggested that bioinspired vision systems have the potential to outperform conventional, frame-based vision systems in many application fields and to establish new benchmarks in terms of redundancy suppression and data compression, dynamic range, temporal resolution, and power efficiency.
End-to-End Learning of Representations for Asynchronous Event-Based Data
TLDR
This work introduces a general framework to convert event streams into grid-based representations by means of strictly differentiable operations and lays out a taxonomy that unifies the majority of extant event representations in the literature and identifies novel ones.
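As a concrete reference point for that taxonomy, the sketch below builds one of the simplest grid-based representations it covers: a polarity-weighted voxel grid in which each event is bilinearly split between its two nearest temporal bins. This is an illustrative NumPy version of the general idea, not the paper's framework; events_to_voxel_grid and its argument layout are assumptions made for the example.

```python
import numpy as np

def events_to_voxel_grid(events, height, width, num_bins):
    """Accumulate polarity-weighted events into a (num_bins, H, W) grid,
    bilinearly splitting each event between its two nearest time bins.
    `events` holds one row (x, y, t, polarity) per event."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    lower = np.floor(t_norm).astype(int)
    frac = t_norm - lower
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3]
    np.add.at(grid, (lower, y, x), p * (1 - frac))       # nearer time bin
    upper = np.clip(lower + 1, 0, num_bins - 1)
    np.add.at(grid, (upper, y, x), p * frac)             # farther time bin
    return grid

# Four events on a 4 x 4 sensor, split into 2 temporal bins:
ev = np.array([[0, 0, 0.00, 1], [1, 1, 0.01, -1],
               [2, 2, 0.02, 1], [3, 3, 0.03, 1]], dtype=np.float32)
print(events_to_voxel_grid(ev, 4, 4, 2).shape)  # (2, 4, 4)
```

The bilinear temporal split makes the grid an almost-everywhere differentiable function of the event timestamps, which is the property such end-to-end learning frameworks exploit before feeding the grid to a standard ConvNet.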
A Spike Learning System for Event-driven Object Recognition
TLDR
A spiking learning system is proposed that uses a spiking neural network (SNN) with a novel temporal coding for accurate and fast object recognition, achieving state-of-the-art recognition accuracy with remarkable time efficiency.
Event-driven sensing and processing for high-speed robotic vision
TLDR
An event-driven sensor chip (called Dynamic Vision Sensor, or DVS) is used together with event-driven convolution module arrays implemented on high-end FPGAs to create a new vision paradigm in which sensors and processors use visual information not represented by sequences of frames.
Event-Based Vision: A Survey
TLDR
This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.
A Motion-Based Feature for Event-Based Pattern Recognition
TLDR
An event-based, luminance-free feature is introduced, computed from the output of sensors that asynchronously generate “spiking” events encoding relative changes in pixel illumination at high temporal resolution.
Events-To-Video: Bringing Modern Computer Vision to Event Cameras
TLDR
This work proposes a novel recurrent neural network to reconstruct videos from a stream of events and trains it on a large amount of simulated event data; it surpasses state-of-the-art reconstruction methods by a large margin and opens the door to bringing the outstanding properties of event cameras to an entirely new range of tasks.
Asynchronous "Events" are Better For Motion Estimation
TLDR
This work presents the first asynchronous neural approach to processing the event stream of an event-based camera: a novel deep neural network asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames.
Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas
TLDR
A postprocessing framework based on spiking neural networks is proposed that can process the events received from the DVSs in real time and provides an architecture for future implementation in neuromorphic hardware devices.

References

Showing 1-10 of 73 references
An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors
TLDR
An Event-Driven Convolution Module for computing 2D convolutions on such event streams is presented; it has multi-kernel capability, meaning it selects the convolution kernel depending on the origin of each event.
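The sketch below mimics in software the general operating principle of such event-driven convolution hardware: each incoming address-event adds the kernel, centered at the event's address, to an array of integrate-and-fire neuron states, and any neuron crossing threshold emits an output event and resets. It is a simplified illustration under stated assumptions (a single kernel, no leak or forgetting mechanism, and a hypothetical function name), not the module's actual architecture.

```python
import numpy as np

def event_driven_convolution(events, kernel, shape, threshold=1.0):
    """For each input event (x, y, t), add `kernel` centered at (x, y)
    to a map of neuron states; neurons whose state reaches `threshold`
    fire an output event and are reset to zero."""
    state = np.zeros(shape, dtype=np.float32)
    kh, kw = kernel.shape
    out = []
    for x, y, t in events:
        # Clip the kernel footprint to the array borders.
        y0, y1 = max(y - kh // 2, 0), min(y + kh // 2 + 1, shape[0])
        x0, x1 = max(x - kw // 2, 0), min(x + kw // 2 + 1, shape[1])
        ky0, kx0 = y0 - (y - kh // 2), x0 - (x - kw // 2)
        state[y0:y1, x0:x1] += kernel[ky0:ky0 + (y1 - y0),
                                      kx0:kx0 + (x1 - x0)]
        for fy, fx in np.argwhere(state >= threshold):
            out.append((fx, fy, t))  # output event keeps the input timestamp
            state[fy, fx] = 0.0      # reset the neuron after it fires
    return out

# A 3 x 3 averaging kernel over a burst of events at one address:
k = np.full((3, 3), 0.4, dtype=np.float32)
evs = [(5, 5, 0.001), (5, 5, 0.002), (5, 5, 0.003)]
print(event_driven_convolution(evs, k, (16, 16)))
```

Output events appear only after enough input charge accumulates at a neuron; once a leak term is added, this yields the coincidence-processing behavior described in the title paper.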
A 32 × 32 Pixel Convolution Processor Chip for Address Event Vision Sensors With 155 ns Event Latency and 20 Meps Throughput
TLDR
This paper presents a 32 × 32 pixel 2-D convolution event processor whose kernel can have arbitrary shape and size up to 32 × 32 and can be configured to discriminate between two simulated propeller-like shapes rotating simultaneously in the field of view at a speed as high as 9400 rps.
Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing
TLDR
A comparison study of frame-constrained fix-pixel-value and frame-free spiking-dynamic-pixel ConvNets, two neuro-inspired solutions for real-time visual processing.
A 128 × 128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor
TLDR
This silicon retina provides an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements by providing high pixel bandwidth, wide dynamic range, and precisely timed sparse digital output.
A 128 × 128 1.5% Contrast Sensitivity 0.9% FPN 3 µs Latency 4 mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Preamplifiers
TLDR
A novel pixel photosensing and transimpedance preamplification stage makes it possible to improve contrast sensitivity and power by one order of magnitude, and to reduce the best reported FPN (fixed-pattern noise) by a factor of 2, while maintaining the shortest reported latency and good dynamic range.
A 3.6 μs Latency Asynchronous Frame-Free Event-Driven Dynamic-Vision-Sensor
TLDR
The ability of the sensor to capture very fast moving objects, rotating at 10 K revolutions per second, has been verified experimentally, and a compact preamplification stage has been introduced that improves the minimum detectable contrast over previous designs.
A Neuromorphic Cortical-Layer Microchip for Spike-Based Event Processing Vision Systems
TLDR
A neuromorphic cortical-layer processing microchip for address event representation (AER) spike-based processing systems that computes convolutions of programmable kernels over the AER visual input information flow and allows for bio-inspired coincidence-detection processing.
A Five-Decade Dynamic-Range Ambient-Light-Independent Calibrated Signed-Spatial-Contrast AER Retina With 0.1-ms Latency and Optional Time-to-First-Spike Mode
TLDR
A spatial-contrast AER retina with a signed output is presented that shows much less mismatch, is almost insensitive to ambient illumination, and has far less critical biasing than the original voltage-biasing scheme.
Fast sensory motor control based on event-based hybrid neuromorphic-procedural system
TLDR
A hybrid neuromorphic-procedural system consisting of an address-event silicon retina, a computer, and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal.