An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

@article{CamuasMesa2012AnEM,
  title={An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors},
  author={Luis A. Camu{\~n}as-Mesa and Carlos Zamarre{\~n}o-Ramos and Alejandro Linares-Barranco and Antonio Acosta-Jim{\'e}nez and Teresa Serrano-Gotarredona and Bernab{\'e} Linares-Barranco},
  journal={IEEE Journal of Solid-State Circuits},
  year={2012},
  volume={47},
  pages={504-517}
}
Event-driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as conventional video and computer vision systems do. In an event-driven sensor, each pixel autonomously and asynchronously decides when to send its address out. The sensor output is therefore a continuous stream of address events that represents the scene dynamically and continuously, without being constrained to frames. In this paper we…
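To make the frame-free representation concrete, the following is a minimal sketch (my own illustration, not code from the paper) of how an address-event stream can be modeled in software: each event carries a timestamp and the emitting pixel's address, and a consumer handles events one at a time rather than waiting for complete frames. All names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class AddressEvent:
        # One address event: a pixel announces its own (x, y) address the
        # moment its local change detector fires; there is no frame clock.
        timestamp_us: int   # microsecond timestamp assigned at readout
        x: int              # pixel column address
        y: int              # pixel row address
        polarity: int       # +1 for brighter, -1 for darker

    def process_stream(events):
        # Consume events strictly in arrival order; latency is per event,
        # not per frame.
        for ev in sorted(events, key=lambda e: e.timestamp_us):
            print(f"t={ev.timestamp_us} us  pixel=({ev.x},{ev.y})  pol={ev.polarity:+d}")

    process_stream([AddressEvent(10, 3, 7, +1), AddressEvent(42, 4, 7, -1)])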
Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed Forward ConvNets.
TLDR
This paper presents a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation, studied through event-driven convolutional neural networks (ConvNets) trained to recognize rotating human silhouettes or high-speed poker card symbols.
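The core of the mapping idea is that a frame-driven activation value becomes an event rate. As a hedged illustration of low-rate rate coding in general (not the authors' exact procedure; names are hypothetical):

    import random

    def rate_code(activation, duration_s, max_rate_hz=100.0):
        # Encode a normalized activation in [0, 1] as a low-rate Poisson
        # event train whose mean rate is activation * max_rate_hz.
        rate = max(0.0, min(1.0, activation)) * max_rate_hz
        if rate == 0.0:
            return []
        t, events = 0.0, []
        while True:
            t += random.expovariate(rate)  # exponential inter-event interval
            if t >= duration_s:
                return events
            events.append(t)

    print(len(rate_code(0.8, duration_s=1.0)))  # roughly 80 events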
High-speed Motion Detection using Event-based Sensing
TLDR
A processing architecture for high-speed motion analysis, based on processing the SCD pixel stream, has been developed and implemented on a field-programmable gate array (FPGA); it is small enough to be mounted on an autonomous system.
Event-driven sensing and processing for high-speed robotic vision
TLDR
An event-driven sensor chip (called Dynamic Vision Sensor, or DVS), together with event-driven convolution module arrays implemented on high-end FPGAs, is used to create a new vision paradigm in which sensors and processors use visual information that is not represented by sequences of frames.
Event-based Row-by-Row Multi-convolution engine for Dynamic-Vision Feature Extraction on FPGA
TLDR
This study presents an event-based convolution engine for FPGA that models an array of leaky integrate-and-fire neurons, can apply different kernel sizes from 1×1 to 7×7, and is able to process 64 feature maps row by row.
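The per-event operation shared by these convolution modules is simple to sketch in software. Below is a simplified model of my own (under stated assumptions: clipped borders, no leak between events, hypothetical names), not the row-by-row FPGA design itself: each incoming event adds the kernel, centered on the event address, to an array of integrate-and-fire neuron states, and neurons crossing threshold emit output events and reset.

    import numpy as np

    def event_conv_step(state, kernel, ev_x, ev_y, threshold=1.0):
        # Project the kernel onto the neuron-state array around the event
        # address, clipping at the array borders.
        k = kernel.shape[0] // 2
        h, w = state.shape
        y0, y1 = max(ev_y - k, 0), min(ev_y + k + 1, h)
        x0, x1 = max(ev_x - k, 0), min(ev_x + k + 1, w)
        ky, kx = y0 - (ev_y - k), x0 - (ev_x - k)
        state[y0:y1, x0:x1] += kernel[ky:ky + (y1 - y0), kx:kx + (x1 - x0)]
        # Fire-and-reset every neuron whose state crossed threshold.
        fired = np.argwhere(state >= threshold)
        state[state >= threshold] = 0.0
        return [(int(x), int(y)) for y, x in fired]

    state = np.zeros((8, 8))
    out_events = event_conv_step(state, np.full((3, 3), 0.5), ev_x=4, ev_y=4)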
Fast Pipeline 128×128 pixel spiking convolution core for event-driven vision processing in FPGAs
TLDR
A digital implementation of a parallel, pipelined spiking convolutional neural network (S-ConvNet) core for processing spikes in an event-driven system; the pipeline updates the state of a 128-neuron row in just 12 ns.
Exploiting Lightweight Statistical Learning for Event-Based Vision Processing
This paper presents a lightweight statistical learning framework potentially suitable for low-cost event-based vision systems, where visual information is captured by a dynamic vision sensor (DVS).
A Selective Change Driven System for High-Speed Motion Analysis
TLDR
This system, built with the recently developed 64 × 64 CMOS SCD sensor, shows the potential of the SCD approach when combined with a hardware processing system.
A signed pulse-train-based image processor-array for parallel kernel convolution in vision sensors
TLDR
The presented processor array can be used for high-speed kernel-convolution image processing tasks, including arbitrary-size edge detection and sharpening functions that require negative and fractional kernel values.
Asynchronous Neuromorphic Event-Driven Image Filtering
TLDR
A filtering methodology for asynchronously acquired gray-level data from an event-driven time-encoding imager; based on the number of operations to be carried out, event-based processing outperforms frame-based processing in terms of computational cost.
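That cost claim can be checked with a back-of-the-envelope count (my own arithmetic, with illustrative symbols): a frame-based K×K convolution over a W×H image at frame rate f performs W·H·K²·f multiply-accumulates per second, while an event-driven implementation performs K² updates per event, i.e. R·K² per second at event rate R:

    \[
      \text{ops}_{\text{frame}} = W H K^2 f,
      \qquad
      \text{ops}_{\text{event}} = R K^2,
      \qquad
      \frac{\text{ops}_{\text{event}}}{\text{ops}_{\text{frame}}} = \frac{R}{W H f}.
    \]

Event-driven processing therefore wins exactly when the scene is sparse enough that the event rate stays below the pixel-sample rate, R < W·H·f.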

References

SHOWING 1-10 OF 50 REFERENCES
A 32 × 32 Pixel Convolution Processor Chip for Address Event Vision Sensors With 155 ns Event Latency and 20 Meps Throughput
TLDR
This paper presents a 32 × 32 pixel 2-D convolution event processor whose kernel can have arbitrary shape and size up to 32 × 32, and which can be configured to discriminate between two simulated propeller-like shapes rotating simultaneously in the field of view at a speed as high as 9400 rps.
A 128 × 128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor
TLDR
This silicon retina provides an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements by providing high pixel bandwidth, wide dynamic range, and precisely timed sparse digital output.
Large-Scale FPGA-based Convolutional Networks
TLDR
The majority of feature-extraction systems share a common structure composed of a filter bank, a nonlinear operation (quantization, winner-take-all, sparsification, normalization, and/or pointwise saturation), and finally a pooling operation (max, average, or histogramming).
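That common structure is straightforward to state in code. Here is a minimal frame-based sketch of the filter bank → nonlinearity → pooling pipeline the entry describes (illustrative only; function names and shapes are hypothetical):

    import numpy as np

    def feature_extract(image, kernels, pool=2):
        # Filter bank -> pointwise nonlinearity -> max pooling.
        maps = []
        for kern in kernels:
            kh, kw = kern.shape
            h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
            # Valid-mode 2-D correlation (the filter-bank stage).
            m = np.array([[np.sum(image[i:i + kh, j:j + kw] * kern)
                           for j in range(w)] for i in range(h)])
            m = np.maximum(m, 0.0)                # pointwise nonlinearity
            m = m[:h - h % pool, :w - w % pool]   # crop to a pool multiple
            m = m.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
            maps.append(m)                        # pooled feature map
        return maps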
Arbitrated Time-to-First Spike CMOS Image Sensor With On-Chip Histogram Equalization
This paper presents a time-to-first spike (TFS) and address event representation (AER)-based CMOS vision sensor performing image capture and on-chip histogram equalization (HE). The pixel values are…
AER image filtering architecture for vision-processing systems
TLDR
The present paper proposes the architecture, provides a circuit implementation using MOS transistors operated in weak inversion, and shows behavioral simulation results at the system level, together with some electrical simulations.
A 3.6 μs Latency Asynchronous Frame-Free Event-Driven Dynamic-Vision-Sensor
TLDR
The ability of the sensor to capture very fast moving objects, rotating at 10K revolutions per second, has been verified experimentally, and a compact preamplification stage has been introduced that improves the minimum detectable contrast over previous designs.
A Neuromorphic Cortical-Layer Microchip for Spike-Based Event Processing Vision Systems
TLDR
A neuromorphic cortical-layer processing microchip for address event representation (AER) spike-based processing systems that computes convolutions of programmable kernels over the AER visual input flow and allows for bio-inspired coincidence-detection processing.
A low-complexity image compression algorithm for Address-Event Representation (AER) PWM image sensors
TLDR
A low-complexity AER Block Compression (AERBC) algorithm is presented which exploits the statistically ordered nature of AER pixel arrays; the address-vector overhead can be dramatically reduced under this scheme.
CNP: An FPGA-based processor for Convolutional Networks
TLDR
The implementation exploits the inherent parallelism of ConvNets and takes full advantage of multiple hardware multiply-accumulate units on the FPGA; it can be used for low-power, lightweight embedded vision systems for micro-UAVs and other small robots.