Learning to Super Resolve Intensity Images From Events

@inproceedings{Isfahani2020LearningTS,
  title={Learning to Super Resolve Intensity Images From Events},
  author={Sayed Mohammad Mostafavi Isfahani and Jonghyun Choi and Kuk-Jin Yoon},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={2765--2773}
}
An event camera detects per-pixel intensity differences and produces an asynchronous event stream with low latency, high dynamic range, and low power consumption. As a trade-off, the event camera has low spatial resolution. We propose an end-to-end network to reconstruct high-resolution, high dynamic range (HDR) images directly from the event stream. We evaluate our algorithm on both simulated and real-world sequences and verify that it captures fine details of a scene and outperforms the…
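As a rough illustration of the event generation model described in the abstract (a sketch under assumed log-intensity thresholding, not the authors' pipeline; the function name and threshold value are hypothetical), the following snippet converts ordinary intensity frames into an asynchronous stream of (x, y, t, polarity) events:

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert a sequence of intensity frames into an event stream.

    An event (x, y, t, polarity) is emitted whenever the log intensity
    at a pixel has changed by more than `threshold` since the last event
    fired at that pixel. Illustrative sketch of the sensing model only.
    """
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_cur - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((x, y, t, polarity))
            # Reset the reference only at pixels that fired.
            log_ref[y, x] = log_cur[y, x]
    return events
```

Because events fire per pixel rather than per frame, brightness changes are reported with sub-frame latency, while static pixels produce no data at all.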
Citations

Event Enhanced High-Quality Image Recovery
TLDR: An explainable network, an event-enhanced sparse learning network (eSL-Net), to recover high-quality images from event cameras, which can improve the performance of the state of the art by 7-12 dB.
EFI-Net: Video Frame Interpolation from Fusion of Events and Frames
Event cameras are sensors with pixels that respond independently and asynchronously to changes in scene illumination. Event cameras have a number of advantages when compared to conventional cameras: …
Learning To Reconstruct High Speed and High Dynamic Range Videos From Events
Event cameras are novel sensors that capture the dynamics of a scene asynchronously. Such cameras record event streams with much shorter response latency than images captured by conventional cameras, …
Motion segmentation and tracking for integrating event cameras
TLDR: This paper presents a new scheme for event compression that has many analogues to traditional framed video compression techniques and introduces an application "in the loop" framework, where the application dynamically informs the camera how sensitive each pixel should be, based on the efficacy of the most recent data received.
Quadtree Driven Lossy Event Compression
TLDR: This paper performs lossy event compression (LEC) based on a quadtree (QT) segmentation map derived from an adjacent image, which provides a priority map for the 3D space-time volume, albeit in a 2D manner.
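To illustrate the kind of quadtree segmentation such a method builds on (a minimal sketch using an assumed variance-based split criterion, not the paper's actual LEC priority rule), an image can be recursively divided into leaf blocks until each block is nearly uniform:

```python
import numpy as np

def quadtree_split(img, max_var=25.0, min_size=2):
    """Recursively split an image into quadtree leaf blocks.

    A block is split into four quadrants while its intensity variance
    exceeds `max_var` and it is larger than `min_size`. Returns a list
    of (x, y, w, h) leaves; illustrative sketch only.
    """
    blocks = []

    def split(x, y, w, h):
        block = img[y:y + h, x:x + w]
        if w <= min_size or h <= min_size or block.var() <= max_var:
            blocks.append((x, y, w, h))
            return
        hw, hh = w // 2, h // 2
        split(x, y, hw, hh)                      # top-left quadrant
        split(x + hw, y, w - hw, hh)             # top-right quadrant
        split(x, y + hh, hw, h - hh)             # bottom-left quadrant
        split(x + hw, y + hh, w - hw, h - hh)    # bottom-right quadrant

    split(0, 0, img.shape[1], img.shape[0])
    return blocks
```

Flat regions collapse into a few large leaves while textured regions are subdivided finely, which is what makes such a map usable as a spatial priority map for compression.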
Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation
Event cameras are novel sensors with outstanding properties such as high temporal resolution and high dynamic range. Despite these characteristics, event-based vision has been held back by the …
EventZoom: Learning To Denoise and Super Resolve Neuromorphic Events
We address the problem of jointly denoising and super resolving neuromorphic events, a novel visual signal that represents thresholded temporal gradients in a space-time window. The challenge for …
Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream
The recently invented retina-inspired spike camera has shown great potential for capturing dynamic scenes. Different from the conventional digital cameras that compact the photoelectric information …
Superevents: Towards Native Semantic Segmentation for Event-based Cameras
TLDR: A novel method is presented that employs lifetime augmentation for obtaining an event stream representation that is fed to a fully convolutional network to extract superevents, which are perceptually consistent local units that delineate parts of an object in a scene.
Removing Blocking Artifacts in Video Streams Using Event Cameras
TLDR: EveRestNet is a convolutional neural network designed to remove blocking artifacts in video streams using events from neuromorphic sensors, and is able to improve the image quality.

References

Showing 1-10 of 29 references
High Speed and High Dynamic Range Video with an Event Camera
TLDR: This work proposes a novel recurrent network to reconstruct videos from a stream of events, and trains it on a large amount of simulated event data, and shows that off-the-shelf computer vision algorithms can be applied to the reconstructions and that this strategy consistently outperforms algorithms that were specifically designed for event data.
Continuous-time Intensity Estimation Using Event Cameras
TLDR: A computationally efficient, asynchronous filter is proposed that continuously fuses image frames and events into a single high-temporal-resolution, high-dynamic-range image state and outperforms existing state-of-the-art methods.
Learning an Event Sequence Embedding for Dense Event-Based Deep Stereo
TLDR: A new module for event sequence embedding is introduced, which is the first learning-based stereo method for an event-based camera and the only method that produces dense results on the Multi Vehicle Stereo Event Camera Dataset (MVSEC).
Simultaneous Optical Flow and Intensity Estimation from an Event Camera
TLDR: This work proposes, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image while the camera undergoes a generic motion through any scene, within a sliding-window time interval.
Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation
TLDR: This work proposes a variational model that accurately captures the behaviour of event cameras, enabling reconstruction of intensity images at arbitrary frame rates in real time, and verifies that solving the variational model on the manifold produces high-quality images without explicitly estimating optical flow.
Simultaneous Mosaicing and Tracking with an Event Camera
TLDR: This work shows for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high-quality mosaic of a scene which is super-resolution accurate and has high dynamic range.
ESIM: an Open Event Camera Simulator
TLDR: This work presents the first event camera simulator that can generate a large amount of reliable event data, built on a theoretically sound, adaptive rendering scheme that only samples frames when necessary, and releases an open-source implementation of the simulator.
Event-Based High Dynamic Range Image and Very High Frame Rate Video Generation Using Conditional Generative Adversarial Networks
TLDR: The potential of event camera-based conditional generative adversarial networks to create images/videos from an adjustable portion of the event data stream is unlocked, and the results are evaluated by comparison with the intensity images captured on the same pixel grid-line of events.
End-to-End Learning of Representations for Asynchronous Event-Based Data
TLDR: This work introduces a general framework to convert event streams into grid-based representations by means of strictly differentiable operations, and lays out a taxonomy that unifies the majority of extant event representations in the literature and identifies novel ones.
The Multivehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception
TLDR: This letter presents a large dataset with a synchronized stereo pair event-based camera system, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments.