Bringing a Blurry Frame Alive at High Frame-Rate With an Event Camera

@article{Pan2019BringingAB,
  title={Bringing a Blurry Frame Alive at High Frame-Rate With an Event Camera},
  author={Liyuan Pan and Cedric Scheerlinck and Xin Yu and Richard I. Hartley and Miaomiao Liu and Yuchao Dai},
  journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={6813-6822}
}
Event-based cameras can measure intensity changes (called ‘events’) with microsecond accuracy under high-speed motion and in challenging lighting conditions. Thanks to the active pixel sensor (APS), an event camera can simultaneously output intensity frames. However, these frames are captured at a relatively low frame rate and often suffer from motion blur. A blurry image can be regarded as the integral of a sequence of latent images, while the events indicate the changes between the…
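The truncated final sentence gestures at the core model, which can be written out per pixel. The following sketch uses standard event-camera assumptions rather than the paper's exact notation: B is the blurry frame, L(t) the latent image, T the exposure time centered at frame timestamp f, e(s) the signed event stream at the pixel, and c the contrast threshold.

    B = \frac{1}{T} \int_{f - T/2}^{f + T/2} L(t)\, dt,
    \qquad
    L(t) = L(f)\, \exp\!\Big( c \int_{f}^{t} e(s)\, ds \Big)

Substituting the second relation into the first and solving gives

    L(f) = \frac{T\, B}{\int_{f - T/2}^{f + T/2} \exp\!\big( c \int_{f}^{t} e(s)\, ds \big)\, dt}

and because the events alone fix the ratio of the latent image between any two instants, re-evaluating the exponential factor at other timestamps turns the single recovered frame into a high frame-rate video.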

Bringing Blurry Alive at High Frame-Rate with an Event Camera

TLDR
A simple and effective approach to reconstruct a high-quality, high frame-rate sharp video that achieves significant improvements in removing general blurs and reconstructing high temporal resolution video by optimizing the energy model.

High Frame Rate Video Reconstruction Based on an Event Camera

TLDR
This work proposes a simple yet effective approach to reconstruct high-quality and high frame rate sharp videos by associating event data to a latent sharp image and provides a new and more efficient solver to minimize the proposed energy model.

Single Image Optical Flow Estimation With an Event Camera

TLDR
Experimental results on both synthetic and real data (with blurred and non-blurred images) show the superiority of the model in comparison to state-of-the-art approaches.

EFI-Net: Video Frame Interpolation from Fusion of Events and Frames

TLDR
This work demonstrates that it is possible to combine the best of both worlds by fusing a color frame stream at low temporal resolution and high spatial resolution with an event stream at high temporal resolution and low spatial resolution, generating a video stream with both high temporal and spatial resolutions while preserving the original color information.

Event-Based Vision: A Survey

TLDR
This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.

Time Lens: Event-based Video Frame Interpolation

TLDR
Time Lens is introduced, a novel method that leverages the advantages of both synthesis-based and flow-based approaches and shows an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.

Robust Intensity Image Reconstruction Based On Event Cameras

TLDR
This paper proposes a variational model that uses spatial smoothness regularization to recover clean image frames from blurry, noisy camera images and events at any frame rate, and presents experimental results demonstrating that the proposed algorithm is superior to other reconstruction algorithms.
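To make "spatial smoothness regularization" concrete, such variational models typically minimize an energy of the form below; this generic form is only an illustration, since the summary does not give the paper's exact data term or regularizer, and all symbols are assumptions of this sketch.

    \min_{u} \; \mathcal{D}\big(u;\, B, \{e_i\}\big) \;+\; \lambda \int_{\Omega} \|\nabla u\|\, dx

Here u is the latent clean frame, \mathcal{D} measures consistency with the blurry, noisy frame B and the events \{e_i\}, the total-variation-style second term penalizes non-smooth solutions, and \lambda balances the two.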

Turning Frequency to Resolution: Video Super-resolution via Event Cameras

TLDR
An event-based VSR framework (E-VSR) is proposed; its key component is an asynchronous interpolation (EAI) module that reconstructs, from an event stream, a high-frequency video stream with uniform, tiny pixel displacements between neighboring frames.

Learning Event-Driven Video Deblurring and Interpolation

TLDR
An effective event-driven video deblurring and interpolation algorithm based on deep convolutional neural networks (CNNs) is proposed that achieves superior performance against state-of-the-art methods on both synthetic and real datasets.

CED: Color Event Camera Dataset

TLDR
This work presents and releases the first Color Event Camera Dataset (CED), containing 50 minutes of footage with both color frames and events, and presents an extension of the event camera simulator ESIM that enables simulation of color events.
...

References

Showing 1-10 of 44 references

High-speed video generation with an event camera

TLDR
This work demonstrates how to use the event camera to generate high-speed videos of 2D motion augmented with foreground and background images taken from a conventional camera, and builds a parametric model of affine motion to create image sequences.
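For reference, a parametric affine motion model in the usual sense maps pixel coordinates as

    \begin{pmatrix} x' \\ y' \end{pmatrix}
    =
    \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
    \begin{pmatrix} x \\ y \end{pmatrix}
    +
    \begin{pmatrix} t_x \\ t_y \end{pmatrix}

with six parameters covering translation, rotation, scale, and shear; the symbols here are the standard affine notation, not taken from the paper.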

Simultaneous Optical Flow and Intensity Estimation from an Event Camera

TLDR
This work proposes, to the best of the authors' knowledge, the first algorithm to simultaneously recover the motion field and brightness image while the camera undergoes a generic motion through any scene, within a sliding-window time interval.

Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation

TLDR
This work proposes a variational model that accurately models the behaviour of event cameras, enabling reconstruction of intensity images with arbitrary frame rate in real-time and verifies that solving the variations on the manifold produces high-quality images without explicitly estimating optical flow.

Simultaneous Mosaicing and Tracking with an Event Camera

TLDR
This work shows for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range.

Direct face detection and video reconstruction from event cameras

TLDR
This work proposes and develops a patch-based model for the event streams acquired from event cameras, and demonstrates the first direct face detection from event streams, highlighting the potential of these event-based cameras for power-efficient vision applications.

Photorealistic image reconstruction from hybrid intensity and event-based sensor

TLDR
A method is proposed to reconstruct photorealistic intensity images from a hybrid sensor consisting of a low-frame-rate conventional camera and the event sensor, producing results more photorealistic than those of previous state-of-the-art algorithms.

Continuous-time Intensity Estimation Using Event Cameras

TLDR
A computationally efficient, asynchronous filter is proposed that continuously fuses image frames and events into a single high-temporal-resolution, high-dynamic-range image state and outperforms existing state-of-the-art methods.
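A per-pixel complementary filter is one concrete way to realize this kind of continuous frame-event fusion: events drive fast additive updates while the latest frame anchors a slow exponential pull. The sketch below is a simplified illustration, not the paper's implementation; the function name, the update rule, and the parameters alpha and c are all assumptions.

    import numpy as np

    def fuse_events_and_frames(events, frames, shape, alpha=2.0, c=0.1):
        # events: iterable of (t, x, y, polarity) with polarity in {-1, +1}
        # frames: list of (t, log_image) pairs sorted by time
        state = np.zeros(shape)    # running log-intensity estimate
        anchor = np.zeros(shape)   # log intensity of the most recent frame
        last_t = np.zeros(shape)   # per-pixel time of the last update

        frame_iter = iter(frames)
        next_frame = next(frame_iter, None)

        for t, x, y, pol in events:
            # absorb any frames that arrived before this event
            while next_frame is not None and next_frame[0] <= t:
                anchor = next_frame[1]
                next_frame = next(frame_iter, None)
            # low-pass term: relax the estimate toward the latest frame
            decay = np.exp(-alpha * (t - last_t[y, x]))
            state[y, x] = anchor[y, x] + decay * (state[y, x] - anchor[y, x])
            # high-pass term: apply the event's log-intensity step
            state[y, x] += c * pol
            last_t[y, x] = t
        return state

Between events a pixel relaxes exponentially toward the frame value, and each event injects an instantaneous step, which is how the fused state can keep the frames' absolute intensity and the events' temporal resolution at once.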

Semi-Dense 3D Reconstruction with a Stereo Event Camera

TLDR
The proposed method consists of the optimization of an energy function designed to exploit small-baseline spatio-temporal consistency of events triggered across both stereo image planes to improve the density of the reconstruction and to reduce the uncertainty of the estimation.

Learning to Extract a Video Sequence from a Single Motion-Blurred Image

TLDR
This work presents a deep learning scheme that gradually reconstructs a temporal ordering by sequentially extracting pairs of frames, and introduces loss functions invariant to the temporal order, letting the neural network choose during training which frame to output among the possible combinations.
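A standard way to build such an order-invariant loss, offered here as an illustration since the summary does not give the paper's exact form, is to score a predicted frame pair against the ground truth under both admissible orderings and keep the cheaper one:

    \mathcal{L}(\hat{f}_1, \hat{f}_2)
    = \min\Big( \|\hat{f}_1 - f_1\| + \|\hat{f}_2 - f_2\|,\;
                \|\hat{f}_1 - f_2\| + \|\hat{f}_2 - f_1\| \Big)

The network is then never penalized for committing to one of the two temporal directions, which are indistinguishable from a single blurry image.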

Asynchronous Spatial Image Convolutions for Event Cameras

TLDR
A method to compute the convolution of a linear spatial kernel with the output of an event camera, providing an implementation of a Harris corner-response “state” that can be used in real time for feature detection and tracking on robotic systems.
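Because convolution is linear, the per-event update behind such an asynchronous scheme is simple: each event stamps a scaled, shifted copy of the kernel into a running convolved-image state, so the state tracks kernel * image without ever forming the image itself. A minimal sketch with assumed names and an assumed contrast scale c:

    import numpy as np

    def update_convolved_state(S, kernel, x, y, pol, c=0.1):
        # S: running convolved-image state, shape (h, w)
        # kernel: odd-sized spatial kernel; pol: event polarity in {-1, +1}
        kh, kw = kernel.shape
        ry, rx = kh // 2, kw // 2
        h, w = S.shape
        # clip the kernel footprint at the image border
        y0, y1 = max(y - ry, 0), min(y + ry + 1, h)
        x0, x1 = max(x - rx, 0), min(x + rx + 1, w)
        ky0, kx0 = y0 - (y - ry), x0 - (x - rx)
        S[y0:y1, x0:x1] += pol * c * kernel[ky0:ky0 + (y1 - y0),
                                            kx0:kx0 + (x1 - x0)]
        return S

Feeding, say, a Sobel kernel through this update keeps a gradient image current at event rate, from which a Harris-style corner response can be read off at any instant, matching the "state" idea in the summary.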