Blurry Video Frame Interpolation

@article{Shen2020BlurryVF,
  title={Blurry Video Frame Interpolation},
  author={Wang Shen and Wenbo Bao and Guangtao Zhai and Li Chen and Xiongkuo Min and Zhiyong Gao},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={5113-5122}
}
  • Published 27 February 2020
  • Computer Science
Existing works reduce motion blur and up-convert frame rate through two separate tasks: frame deblurring and frame interpolation. However, few studies have approached the joint video enhancement problem, namely synthesizing high-frame-rate clear results from low-frame-rate blurry inputs. In this paper, we propose a blurry video frame interpolation method to reduce motion blur and up-convert frame rate simultaneously. Specifically, we develop a pyramid module to cyclically synthesize…
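For orientation (the abstract is truncated): work in this line typically assumes that each observed blurry frame is approximately the temporal average of latent sharp frames inside its exposure window, so the joint task amounts to inverting that averaging while also filling in intermediate timestamps. A hedged sketch of that formation model, with illustrative notation that is not taken from the paper:

```latex
% Illustrative blur-formation model (notation assumed, not from the paper):
%   B_i  -- observed blurry frame i, exposure window of half-length \tau around t_i
%   S(t) -- latent sharp frame at time t
B_i \;\approx\; \frac{1}{2\tau}\int_{t_i-\tau}^{t_i+\tau} S(t)\,\mathrm{d}t,
\qquad
\text{goal: recover } S(t_i) \text{ and } S\!\left(\tfrac{t_i+t_{i+1}}{2}\right) \text{ from } \{B_i\}.
```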

Citations

Motion-blurred Video Interpolation and Extrapolation

TLDR
A novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner is presented, and a simple yet effective flow-based rule is proposed to ensure temporal coherence across predicted frames and to address potential temporal ambiguity.

Unifying Motion Deblurring and Frame Interpolation with Events

TLDR
A unified framework of event-based motion deblurring and frame interpolation for blurry video enhancement is presented, where the extremely low latency of events is leveraged to alleviate motion blur and facilitate intermediate frame prediction.

Multiple Video Frame Interpolation via Enhanced Deformable Separable Convolution

TLDR
A novel non-flow, kernel-based approach, referred to as enhanced deformable separable convolution (EDSC), is proposed to estimate not only adaptive kernels but also offsets, masks and biases, enabling the network to obtain information from a non-local neighborhood.

ALANET: Adaptive Latent Attention Network for Joint Video Deblurring and Interpolation

TLDR
A novel architecture, Adaptive Latent Attention Network (ALANET), is introduced, which synthesizes sharp high frame-rate videos with no prior knowledge of input frames being blurry or not, thereby performing the task of both deblurring and interpolation.

Context-based video frame interpolation via depthwise over-parameterized convolution

TLDR
Experimental results demonstrate that the proposed context-based video frame interpolation method via depthwise over-parameterized convolution performs qualitatively and quantitatively better than state-of-the-art methods.

Video Frame Interpolation without Temporal Priors

TLDR
A general curvilinear motion trajectory formula is derived from four consecutive sharp frames or two consecutive blurry frames without temporal priors, which demonstrates that one well-trained model is enough for synthesizing high-quality slow-motion videos under complicated real-world situations.
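To make the trajectory idea concrete, here is a minimal sketch of a quadratic (hence curvilinear) motion model fitted to three observations; it illustrates the general concept only, not the formula derived in the paper:

```python
import numpy as np

# A minimal sketch of a quadratic (curvilinear) motion model, a simple instance
# of the trajectory idea described above -- not the paper's actual formula.
# x_m1, x_0, x_p1 are per-pixel positions (or flows) observed at t = -1, 0, 1.
def quadratic_position(x_m1, x_0, x_p1, t):
    """Predict position at fractional time t assuming x(t) = x_0 + v*t + 0.5*a*t^2."""
    v = (x_p1 - x_m1) / 2.0          # central-difference velocity at t = 0
    a = x_p1 - 2.0 * x_0 + x_m1      # second-difference acceleration
    return x_0 + v * t + 0.5 * a * t ** 2

# toy check: a point moving along x = t^2 is recovered exactly
xs = np.array([1.0, 0.0, 1.0])       # positions at t = -1, 0, 1
print(quadratic_position(xs[0], xs[1], xs[2], 0.5))   # -> 0.25
```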

Beyond a Video Frame Interpolator: A Space Decoupled Learning Approach to Continuous Image Transition

TLDR
This work rethinks the VFI problem and formulates it as a continuous image transition (CIT) task, whose key issue is to transition an image from one space to another continuously, and proposes a space decoupled learning (SDL) approach, which provides an effective framework for a variety of CIT problems beyond VFI.

DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting

In this paper, we propose a novel joint deblurring and multi-frame interpolation (DeMFI) framework, called DeMFI-Net, which accurately converts blurry videos of lower frame rate to sharp videos of higher frame rate.

Video frame interpolation for high dynamic range sequences captured with dual-exposure sensors

TLDR
A neural network trained for VFI tasks that clearly outperforms existing solutions is designed, and a metric for scene motion complexity is proposed that provides important insights into the performance of VFI methods at test time.

Enhanced Deep Animation Video Interpolation

TLDR
This work presents AutoFI, a simple and effective method to automatically render training data for deep animation video interpolation, and proposes a plug-and-play sketch-based post-processing module, named SktFI, to help improve frame interpolation results when moving from natural video to animation video.

References

SHOWING 1-10 OF 41 REFERENCES

Depth-Aware Video Frame Interpolation

TLDR
A video frame interpolation method that explicitly detects occlusion by exploiting depth information and develops a depth-aware flow projection layer to synthesize intermediate flows that preferentially sample closer objects over farther ones.
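A much-simplified sketch of the depth-aware flow projection idea, assuming a scatter-style projection with inverse-depth weighting; the shapes, the rounding to the nearest target pixel, and the flow conventions are illustrative choices, not the paper's actual layer:

```python
import numpy as np

# Flows from frame 0 are projected to time t; where several source pixels land
# on the same target pixel, closer ones (smaller depth) receive larger weight.
def project_flow(flow01, depth0, t):
    """flow01: (H, W, 2) flow frame0 -> frame1; depth0: (H, W); returns flow t -> 0."""
    H, W, _ = flow01.shape
    acc = np.zeros((H, W, 2), dtype=np.float64)
    wsum = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            # where does this source pixel land at time t?
            tx = int(round(x + t * flow01[y, x, 0]))
            ty = int(round(y + t * flow01[y, x, 1]))
            if 0 <= tx < W and 0 <= ty < H:
                w = 1.0 / max(depth0[y, x], 1e-6)       # inverse-depth weight
                acc[ty, tx] += w * (-t * flow01[y, x])  # flow from time t back to frame 0
                wsum[ty, tx] += w
    mask = wsum > 0
    acc[mask] /= wsum[mask][:, None]
    return acc
```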

Deep Video Frame Interpolation Using Cyclic Frame Generation

TLDR
A new loss term, the cycle consistency loss, is introduced; it better utilizes the training data, not only enhancing the interpolation results but also maintaining performance with less training data.
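A minimal sketch of a cycle-consistency loss of this kind, assuming a generic midpoint interpolator `interp` (a placeholder, not the paper's network):

```python
import torch

# Interpolate twice, then check that re-interpolating the two synthesized
# midpoints reconstructs the real middle input frame.
def cycle_consistency_loss(interp, i0, i1, i2):
    """i0, i1, i2: three consecutive sharp frames, shape (B, C, H, W)."""
    mid_01 = interp(i0, i1)            # estimate at t = 0.5
    mid_12 = interp(i1, i2)            # estimate at t = 1.5
    i1_cycle = interp(mid_01, mid_12)  # re-interpolation should give back i1
    return torch.mean(torch.abs(i1_cycle - i1))   # L1 cycle loss
```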

Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation

TLDR
This work proposes an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled.
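A hedged sketch of the time-weighted, visibility-masked blending that such flow-based multi-frame interpolation relies on; the intermediate flows and visibility maps are taken as given here rather than predicted by a network:

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """img: (B, C, H, W); flow: (B, 2, H, W) in pixels, sampling img at x + flow."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img)    # (2, H, W), channels = (x, y)
    coords = grid.unsqueeze(0) + flow                      # (B, 2, H, W)
    # normalize to [-1, 1] as expected by grid_sample
    cx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    cy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    return F.grid_sample(img, torch.stack((cx, cy), dim=-1), align_corners=True)

def synth_frame(i0, i1, f_t0, f_t1, v_t0, v_t1, t):
    """Blend warped frames with time-scaled visibility weights (illustrative)."""
    w0, w1 = (1.0 - t) * v_t0, t * v_t1
    g0, g1 = backward_warp(i0, f_t0), backward_warp(i1, f_t1)
    return (w0 * g0 + w1 * g1) / (w0 + w1 + 1e-8)
```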

EDVR: Video Restoration With Enhanced Deformable Convolutional Networks

TLDR
This work proposes a novel Video Restoration framework with Enhanced Deformable convolutions, termed EDVR, together with a Temporal and Spatial Attention (TSA) fusion module in which attention is applied both temporally and spatially to emphasize important features for subsequent restoration.
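A much-simplified sketch of temporal-attention fusion in the spirit of the TSA module described above (not EDVR's actual implementation; the averaging-based fusion here stands in for EDVR's concatenation and convolutions):

```python
import torch
import torch.nn as nn

class TemporalAttentionFusion(nn.Module):
    """Weight each aligned frame's features by per-pixel similarity to the reference frame."""
    def __init__(self, channels):
        super().__init__()
        self.embed = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, 1)   # fuses the weighted average

    def forward(self, feats, ref=0):
        # feats: (B, T, C, H, W) aligned features; `ref` indexes the reference frame
        B, T, C, H, W = feats.shape
        emb = self.embed(feats.reshape(B * T, C, H, W)).reshape(B, T, C, H, W)
        ref_emb = emb[:, ref]                                          # (B, C, H, W)
        # per-pixel correlation of each frame's embedding with the reference
        attn = torch.sigmoid((emb * ref_emb.unsqueeze(1)).sum(dim=2, keepdim=True))
        weighted = (feats * attn).mean(dim=1)                          # (B, C, H, W)
        return self.fuse(weighted)
```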

Learning to Extract Flawless Slow Motion From Blurry Videos

TLDR
A data-driven approach in which the training data is captured with a high-frame-rate camera and blurry images are simulated through an averaging process; frame rate can be increased further without retraining the network by applying InterpNet recursively between pairs of sharp frames.
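A minimal sketch of the averaging-based blur simulation mentioned above; the window length and stride are illustrative choices, not the paper's settings:

```python
import numpy as np

# Synthesize blurry low-frame-rate frames by averaging windows of a sharp
# high-frame-rate sequence.
def simulate_blurry_sequence(sharp_frames, window=9, stride=8):
    """sharp_frames: (N, H, W, C) high-frame-rate sharp frames, float in [0, 1]."""
    blurry = []
    for start in range(0, len(sharp_frames) - window + 1, stride):
        blurry.append(sharp_frames[start:start + window].mean(axis=0))
    return np.stack(blurry)
```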

Spatio-Temporal Filter Adaptive Network for Video Deblurring

TLDR
The proposed Spatio-Temporal Filter Adaptive Network (STFAN) takes both the blurry and restored images of the previous frame as well as the blurry image of the current frame as input, and dynamically generates spatially adaptive filters for alignment and deblurring.

Video Frame Interpolation via Adaptive Convolution

TLDR
This paper presents a robust video frame interpolation method that considers pixel synthesis for the interpolated frame as local convolution over two input frames and employs a deep fully convolutional neural network to estimate a spatially-adaptive convolution kernel for each pixel.
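A minimal sketch of per-pixel adaptive convolution for interpolation as described above; the kernel-prediction network is omitted and the per-pixel kernels are assumed to be given (and, e.g., already normalized):

```python
import torch
import torch.nn.functional as F

def adaptive_conv_interp(i0, i1, kernels, k=5):
    """Each output pixel is a local convolution over co-located k x k patches of both frames.

    i0, i1: (B, C, H, W); kernels: (B, 2*k*k, H, W) per-pixel weights over both patches.
    """
    B, C, H, W = i0.shape
    # extract k x k patches around every pixel of both frames: (B, C, k*k, H*W)
    p0 = F.unfold(i0, k, padding=k // 2).reshape(B, C, k * k, H * W)
    p1 = F.unfold(i1, k, padding=k // 2).reshape(B, C, k * k, H * W)
    patches = torch.cat((p0, p1), dim=2)             # (B, C, 2*k*k, H*W)
    w = kernels.reshape(B, 1, 2 * k * k, H * W)      # broadcast over channels
    return (patches * w).sum(dim=2).reshape(B, C, H, W)
```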

Unsupervised Video Interpolation Using Cycle Consistency

TLDR
This work proposes unsupervised techniques to synthesize high-frame-rate videos directly from low-frame-rate videos using cycle consistency, and introduces a pseudo-supervised loss term that enforces the interpolated frames to be consistent with predictions of a pre-trained interpolation model.
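A minimal sketch of the pseudo-supervised term described above, with `student` and `teacher` as placeholder callables rather than the paper's networks:

```python
import torch

# The model being adapted is encouraged to agree with a frozen, pre-trained
# interpolation model on unlabeled low-frame-rate data.
def pseudo_supervised_loss(student, teacher, i0, i1):
    with torch.no_grad():
        target = teacher(i0, i1)       # pseudo ground truth for the midpoint
    return torch.mean(torch.abs(student(i0, i1) - target))
```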

Online Video Deblurring via Dynamic Temporal Blending Network

TLDR
An online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for real-time performance; it introduces a novel architecture which extends the receptive field while keeping the overall network size small to enable fast execution.

Video Frame Synthesis Using Deep Voxel Flow

TLDR
This work addresses the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation), by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which is called deep voxel flow.
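A hedged sketch of voxel-flow synthesis as described above: nearest-neighbour sampling stands in for the paper's trilinear interpolation, and the sign convention for the spatial flow is an assumption:

```python
import numpy as np

def voxel_flow_synthesis(i0, i1, flow, w):
    """Copy pixel values from the two existing frames and blend them per pixel.

    i0, i1: (H, W, C) float frames; flow: (H, W, 2) in pixels; w: (H, W) blend weight in [0, 1].
    """
    H, W, _ = i0.shape
    out = np.zeros_like(i0, dtype=np.float64)
    for y in range(H):
        for x in range(W):
            dx, dy = flow[y, x]
            x0, y0 = int(round(x - dx)), int(round(y - dy))   # sample in previous frame
            x1, y1 = int(round(x + dx)), int(round(y + dy))   # sample in next frame
            x0, y0 = np.clip(x0, 0, W - 1), np.clip(y0, 0, H - 1)
            x1, y1 = np.clip(x1, 0, W - 1), np.clip(y1, 0, H - 1)
            out[y, x] = (1.0 - w[y, x]) * i0[y0, x0] + w[y, x] * i1[y1, x1]
    return out
```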