Enhanced Quadratic Video Interpolation

@inproceedings{Liu2020EnhancedQV,
  title={Enhanced Quadratic Video Interpolation},
  author={Yihao Liu and Liangbin Xie and Li Siyao and Wenxiu Sun and Yu Qiao and Chao Dong},
  booktitle={ECCV Workshops},
  year={2020}
}
With the prosperity of the digital video industry, video frame interpolation has attracted continuous attention in the computer vision community and become a new focus in industry. Many learning-based methods have been proposed and have achieved promising results. Among them, a recent algorithm named quadratic video interpolation (QVI) achieves appealing performance. It exploits higher-order motion information (e.g. acceleration) and successfully models the estimation of interpolated flow. However, its…
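For reference, the quadratic motion model that QVI builds on can be stated compactly: given the optical flows from frame 0 to frames 1 and -1, each pixel is assumed to move along a constant-acceleration trajectory, which determines its flow to any intermediate time t. A minimal NumPy sketch of this idea (function and variable names are illustrative, not taken from the released code):

import numpy as np

def quadratic_flow(f_0_to_1, f_0_to_m1, t):
    # Constant-acceleration model: a pixel's displacement from frame 0 to
    # time t in (0, 1) is v0 * t + 0.5 * a * t^2. The velocity v0 and
    # acceleration a follow from the two observed flows, since
    # f_{0->1} = v0 + 0.5*a and f_{0->-1} = -v0 + 0.5*a.
    accel_half = 0.5 * (f_0_to_1 + f_0_to_m1)  # 0.5 * a, shape (H, W, 2)
    velocity = 0.5 * (f_0_to_1 - f_0_to_m1)    # v0, shape (H, W, 2)
    return accel_half * t ** 2 + velocity * t  # estimated flow f_{0->t}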

Citations

Revisiting Adaptive Convolutions for Video Frame Interpolation
This work shows, somewhat surprisingly, that it is possible to achieve near state-of-the-art results with an older, simpler approach, namely adaptive separable convolutions, through a subtle set of low-level improvements.
AIM 2020 Challenge on Video Temporal Super-Resolution
This paper reports the second AIM challenge on Video Temporal Super-Resolution (VTSR), a.k.a. frame interpolation, with a focus on the proposed solutions, results, and analysis; the enhanced quadratic video interpolation method is among the proposed solutions.
NTIRE 2021 Challenge on Video Super-Resolution
This paper presents evaluation results from the two competition tracks of the NTIRE 2021 Challenge on Video Super-Resolution, together with the proposed solutions, which develop conventional video SR methods focusing on restoration quality.
CDFI: Compression-Driven Network Design for Frame Interpolation
This work proposes a compression-driven network design for frame interpolation (CDFI) that leverages model pruning through sparsity-inducing optimization to significantly reduce the model size while achieving superior performance.
RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation
A real-time intermediate flow estimation algorithm (RIFE) for video frame interpolation (VFI) is proposed; it can be trained end-to-end and achieves state-of-the-art results on several benchmarks.

References

SHOWING 1-10 OF 25 REFERENCES
Quadratic video interpolation
This work proposes a quadratic video interpolation method which exploits the acceleration information in videos, allows prediction with curvilinear trajectory and variable velocity, and generates more accurate interpolation results.
Depth-Aware Video Frame Interpolation
A video frame interpolation method which explicitly detects occlusion by exploiting depth information, and develops a depth-aware flow projection layer to synthesize intermediate flows that preferentially sample closer objects rather than farther ones.
FeatureFlow: Robust Video Interpolation via Structure-to-Texture Generation
This work devises a novel structure-to-texture generation framework which splits the video interpolation task into two stages, structure-guided interpolation and texture refinement, and is the first work that attempts to directly generate the intermediate frame by blending deep features.
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation
This work proposes an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where motion interpretation and occlusion reasoning are jointly modeled.
Channel Attention Is All You Need for Video Frame Interpolation
A simple but effective deep neural network for video frame interpolation which is end-to-end trainable, free from a motion estimation component, and achieves outstanding performance compared to existing models that include an optical flow component.
AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation
A new warping module named Adaptive Collaboration of Flows (AdaCoF) estimates both kernel weights and offset vectors for each target pixel to synthesize the output frame; a dual-frame adversarial loss specific to video frame interpolation is also introduced.
Video Frame Synthesis Using Deep Voxel Flow
This work addresses the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation) or subsequent to them (extrapolation), by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which is called deep voxel flow.
Video Frame Interpolation via Adaptive Convolution
This paper presents a robust video frame interpolation method that treats pixel synthesis for the interpolated frame as local convolution over two input frames and employs a deep fully convolutional neural network to estimate a spatially-adaptive convolution kernel for each pixel (a toy sketch of this per-pixel adaptive convolution follows the reference list).
PhaseNet for Video Frame Interpolation
This work proposes a new approach, PhaseNet, designed to robustly handle challenging scenarios while also coping with larger motion; it is shown to be superior to the hand-crafted heuristics previously used in phase-based methods and compares favorably to recent deep-learning-based approaches for video frame interpolation on challenging datasets.
Video Enhancement with Task-Oriented Flow
Task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner, is proposed; it outperforms traditional optical flow on standard benchmarks as well as the Vimeo-90K dataset in three video processing tasks.
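As referenced in the adaptive-convolution entry above, the per-pixel synthesis idea can be illustrated in a few lines: each output pixel is a weighted sum of a local patch from each of the two input frames, with the weights predicted for that specific pixel. A toy NumPy sketch with illustrative names (the kernels are assumed to be given rather than predicted by a network):

import numpy as np

def synthesize_pixel(patch_prev, patch_next, kernel_prev, kernel_next):
    # patch_prev, patch_next: (K, K) neighborhoods around the target pixel
    # in the two input frames.
    # kernel_prev, kernel_next: (K, K) per-pixel weights that a network
    # would normally predict; here they are simply passed in.
    return float((patch_prev * kernel_prev).sum() +
                 (patch_next * kernel_next).sum())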