Corpus ID: 229181044

FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation

@article{Kalluri2020FLAVRFV,
  title={FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation},
  author={Tarun Kalluri and Deepak Pathak and Manmohan Chandraker and Du Tran},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.08512}
}
A majority of approaches solve the problem of video frame interpolation by computing bidirectional optical flow between adjacent frames of a video, followed by a suitable warping algorithm to generate the output frames. However, methods relying on optical flow often fail to model occlusions and complex non-linear motions directly from the video and introduce additional bottlenecks unsuitable for real-time deployment. To overcome these limitations, we propose a flexible and efficient architecture…
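The abstract above is truncated at the page boundary. For context, below is a minimal sketch (PyTorch) of the conventional flow-based interpolation pipeline the abstract contrasts against: backward-warp the two input frames with linearly scaled bidirectional flows and blend the results. This is an illustrative assumption, not FLAVR's own method (FLAVR is flow-agnostic); the tensor layouts and the helper names backward_warp and interpolate_flow_based are hypothetical.

# Minimal sketch (not FLAVR): conventional flow-based frame interpolation.
# Assumes bidirectional optical flows between the two input frames were
# precomputed by an external flow estimator; shapes are illustrative.
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Backward-warp frame (B,C,H,W) with a dense flow field (B,2,H,W) in pixels."""
    _, _, h, w = frame.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xx, yy), dim=0).float().to(frame.device)   # (2,H,W), (x,y)
    coords = base.unsqueeze(0) + flow                              # (B,2,H,W)
    # Normalize sampling coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                           # (B,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)

def interpolate_flow_based(frame0, frame1, flow_01, flow_10, t=0.5):
    """Blend backward-warped endpoint frames at time t, assuming linear motion."""
    # Linearly scaling the bidirectional flows is the standard approximation;
    # it is exactly what breaks down under occlusion and non-linear motion.
    warped0 = backward_warp(frame0, -t * flow_01)
    warped1 = backward_warp(frame1, -(1.0 - t) * flow_10)
    return (1.0 - t) * warped0 + t * warped1

The linear-motion assumption in the flow scaling is where occlusions and complex motion break this pipeline, which motivates the flow-free design described in the abstract.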
Citations

Efficient Space-time Video Super Resolution using Low-Resolution Flow and Mask Upsampling
TLDR: This paper explores an efficient solution for space-time super-resolution, aiming to generate high-resolution slow-motion videos from low-resolution, low-frame-rate videos, and uses a refinement network to improve the quality of the HR intermediate frame via residual learning.
NTIRE 2021 Challenge on Video Super-Resolution
TLDR: This paper presents evaluation results from the two competition tracks of the NTIRE 2021 Challenge on Video Super-Resolution, along with the proposed solutions, which develop conventional video SR methods focusing on restoration quality.
RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation
TLDR: Proposes RIFE, a real-time intermediate flow estimation algorithm for video frame interpolation (VFI) that can be trained end-to-end and achieves state-of-the-art performance on several benchmarks.

References

Showing 1-10 of 79 references
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation
TLDR: This work proposes an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled.
Depth-Aware Video Frame Interpolation
TLDR: A video frame interpolation method that explicitly detects occlusion by exploiting depth information, and develops a depth-aware flow projection layer to synthesize intermediate flows that preferentially sample closer objects over farther ones.
Channel Attention Is All You Need for Video Frame Interpolation
TLDR: A simple but effective deep neural network for video frame interpolation that is end-to-end trainable and free from any motion estimation component, achieving outstanding performance compared to existing models that include an optical flow computation component.
AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation
TLDR: A new warping module, Adaptive Collaboration of Flows (AdaCoF), which estimates both kernel weights and offset vectors for each target pixel to synthesize the output frame, together with a dual-frame adversarial loss applicable specifically to video frame interpolation tasks.
PhaseNet for Video Frame Interpolation
TLDR: This work proposes a new approach, PhaseNet, designed to robustly handle challenging scenarios while also coping with larger motion; it is shown to be superior to the hand-crafted heuristics previously used in phase-based methods and compares favorably to recent deep-learning-based approaches for video frame interpolation on challenging datasets.
Video Frame Synthesis Using Deep Voxel Flow
TLDR: This work addresses the problem of synthesizing new video frames in an existing video, either in between existing frames (interpolation) or subsequent to them (extrapolation), by training a deep network that learns to synthesize frames by flowing pixel values from existing ones, a mechanism called deep voxel flow.
Context-Aware Synthesis for Video Frame Interpolation
  • Simon Niklaus, Feng Liu
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
TLDR: A context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame, outperforming representative state-of-the-art approaches.
Scene-Adaptive Video Frame Interpolation via Meta-Learning
TLDR: This work shows the benefits of test-time adaptation through simple fine-tuning of a network, then improves its efficiency by incorporating meta-learning, obtaining significant performance gains with only a single gradient update and no additional parameters.
Video Frame Interpolation via Deformable Separable Convolution
TLDR: Experimental results demonstrate that the DSepConv method significantly outperforms other kernel-based interpolation methods and performs on par with or better than state-of-the-art algorithms both qualitatively and quantitatively.
Deep Video Frame Interpolation Using Cyclic Frame Generation
TLDR: Introduces a new loss term, the cycle consistency loss, which makes better use of the training data to not only enhance interpolation results but also maintain performance with less training data.