Scene-Adaptive Video Frame Interpolation via Meta-Learning
@article{Choi2020SceneAdaptiveVF,
  title   = {Scene-Adaptive Video Frame Interpolation via Meta-Learning},
  author  = {Myungsub Choi and Janghoon Choi and Sungyong Baik and Tae Hyun Kim and Kyoung Mu Lee},
  journal = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2020},
  pages   = {9441-9450}
}
Video frame interpolation is a challenging problem because there are different scenarios for each video depending on the variety of foreground and background motion, frame rate, and occlusion. It is therefore difficult for a single network with fixed parameters to generalize across different videos. Ideally, one could have a different network for each scenario, but this is computationally infeasible for practical applications. In this work, we propose to adapt the model to each video by making…
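The adaptation described in the abstract can be pictured as a MAML-style inner update at test time: consecutive frame triplets taken from the test video itself provide self-supervision (reconstruct the middle frame from its two neighbors), and one or a few gradient steps specialize the meta-trained interpolator to that scene. The sketch below is a minimal illustration under these assumptions, not the authors' implementation; the model interface, `frame_triplets`, and the hyperparameters are hypothetical placeholders.

```python
# Minimal sketch of scene-adaptive (test-time) adaptation for frame
# interpolation via a MAML-style inner update. Hypothetical interfaces:
# `meta_model(f0, f2)` predicts the frame between f0 and f2.
import copy
import torch
import torch.nn.functional as F

def adapt_to_video(meta_model, frame_triplets, inner_lr=1e-5, steps=1):
    """Specialize a meta-trained interpolator to one test video.

    frame_triplets: iterable of (f0, f1, f2) tensors drawn from the
    test video itself; f1 serves as self-supervised ground truth.
    """
    model = copy.deepcopy(meta_model)      # keep the meta-weights intact
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(steps):                 # even a single step can help
        for f0, f1, f2 in frame_triplets:
            pred = model(f0, f2)           # interpolate the middle frame
            loss = F.l1_loss(pred, f1)     # self-supervised reconstruction
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model                           # scene-adapted interpolator
```

The adapted copy is then used to interpolate the unseen intermediate frames of that same video, while the original meta-weights remain available for the next video.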
30 Citations
Test-Time Adaptation for Video Frame Interpolation via Meta-Learning
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
This work proposes MetaVFI, an adaptive video frame interpolation algorithm that uses additional information readily available at test time, but not exploited in previous works, to obtain significant performance gains with only a single gradient update and without introducing any additional parameters.
Motion-Aware Dynamic Architecture for Efficient Frame Interpolation
- Computer Science · 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
This work proposes an effective framework that assigns varying amounts of computation to different frame regions in a video frame interpolation system, and demonstrates that the proposed framework can significantly reduce the computation cost (FLOPs) while maintaining the performance.
Multiple Video Frame Interpolation via Enhanced Deformable Separable Convolution
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
A novel non-flow, kernel-based approach, referred to as enhanced deformable separable convolution (EDSC), estimates not only adaptive kernels but also offsets, masks, and biases, enabling the network to gather information from a non-local neighborhood.
Revisiting Adaptive Convolutions for Video Frame Interpolation
- Computer Science · 2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
- 2021
This work shows, somewhat surprisingly, that it is possible to achieve near state-of-the-art results with an older, simpler approach, namely adaptive separable convolutions, through a subtle set of low-level improvements.
Training Weakly Supervised Video Frame Interpolation with Events
- Computer Science · 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
This work shows that event-based frame interpolation can be trained without the need for high frame-rate videos, via a novel weakly supervised framework that corrects image appearance by extracting complementary information from events and supplants motion dynamics modeling with attention mechanisms.
Splatting-based Synthesis for Video Frame Interpolation
- Computer Science · ArXiv
- 2022
A deep learning approach that solely relies on splatting to synthesize interpolated frames for video frame interpolation is proposed, which is not only much faster than similar approaches, especially for multi-frame interpolation, but can also yield new state-of-the-art results at high resolutions.
Beyond a Video Frame Interpolator: A Space Decoupled Learning Approach to Continuous Image Transition
- Computer Science · ArXiv
- 2022
This work rethinks the VFI problem and formulates it as a continuous image transition (CIT) task, whose key issue is to transition an image from one space to another continuously, and proposes space decoupled learning (SDL) as a general-purpose solution to the CIT problem.
Context-based video frame interpolation via depthwise over-parameterized convolution
- Computer Science · Journal of Electronic Imaging
- 2021
Experimental results demonstrate that the proposed context-based video frame interpolation method via depthwise over-parameterized convolution performs qualitatively and quantitatively better than state-of-the-art methods.
Video Restoration Framework and Its Meta-adaptations to Data-Poor Conditions
- Computer Science · ECCV
- 2022
This work proposes a generic architecture that is effective for any weather condition owing to its ability to extract robust feature maps without any domain-specific knowledge, and shows comprehensive results on video de-hazing and de-raining datasets, in addition to meta-learning-based adaptation results on night-time video restoration tasks.
Long-term Video Frame Interpolation via Feature Propagation
- Computer Science · 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2022
A propagation network (PNet) is proposed that extends classic feature-level forecasting with a novel motion-to-feature approach, allowing safe propagation from one input frame up to a reliable time step while using the other input as a reference when there is a large gap between the inputs.
References
Showing 1-10 of 49 references
PhaseNet for Video Frame Interpolation
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes a new approach, PhaseNet, that is designed to robustly handle challenging scenarios while also coping with larger motion, and shows that this is superior to the hand-crafted heuristics previously used in phase-based methods and compares favorably to recent deep learning based approaches for video frame interpolation on challenging datasets.
Channel Attention Is All You Need for Video Frame Interpolation
- Computer Science · AAAI
- 2020
A simple but effective deep neural network for video frame interpolation that is end-to-end trainable, free from any motion estimation component, and achieves outstanding performance compared to existing models that include a component for optical flow computation.
Video Frame Synthesis Using Deep Voxel Flow
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This work addresses the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation) or subsequent to them (extrapolation), by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, an approach called deep voxel flow.
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled.
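For context, the synthesis step in Super SloMo blends the two inputs warped to the target time t using learned visibility maps. Written roughly (the notation below is a reconstruction in the spirit of the paper, not copied from it):

```latex
\hat{I}_t = \frac{1}{Z}\Big[(1-t)\,V_{t\leftarrow 0}\odot g\big(I_0, F_{t\rightarrow 0}\big)
          + t\,V_{t\leftarrow 1}\odot g\big(I_1, F_{t\rightarrow 1}\big)\Big],
\qquad Z = (1-t)\,V_{t\leftarrow 0} + t\,V_{t\leftarrow 1}
```

where $g(\cdot,\cdot)$ denotes backward warping by the estimated intermediate flows and the visibility maps $V$ account for occlusion, so pixels visible in only one input dominate the blend.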
Video Frame Interpolation via Adaptive Convolution
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
This paper presents a robust video frame interpolation method that considers pixel synthesis for the interpolated frame as local convolution over two input frames and employs a deep fully convolutional neural network to estimate a spatially-adaptive convolution kernel for each pixel.
Unsupervised Video Interpolation Using Cycle Consistency
- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
This work proposes unsupervised techniques to synthesize high frame rate videos directly from low frame rate films using cycle consistency, and introduces a pseudo supervised loss term that enforces the interpolated frames to be consistent with predictions of a pre-trained interpolation model.
Deep Video Frame Interpolation Using Cyclic Frame Generation
- Computer Science · AAAI
- 2019
A new loss term, the cycle consistency loss, which can better utilize the training data to not only enhance the interpolation results, but also maintain the performance better with less training data is introduced.
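The cycle consistency idea behind the two works above can be sketched as follows: interpolate midpoints from adjacent real frame pairs, interpolate again between those predictions, and require the result to land back on the real middle frame. This is only a schematic reading; `model` is a generic two-frame midpoint interpolator, not either paper's network.

```python
# Schematic cycle consistency loss for frame interpolation.
# `model(a, b)` is assumed to return the temporal midpoint of a and b.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(model, i0, i1, i2):
    """i0, i1, i2 are three consecutive real frames."""
    mid_01 = model(i0, i1)            # predicted frame at t = 0.5
    mid_12 = model(i1, i2)            # predicted frame at t = 1.5
    i1_cycle = model(mid_01, mid_12)  # interpolating the two predictions
                                      # should reconstruct the real i1
    return F.l1_loss(i1_cycle, i1)
```

Because the supervision target is a frame that already exists in the input sequence, the loss can be used with, or in place of, ground-truth intermediate frames.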
Quadratic video interpolation
- Computer Science · NeurIPS
- 2019
This work proposes a quadratic video interpolation method which exploits the acceleration information in videos, allows prediction with curvilinear trajectory and variable velocity, and generates more accurate interpolation results.
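Concretely, the quadratic (constant-acceleration) model estimates per-pixel velocity and acceleration from the optical flows toward the two neighboring frames, giving a displacement to an arbitrary time $t \in (0, 1)$ of roughly the following form (the symbols here are a reconstruction in the spirit of the paper):

```latex
f_{0 \rightarrow t} \;=\; \frac{f_{0 \rightarrow 1} + f_{0 \rightarrow -1}}{2}\, t^{2}
                    \;+\; \frac{f_{0 \rightarrow 1} - f_{0 \rightarrow -1}}{2}\, t
```

This reduces to the usual linear model $f_{0 \rightarrow t} = t\, f_{0 \rightarrow 1}$ when the forward and backward flows cancel, i.e. when the acceleration term $f_{0 \rightarrow 1} + f_{0 \rightarrow -1}$ vanishes.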
Video Frame Interpolation via Adaptive Separable Convolution
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This paper develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously, which allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames.
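As a toy illustration of the separable-kernel idea: for each output pixel the network predicts a vertical and a horizontal 1D kernel per input frame, and their outer product plays the role of a full n×n 2D kernel applied to the local patch. The sketch below shows only the synthesis step for a single pixel; the kernel-predicting CNN is omitted and all names are illustrative, not the paper's code.

```python
# Toy synthesis step of adaptive *separable* convolution for one output
# pixel: each input frame contributes sum((k_v outer k_h) * patch).
import numpy as np

def synthesize_pixel(patch0, patch1, kv0, kh0, kv1, kh1):
    """patch0, patch1: (n, n) neighborhoods around the pixel in the two
    input frames; kv*, kh*: length-n 1D kernels predicted for that pixel."""
    k0 = np.outer(kv0, kh0)           # (n, n) kernel for frame 0
    k1 = np.outer(kv1, kh1)           # (n, n) kernel for frame 1
    return float(np.sum(k0 * patch0) + np.sum(k1 * patch1))

# Example with 5x5 patches and uniform kernels: each 2D kernel sums to 0.5,
# so the output is roughly the average of the two patch means.
n = 5
p0, p1 = np.random.rand(n, n), np.random.rand(n, n)
k_v = np.full(n, 1.0 / n)
k_h = np.full(n, 0.5 / n)
print(synthesize_pixel(p0, p1, k_v, k_h, k_v, k_h))
```

Predicting two length-n kernels instead of one n×n kernel per pixel is what keeps the per-pixel parameter count linear rather than quadratic in the kernel size.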
Context-Aware Synthesis for Video Frame Interpolation
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information, uses them to interpolate a high-quality intermediate frame, and outperforms representative state-of-the-art approaches.