Corpus ID: 52306237

Implementing Adaptive Separable Convolution for Video Frame Interpolation

@article{Kartasev2018ImplementingAS,
  title={Implementing Adaptive Separable Convolution for Video Frame Interpolation},
  author={Mart Kartasev and Carlo Rapisarda and Dominik Fay},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.07759}
}
As deep neural networks become more popular, much attention is being devoted to computer vision problems that used to be solved with more traditional approaches. Video frame interpolation is one such challenge that has seen new research involving various deep learning techniques. In this paper, we replicate the work of Niklaus et al. on Adaptive Separable Convolution, which claims high-quality results on the video frame interpolation task. We apply the same network structure… 
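For context, the core synthesis step of the method being replicated (the cited Niklaus et al. work on adaptive separable convolution, also listed in the references below) estimates, for every output pixel, a pair of 1D kernels per input frame; the outer product of each vertical/horizontal pair forms a 2D kernel that is applied to the patch around that pixel, and the two responses are summed. The following is a minimal, unoptimized NumPy sketch of that synthesis step for a single-channel image; the function name and the looped evaluation are ours for illustration only (practical implementations use a dedicated GPU kernel), and the kernel length n stands in for the 1D kernel size used in the paper.

import numpy as np

def sepconv_synthesize(frame1, frame2, kv1, kh1, kv2, kh2):
    """Synthesize an intermediate frame with per-pixel separable kernels.

    frame1, frame2 : (H, W) single-channel input frames.
    kv1, kh1       : (H, W, n) vertical / horizontal 1D kernels for frame1.
    kv2, kh2       : (H, W, n) vertical / horizontal 1D kernels for frame2.
    """
    H, W, n = kv1.shape
    r = n // 2
    # Pad both frames so an n x n patch exists around every output pixel.
    f1 = np.pad(frame1, r, mode="edge")
    f2 = np.pad(frame2, r, mode="edge")
    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            p1 = f1[y:y + n, x:x + n]            # patch around (y, x) in frame 1
            p2 = f2[y:y + n, x:x + n]            # patch around (y, x) in frame 2
            k1 = np.outer(kv1[y, x], kh1[y, x])  # 2D kernel = outer product of 1D pair
            k2 = np.outer(kv2[y, x], kh2[y, x])
            out[y, x] = np.sum(k1 * p1) + np.sum(k2 * p2)
    return out

In the full method, a neural network predicts the four kernel maps (kv1, kh1, kv2, kh2) from the two input frames; the synthesis above is differentiable with respect to those predictions, which is what allows end-to-end training.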

References

Showing 1-10 of 17 references
PhaseNet for Video Frame Interpolation
TLDR: This work proposes a new approach, PhaseNet, designed to robustly handle challenging scenarios while also coping with larger motion; it is shown to be superior to the hand-crafted heuristics previously used in phase-based methods and compares favorably to recent deep-learning-based approaches for video frame interpolation on challenging datasets.
Video Frame Synthesis Using Deep Voxel Flow
TLDR: This work addresses the problem of synthesizing new video frames in an existing video, either in between existing frames (interpolation) or subsequent to them (extrapolation), by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, an approach called deep voxel flow (a simplified sketch of this flow-based synthesis appears after the reference list).
Video Frame Interpolation via Adaptive Convolution
TLDR: This paper presents a robust video frame interpolation method that considers pixel synthesis for the interpolated frame as local convolution over two input frames and employs a deep fully convolutional neural network to estimate a spatially-adaptive convolution kernel for each pixel.
Video Frame Interpolation via Adaptive Separable Convolution
TLDR: This paper develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously, which allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames.
Frame Interpolation with Multi-Scale Deep Loss Functions and Generative Adversarial Networks
TLDR: This work proposes a multi-scale generative adversarial network for frame interpolation (FIGAN), jointly supervised at different levels with a perceptual loss function consisting of an adversarial loss and two content losses, to improve the quality of synthesised intermediate video frames.
Learning Image Matching by Simply Watching Video
TLDR: This work presents an unsupervised-learning-based approach to the ubiquitous computer vision problem of image matching that, surprisingly, achieves performance comparable to traditional empirically designed methods.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation
TLDR: This work proposes an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled.
Deep Residual Learning for Image Recognition
TLDR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
The 2017 DAVIS Challenge on Video Object Segmentation
TLDR: The scope of the benchmark, the main characteristics of the dataset, the evaluation metrics of the competition, and a detailed analysis of the results of the participants in the challenge are described.
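
As noted in the deep voxel flow entry above, flow-based interpolation synthesizes the intermediate frame by sampling pixel values from the two input frames along a predicted displacement field and blending the samples. The sketch below is a deliberately simplified NumPy illustration of that idea; the symmetric forward/backward displacements and the separate per-pixel blend mask are our own simplifying assumptions, not the paper's exact trilinear voxel-flow formulation.

import numpy as np

def bilinear_sample(img, ys, xs):
    # Sample a single-channel image at fractional (ys, xs) coordinates.
    H, W = img.shape
    ys = np.clip(ys, 0.0, H - 1.0)
    xs = np.clip(xs, 0.0, W - 1.0)
    y0 = np.minimum(np.floor(ys).astype(int), H - 2)
    x0 = np.minimum(np.floor(xs).astype(int), W - 2)
    wy, wx = ys - y0, xs - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] +
            (1 - wy) * wx * img[y0, x0 + 1] +
            wy * (1 - wx) * img[y0 + 1, x0] +
            wy * wx * img[y0 + 1, x0 + 1])

def flow_interpolate(frame0, frame1, flow, blend):
    # flow  : (H, W, 2) per-pixel (dy, dx) displacements.
    # blend : (H, W) per-pixel blend weights in [0, 1].
    H, W = frame0.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Warp the earlier frame backwards and the later frame forwards along
    # the same displacement field, then blend the two samples per pixel.
    s0 = bilinear_sample(frame0, ys - flow[..., 0], xs - flow[..., 1])
    s1 = bilinear_sample(frame1, ys + flow[..., 0], xs + flow[..., 1])
    return (1 - blend) * s0 + blend * s1

Compared with the kernel-based synthesis sketched under the abstract, this formulation moves pixels explicitly rather than learning per-pixel resampling kernels, which is the main design distinction between the flow-based and convolution-based references listed above.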