MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement

@article{Bao2021MEMCNetME,
  title={MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement},
  author={Wenbo Bao and Wei-Sheng Lai and Xiaoyun Zhang and Zhiyong Gao and Ming-Hsuan Yang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  volume={43},
  pages={933-948}
}
Motion estimation (ME) and motion compensation (MC) have been widely used in classical video frame interpolation systems over the past decades. [...] Key Method: A novel adaptive warping layer is developed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable, so both the flow and kernel estimation networks can be optimized jointly.
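The adaptive warping layer described above can be sketched as follows: for every output pixel, the optical flow displaces the sampling position in the source frame, and a learned per-pixel interpolation kernel blends an R x R neighborhood around that displaced position. This is a minimal, illustrative sketch assuming PyTorch-style tensors; the function name, tensor shapes, and the use of bilinear sampling for each kernel tap are assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def adaptive_warp(src, flow, kernel, R=4):
    """src: (B, C, H, W) source frame
    flow: (B, 2, H, W) optical flow (x, y displacements in pixels)
    kernel: (B, R*R, H, W) per-pixel interpolation weights (assumed normalized)
    """
    B, C, H, W = src.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(src.device)   # (2, H, W)

    out = torch.zeros_like(src)
    taps = [(dx, dy) for dy in range(R) for dx in range(R)]
    for k, (dx, dy) in enumerate(taps):
        # Offset of this kernel tap relative to the flow-displaced centre.
        off_x = dx - (R - 1) / 2.0
        off_y = dy - (R - 1) / 2.0
        px = base[0] + flow[:, 0] + off_x                        # (B, H, W)
        py = base[1] + flow[:, 1] + off_y
        # Normalize to [-1, 1] for grid_sample.
        gx = 2.0 * px / (W - 1) - 1.0
        gy = 2.0 * py / (H - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                     # (B, H, W, 2)
        sample = F.grid_sample(src, grid, align_corners=True)
        out = out + kernel[:, k:k + 1] * sample                  # weight this tap
    return out

Because every step (bilinear sampling and kernel weighting) is differentiable, gradients propagate to both the flow and the kernel estimates, which illustrates why the two estimation networks can be trained jointly.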
Video frame interpolation using deep cascaded network structure
TLDR
This study exhaustively analyzes the advantages of both motion estimation schemes and proposes a cascaded system to maximize the advantages of both.
Fine-Grained Motion Estimation for Video Frame Interpolation
TLDR
This article proposes a novel fine-grained motion estimation approach (FGME) for video frame interpolation that mainly contains two strategies: multi-scale coarse-to-fine optimization and multiple motion features estimation.
Depth-Aware Video Frame Interpolation
TLDR
A video frame interpolation method which explicitly detects the occlusion by exploring the depth information, and develops a depth-aware flow projection layer to synthesize intermediate flows that preferably sample closer objects than farther ones.
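The depth-aware flow projection can be illustrated with a toy sketch: flows from frame 0 are projected to the intermediate time t, and when several source pixels land on the same target location their contributions are weighted by inverse depth, so closer objects dominate. The array shapes, nearest-neighbour rounding, and function name below are simplifying assumptions for illustration only.

import numpy as np

def depth_aware_flow_projection(flow01, depth0, t=0.5):
    """flow01: (H, W, 2) flow from frame 0 to frame 1 (x, y in pixels)
    depth0: (H, W) depth of frame 0; smaller values are closer
    Returns flow_t0: (H, W, 2), an estimate of the flow from time t back to frame 0.
    """
    H, W, _ = flow01.shape
    num = np.zeros((H, W, 2))
    den = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # Location this pixel reaches at time t.
            tx = int(round(x + t * flow01[y, x, 0]))
            ty = int(round(y + t * flow01[y, x, 1]))
            if 0 <= tx < W and 0 <= ty < H:
                w = 1.0 / max(depth0[y, x], 1e-6)        # closer => larger weight
                num[ty, tx] += w * (-t * flow01[y, x])   # flow back toward frame 0
                den[ty, tx] += w
    den = den[..., None]
    return np.where(den > 0, num / np.maximum(den, 1e-6), 0.0)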
Channel Attention Is All You Need for Video Frame Interpolation
TLDR
A simple but effective deep neural network for video frame interpolation, which is end-to-end trainable and free of any motion estimation component, yet achieves outstanding performance compared to existing models that include an optical flow computation component.
Residual Learning of Video Frame Interpolation Using Convolutional LSTM
TLDR
This paper generates the intermediate frame directly from two consecutive frames without explicit motion estimation: it takes the average of the two frames and utilizes residual learning to learn the difference between that average and the ground-truth middle frame, performing favorably against other state-of-the-art frame interpolation methods.
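The residual formulation can be sketched in a few lines, assuming PyTorch-style tensors: the network predicts only the difference between the simple average of the two inputs and the true middle frame, and the output is the average plus that residual. The ConvLSTM backbone is abstracted as residual_net, which is an assumption for illustration.

import torch

def interpolate_with_residual(frame0, frame1, residual_net):
    """frame0, frame1: (B, C, H, W) consecutive frames.
    residual_net: assumed to map 2C input channels to C output channels."""
    avg = 0.5 * (frame0 + frame1)                              # blurry baseline
    residual = residual_net(torch.cat((frame0, frame1), dim=1))
    return avg + residual                                      # refined middle frame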
Flow-aware synthesis: A generic motion model for video frame interpolation
TLDR
Qualitative and quantitative experimental results show that the generic adaptive flow prediction module presented can produce high-quality results and outperforms the existing state-of-the-art methods on popular public datasets.
A Flexible Recurrent Residual Pyramid Network for Video Frame Interpolation
TLDR
Experimental results demonstrate that the RRPN is more flexible and efficient than current VFI networks while having fewer parameters, and shows superior performance for large motion cases.
Blurry Video Frame Interpolation
TLDR
A pyramid module is developed to cyclically synthesize clear intermediate frames, reducing motion blur and up-converting the frame rate simultaneously, and an inter-pyramid recurrent module is proposed to connect sequential models and exploit the temporal relationship.
AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation
TLDR
A new warping module named Adaptive Collaboration of Flows (AdaCoF), which estimates both kernel weights and offset vectors for each target pixel to synthesize the output frame and introduces dual-frame adversarial loss which is applicable only to video frame interpolation tasks.
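A rough sketch of this AdaCoF-style warping, assuming PyTorch tensors: each target pixel is a weighted sum over K sampling points whose positions come from learned per-pixel offset vectors rather than a fixed regular neighbourhood, which is how the module generalizes both kernel-based and flow-based synthesis. Names and shapes are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def adacof_warp(src, weights, offset_x, offset_y):
    """src: (B, C, H, W) input frame.
    weights: (B, K, H, W) per-pixel kernel weights.
    offset_x, offset_y: (B, K, H, W) per-pixel sampling offsets in pixels."""
    B, C, H, W = src.shape
    K = weights.shape[1]
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base_x = xs.float().to(src.device)
    base_y = ys.float().to(src.device)
    out = torch.zeros_like(src)
    for k in range(K):
        # Each sampling point has its own learned offset, so the kernel
        # support is free-form rather than a fixed square neighbourhood.
        gx = 2.0 * (base_x + offset_x[:, k]) / (W - 1) - 1.0
        gy = 2.0 * (base_y + offset_y[:, k]) / (H - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                   # (B, H, W, 2)
        sample = F.grid_sample(src, grid, align_corners=True)
        out = out + weights[:, k:k + 1] * sample
    return out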

References

Showing 1-10 of 63 references
Video Frame Interpolation via Adaptive Convolution
TLDR
This paper presents a robust video frame interpolation method that considers pixel synthesis for the interpolated frame as local convolution over two input frames and employs a deep fully convolutional neural network to estimate a spatially-adaptive convolution kernel for each pixel.
Motion-Compensated Frame Interpolation Using Bilateral Motion Estimation and Adaptive Overlapped Block Motion Compensation
TLDR
A new motion-compensated (MC) interpolation algorithm to enhance the temporal resolution of video sequences, which can overcome the limitations of conventional OBMC, such as over-smoothing and poor de-blocking.
Video Frame Interpolation via Adaptive Separable Convolution
TLDR
This paper develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously, which allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames.
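The separable-kernel synthesis can be sketched as an outer product: for each output pixel the network predicts a vertical and a horizontal 1D kernel, and their outer product forms the 2D kernel applied to the local patch. This sketch assumes PyTorch tensors and an odd kernel size R; in the paper the same operation is applied to both input frames and the results are summed.

import torch
import torch.nn.functional as F

def sepconv_synthesis(frame, k_vert, k_horiz):
    """frame: (B, C, H, W); k_vert, k_horiz: (B, R, H, W) per-pixel 1D kernels.
    R is assumed odd so that padding R // 2 preserves the spatial size."""
    B, C, H, W = frame.shape
    R = k_vert.shape[1]
    # Extract an R x R patch around every pixel: (B, C*R*R, H*W).
    patches = F.unfold(frame, kernel_size=R, padding=R // 2)
    patches = patches.reshape(B, C, R, R, H, W)
    # The outer product of the two 1D kernels gives a per-pixel 2D kernel.
    kernel2d = k_vert.reshape(B, 1, R, 1, H, W) * k_horiz.reshape(B, 1, 1, R, H, W)
    return (patches * kernel2d).sum(dim=(2, 3))                # (B, C, H, W)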
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation
TLDR
This work proposes an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled.
Overlapped block motion compensation: an estimation-theoretic approach
TLDR
This analysis establishes for the first time how (and why) OBMC can offer substantial reductions in prediction error as well, even with no change in the encoder's search and no extra side information.
Video Frame Synthesis Using Deep Voxel Flow
TLDR
This work addresses the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation), by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which is called deep voxel flow.
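The voxel-flow idea can be sketched as trilinear sampling across space and time: the network predicts a spatial displacement and a temporal blending weight per pixel, both existing frames are bilinearly sampled at symmetric displaced positions, and the samples are blended with that weight. The sign convention, names, and shapes below are illustrative assumptions.

import torch
import torch.nn.functional as F

def voxel_flow_synthesis(frame0, frame1, flow, blend):
    """frame0, frame1: (B, C, H, W); flow: (B, 2, H, W) displacements in pixels;
    blend: (B, 1, H, W) temporal blending weights in [0, 1]."""
    B, C, H, W = frame0.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base_x = xs.float().to(frame0.device)
    base_y = ys.float().to(frame0.device)

    def sample(frame, dx, dy):
        # Bilinear sampling at the displaced positions.
        gx = 2.0 * (base_x + dx) / (W - 1) - 1.0
        gy = 2.0 * (base_y + dy) / (H - 1) - 1.0
        return F.grid_sample(frame, torch.stack((gx, gy), dim=-1),
                             align_corners=True)

    # Sample both frames at symmetric displaced locations, then blend in time.
    s0 = sample(frame0, -flow[:, 0], -flow[:, 1])
    s1 = sample(frame1, flow[:, 0], flow[:, 1])
    return blend * s0 + (1.0 - blend) * s1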
Motion-Compensated Frame Rate Up-Conversion—Part II: New Algorithms for Frame Interpolation
TLDR
Two new algorithms for unidirectional motion-compensated frame interpolation are presented: irregular-grid expanded-block weighted motion compensation (IEWMC) and block-wise directional hole interpolation (BDHI).
Video Enhancement with Task-Oriented Flow
TLDR
Task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner, is proposed, which outperforms traditional optical flow on standard benchmarks as well as the Vimeo-90K dataset in three video processing tasks.
A low complexity motion compensated frame interpolation method
TLDR
Experimental results show that the proposed algorithm outperforms other methods in both PSNR and visual performance, while its complexity is also lower than other methods.
Deep multi-scale video prediction beyond mean square error
TLDR
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.