Learning for Video Super-Resolution through HR Optical Flow Estimation

@article{Wang2018LearningFV,
  title={Learning for Video Super-Resolution through HR Optical Flow Estimation},
  author={Longguang Wang and Yulan Guo and Zaiping Lin and Xinpu Deng and Wei An},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.08573}
}
Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. Traditional video SR methods have demonstrated that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based…
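As a rough, illustrative sketch of the flow-then-warp idea described in the abstract (not the authors' architecture; the `warp` helper and its use are assumptions), backward warping a frame with an estimated flow can be written with PyTorch's `grid_sample`:

```python
import torch
import torch.nn.functional as F

def warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp `frame` (N, C, H, W) by `flow` (N, 2, H, W), in pixels."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    # Shift the sampling grid by the flow, then normalize to [-1, 1].
    x = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    y = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((x, y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

# With an HR flow between frames t and t-1, the previous HR estimate can be
# aligned to frame t in HR space before fusion -- the key difference from
# methods that only estimate and warp with LR flows:
# aligned_prev = warp(hr_prev, hr_flow)
```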
Deep Video Super-Resolution Using HR Optical Flow Estimation
This paper proposes an end-to-end video SR network that super-resolves both optical flows and images, and shows that the network achieves state-of-the-art performance.
Video Super-Resolution using Multi-scale Pyramid 3D Convolutional Networks
The proposed MP3D model outperforms state-of-the-art video SR methods in terms of PSNR/SSIM, visual quality, and temporal consistency.
FISR: Deep Joint Frame Interpolation and Super-Resolution with A Multi-scale Temporal Loss
A novel training scheme with a multi-scale temporal loss is proposed and analyzed in depth with extensive experiments; the loss imposes temporal regularization on the input video sequence and can be applied to any general video-related task.
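One plausible form of such a loss (an assumption for illustration; the paper's exact multi-scale temporal loss may differ) penalizes mismatched frame-to-frame changes between prediction and ground truth at several spatial scales:

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(pred, target, scales=(1, 2, 4)):
    """Match frame-to-frame changes of `pred` to those of `target` at
    several spatial scales. pred/target: (N, T, C, H, W); H and W are
    assumed divisible by every scale."""
    n, t, c, h, w = pred.shape
    loss = pred.new_zeros(())
    for s in scales:
        if s == 1:
            p, g = pred, target
        else:
            p = F.avg_pool2d(pred.reshape(n * t, c, h, w), s).reshape(n, t, c, h // s, w // s)
            g = F.avg_pool2d(target.reshape(n * t, c, h, w), s).reshape(n, t, c, h // s, w // s)
        # Penalizing temporal differences acts as a regularizer on flicker.
        loss = loss + F.l1_loss(p[:, 1:] - p[:, :-1], g[:, 1:] - g[:, :-1])
    return loss / len(scales)
```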
Large Motion Video Super-Resolution with Dual Subnet and Multi-Stage Communicated Upsampling
Proposes a deep neural network with a dual subnet and multi-stage communicated upsampling (DSMC) for super-resolution of videos with large motion, along with a new module, a U-shaped residual dense network with 3D convolution (U3D-RDN), for fine implicit motion estimation and motion compensation as well as coarse spatial feature extraction.
Video Super-Resolution with Frame-Wise Dynamic Fusion and Self-Calibrated Deformable Alignment
  • Wenjie Xu, Huihui Song, Yutong Jin, Fei Yan
  • Computer Science
  • Neural Processing Letters
  • 2021
A generic frame-wise dynamic fusion module (DFM) is proposed to flexibly aggregate element-wise temporal information into the reference frame, frame by frame.
Video Super-Resolution Using Wave-Shape Network
A novel architecture, the wave-shape network, is proposed; it treats each frame as a separate source of information and fuses temporal frames through a multi-scale structure to capture more complete structure and context information for HR image quality improvement.
Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution
A one-stage space-time video super-resolution framework that directly reconstructs an HR slow-motion video sequence from an input LR, low-frame-rate (LFR) video, temporally interpolating features of the missing LR frames with a feature temporal interpolation module that captures local temporal contexts.
Video Super-Resolution with Long-Term Self-Exemplars
This work proposes a video super-resolution method with long-term cross-scale aggregation that leverages similar patches (self-exemplars) across distant frames, and shows that it outperforms state-of-the-art methods.
Multi-Stage Feature Fusion Network for Video Super-Resolution
This paper proposes an end-to-end Multi-Stage Feature Fusion Network that fuses the temporally aligned features of the supporting frames and the spatial features of the original reference frame at different stages of a feed-forward neural network architecture.
Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution
A one-stage space-time video super-resolution framework is proposed that directly synthesizes an HR slow-motion video from an LFR, LR video, using a deformable ConvLSTM to simultaneously align and aggregate temporal information and better leverage global temporal contexts.

References

Showing 1–10 of 44 references.
End-to-End Learning of Video Super-Resolution with Motion Compensation
This paper provides an end-to-end video super-resolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture, and shows that with this configuration video super-resolution benefits from optical flow, obtaining state-of-the-art results on popular test sets.
Frame-Recurrent Video Super-Resolution
This work proposes an end-to-end trainable frame-recurrent video super-resolution framework that uses the previously inferred HR estimate to super-resolve the subsequent frame, and demonstrates that the proposed framework significantly outperforms the current state of the art.
Learning Temporal Dynamics for Video Super-Resolution: A Deep Learning Approach
A temporal adaptive neural network that adaptively determines the optimal scale of temporal dependence is proposed and shown to achieve state-of-the-art SR results in terms of both spatial consistency and temporal coherence on public video datasets.
Robust Video Super-Resolution with Learned Temporal Dynamics
This work proposes a temporal adaptive neural network that adaptively determines the optimal scale of temporal dependency and reduces the complexity of motion between neighboring frames using a spatial alignment network that is much more robust and efficient than competing alignment methods.
Video Super-Resolution via Deep Draft-Ensemble Learning
This work proposes a new direction for fast video super-resolution via an SR draft ensemble, defined as the set of high-resolution patch candidates before final image deconvolution, and combines the SR drafts through the nonlinear process of a deep convolutional neural network (CNN).
Detail-Revealing Deep Video Super-Resolution
This paper shows that proper frame alignment and motion compensation are crucial for high-quality results and proposes a "sub-pixel motion compensation" (SPMC) layer in a CNN framework that generates visually and quantitatively high-quality results, superior to the current state of the art, without the need for parameter tuning.
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
This paper presents the first convolutional neural network capable of real-time SR of 1080p videos on a single K2 GPU, introducing an efficient sub-pixel convolution layer that learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation
A novel joint motion compensation and video super-resolution algorithm is proposed that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable.
Video Super-Resolution With Convolutional Neural Networks
This paper proposes a CNN that is trained on both the spatial and the temporal dimensions of videos to enhance their spatial resolution, and shows that by pretraining the model on images, a relatively small video database is sufficient to train the model to match and improve upon the current state of the art.
Residual Dense Network for Image Super-Resolution
This paper proposes the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers, and uses global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way.