Real-Time Super-Resolution System of 4K-Video Based on Deep Learning

  • Yanpeng Cao, Chengcheng Wang, Changjun Song, Yongming Tang, He Li
  • Published 1 July 2021
  • Computer Science
  • 2021 IEEE 32nd International Conference on Application-specific Systems, Architectures and Processors (ASAP)
Video super-resolution (VSR) technology excels at reconstructing low-quality video while avoiding the unpleasant blur caused by interpolation-based algorithms. However, its vast computational complexity and memory occupancy hamper deployability on edge devices and runtime inference in real-life applications, especially for large-scale VSR tasks. This paper explores the possibility of a real-time VSR system and designs an efficient and generic VSR network, termed EGVSR. The proposed EGVSR is based on… 

Accelerating the Training of Video Super-Resolution Models
This work shows that it is possible to gradually train video models from small to large spatial/temporal sizes.


Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation
Proposes a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, both computed from the local spatio-temporal neighborhood of each pixel, thereby avoiding explicit motion compensation.
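The core of the dynamic-upsampling idea can be sketched as follows. This is a toy illustration, not the paper's implementation: the function name and array layout are assumptions, and the per-pixel filters, which the network would predict from the spatio-temporal neighborhood, are simply passed in as an argument.

```python
import numpy as np

def apply_dynamic_filters(lr, filters, r):
    """Toy sketch of dynamic upsampling.

    lr      : (H, W) low-resolution frame
    filters : (H, W, r, r, k, k) — one k x k kernel per LR pixel and per
              HR sub-position (in the paper, predicted by the network)
    r       : upscaling factor
    """
    h, w = lr.shape
    k = filters.shape[-1]
    p = k // 2
    padded = np.pad(lr, p, mode="edge")
    hr = np.zeros((h * r, w * r))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k]  # local LR neighborhood
            for i in range(r):
                for j in range(r):
                    # each HR pixel is a filtered sum of the LR patch
                    hr[y * r + i, x * r + j] = (filters[y, x, i, j] * patch).sum()
    return hr
```

If every kernel is a delta at its center, this degenerates to nearest-neighbor upscaling; the network's value lies in predicting content-adaptive kernels instead.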
FAST: A Framework to Accelerate Super-Resolution Processing on Compressed Videos
  • Zhengdong Zhang, V. Sze
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2017
FAST (Free Adaptive Super-resolution via Transfer), a framework to accelerate any SR algorithm applied to compressed videos, exploits the temporal correlation between adjacent frames such that SR is only applied to a subset of frames; SR pixels are then transferred to the other frames.
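The keyframe-plus-transfer structure of FAST can be illustrated with a minimal sketch. Note the real FAST transfers pixels along the motion vectors already present in the compressed bitstream; the residual trick below is a simplification, and `sr_model` (nearest-neighbor upscaling here) is a hypothetical stand-in for any expensive single-image SR network.

```python
import numpy as np

def sr_model(frame, r):
    # stand-in for an expensive single-image SR network
    # (nearest-neighbor upscaling, just to make the sketch runnable)
    return frame.repeat(r, axis=0).repeat(r, axis=1)

def fast_style_vsr(frames, r=2, keyframe_interval=4):
    """Run full SR only on keyframes; for in-between frames, reuse the
    keyframe's SR output plus an upscaled LR residual (a simplified
    approximation of FAST's motion-vector-based pixel transfer)."""
    out, key_lr, key_hr = [], None, None
    for i, f in enumerate(frames):
        if i % keyframe_interval == 0:
            key_lr, key_hr = f, sr_model(f, r)            # expensive path
            out.append(key_hr)
        else:
            out.append(key_hr + sr_model(f - key_lr, r))  # cheap transfer
    return out
```

The speedup comes from running the SR network on only one frame in every `keyframe_interval`, exploiting the temporal correlation the summary describes.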
A Real-Time Convolutional Neural Network for Super-Resolution on FPGA With Applications to 4K UHD 60 fps Video Services
This paper is the first to implement real-time CNN-based SR hardware that upscales 2K full high-definition video to 4K ultra-high-definition video at 60 frames per second (fps), and it proposes a compression method that stores intermediate feature-map data efficiently, reducing the number of line memories used in hardware.
Accelerating the Super-Resolution Convolutional Neural Network
This paper aims at accelerating the original SRCNN by proposing a compact hourglass-shaped CNN structure for faster and better SR, and presents parameter settings that achieve real-time performance on a generic CPU while still maintaining good quality.
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
This paper presents the first convolutional neural network capable of real-time SR of 1080p videos on a single K2 GPU and introduces an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
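The sub-pixel convolution layer ends with a periodic-shuffle step that rearranges an LR feature map with r² times the channels into the HR image, so all convolutions run in LR space. A minimal sketch of that rearrangement alone (the learned convolutions preceding it are omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffle: rearrange (C*r^2, H, W) features into a
    (C, H*r, W*r) image, assembling the HR output from LR-space
    feature maps at negligible cost."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)    # split channel axis into sub-positions
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (c, h, r_row, w, r_col)
    return x.reshape(c, h * r, w * r)
```

Channel `c*r*r + i*r + j` of the input supplies HR pixel offset `(i, j)` inside each `r x r` block, which is why the upscaling filters can be learned as ordinary LR-space convolutions.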
Temporally Coherent GANs for Video Super-Resolution (TecoGAN)
This work proposes an adversarial training scheme for video super-resolution that yields temporally coherent solutions without sacrificing spatial detail, together with a first set of metrics to evaluate both the accuracy and the perceptual quality of the temporal evolution.
Video Super-Resolution via Deep Draft-Ensemble Learning
This work proposes a new direction for fast video super-resolution via a SR draft ensemble, which is defined as the set of high-resolution patch candidates before final image deconvolution, and combines SR drafts through the nonlinear process in a deep convolutional neural network (CNN).
Video Super-Resolution With Convolutional Neural Networks
This paper proposes a CNN that is trained on both the spatial and the temporal dimensions of videos to enhance their spatial resolution and shows that by using images to pretrain the model, a relatively small video database is sufficient for the training of the model to achieve and improve upon the current state-of-the-art.
Learning for Video Super-Resolution through HR Optical Flow Estimation
This paper proposes an end-to-end trainable video SR framework to super-resolve both images and optical flows and demonstrates that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance.
An Energy-Efficient FPGA-Based Deconvolutional Neural Networks Accelerator for Single Image Super-Resolution
A new methodology is proposed for optimizing the deconvolutional neural networks (DCNNs) used to upsample feature maps, along with a novel method to optimize CNN dataflow so that the SR algorithm can run at low power in display applications.