Corpus ID: 233714870

COMISR: Compression-Informed Video Super-Resolution

Authors: Yinxiao Li, Pengchong Jin, Feng Yang, Ce Liu, Ming-Hsuan Yang, Peyman Milanfar
Most video super-resolution methods focus on restoring high-resolution video frames from low-resolution videos without taking compression into account. However, most videos on the web or mobile devices are compressed, and the compression can be severe when bandwidth is limited. In this paper, we propose a new compression-informed video super-resolution model to restore high-resolution content without introducing artifacts caused by compression. The proposed model consists of three modules for… 


Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation
A novel end-to-end deep neural network is proposed that generates dynamic upsampling filters and a residual image, computed from the local spatio-temporal neighborhood of each pixel to avoid explicit motion compensation.
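The core idea of dynamic upsampling filters is that the network predicts a separate small filter for every HR sub-pixel position, applied to the LR neighborhood of each pixel. A minimal NumPy sketch of the filter-application step (the filters themselves would be predicted by a network; here they are simply passed in, and the function name and shapes are illustrative assumptions, not the paper's API):

```python
import numpy as np

def apply_dynamic_upsampling(lr, filters, scale=2, k=3):
    """Upsample an LR frame with per-pixel dynamic filters.

    lr      : (H, W) low-resolution frame
    filters : (H, W, scale*scale, k*k) one k*k filter per HR sub-pixel,
              predicted by a network in the actual method (hypothetical
              layout chosen for this sketch)
    """
    H, W = lr.shape
    pad = k // 2
    padded = np.pad(lr, pad, mode="edge")
    hr = np.zeros((H * scale, W * scale))
    for y in range(H):
        for x in range(W):
            # k*k LR neighborhood around (y, x), flattened
            patch = padded[y:y + k, x:x + k].reshape(-1)
            for s in range(scale * scale):
                dy, dx = divmod(s, scale)
                # each HR sub-pixel gets its own predicted filter
                hr[y * scale + dy, x * scale + dx] = patch @ filters[y, x, s]
    return hr
```

With delta filters (weight 1 at the patch center) this reduces to nearest-neighbor upsampling, which is a handy sanity check.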
Non-Local ConvLSTM for Video Compression Artifact Reduction
An approximate non-local strategy is introduced in NL-ConvLSTM to capture global motion patterns and trace the spatiotemporal dependency in a video sequence to recover high-quality videos from low-quality compressed videos.
Recurrent Back-Projection Network for Video Super-Resolution
We propose a novel architecture for video super-resolution that integrates spatial and temporal contexts from continuous video frames using a recurrent encoder-decoder module.
Frame-Recurrent Video Super-Resolution
This work proposes an end-to-end trainable frame-recurrent video super-resolution framework that uses the previously inferred HR estimate to super-resolve the subsequent frame and demonstrates that the proposed framework is able to significantly outperform the current state of the art.
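The frame-recurrent principle is that the previously produced HR estimate is fed back as input when super-resolving the next frame. A toy sketch of that loop, assuming a stand-in nearest-neighbor `upsample` in place of the learned network and a simple blend in place of learned fusion (the real method would also warp the previous estimate with estimated motion):

```python
import numpy as np

def upsample(frame, scale=2):
    # stand-in for a learned super-resolution step (nearest-neighbor)
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def frame_recurrent_vsr(lr_frames, scale=2, blend=0.5):
    """Frame-recurrent loop: each HR output combines the upsampled
    current LR frame with the previous HR estimate."""
    prev_hr = None
    outputs = []
    for lr in lr_frames:
        cur = upsample(lr, scale)
        # first frame has no history; later frames reuse the prior estimate
        hr = cur if prev_hr is None else blend * cur + (1 - blend) * prev_hr
        outputs.append(hr)
        prev_hr = hr
    return outputs
```

The recurrence is what lets detail propagate across frames without re-reading a sliding window of inputs at every step.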
Video Super-Resolution with Recurrent Structure-Detail Network
A novel recurrent video super-resolution method that is both effective and efficient in exploiting previous frames to super-resolve the current frame, dividing the input into structure and detail components which are fed to a recurrent unit composed of several proposed two-stream structure-detail blocks.
Spatio-Temporal Deformable Convolution for Compressed Video Quality Enhancement
This paper proposes a fast yet effective method for compressed video quality enhancement by incorporating a novel Spatio-Temporal Deformable Fusion (STDF) scheme to aggregate temporal information, achieving state-of-the-art performance in compressed video quality enhancement in terms of both accuracy and efficiency.
Progressive Fusion Video Super-Resolution Network via Exploiting Non-Local Spatio-Temporal Correlations
This study proposes a novel progressive fusion network for video SR, which is designed to make better use of spatio-temporal information and is proved to be more efficient and effective than the existing direct fusion, slow fusion or 3D convolution strategies.
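Non-local aggregation, the mechanism several of these methods exploit, replaces each position's feature with a similarity-weighted average over all positions. A minimal sketch over flattened feature positions (function name and shapes are assumptions for illustration, not any paper's actual interface):

```python
import numpy as np

def non_local_aggregate(feats):
    """Non-local aggregation: each position becomes a softmax-weighted
    average of all positions, weighted by feature similarity.

    feats : (N, C) N spatial/temporal positions with C-dim features
    """
    sim = feats @ feats.T                      # pairwise dot-product similarity
    sim = sim - sim.max(axis=1, keepdims=True) # numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)          # softmax over all positions
    return w @ feats
```

Because the weights span every position, the operation can relate distant patches that a local convolution would never connect, which is the appeal for capturing global motion and self-similarity.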
Space-Time-Aware Multi-Resolution Video Enhancement
The components of the model that generate latent low- and high-resolution representations during ST-SR can be used to finetune a specialized mechanism for just spatial or just temporal super-resolution.
MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution
This work proposes a temporal multi-correspondence aggregation strategy to leverage similar patches across frames, and a cross-scale non-local correspondence aggregation scheme to explore the self-similarity of images across scales.
Deep Non-Local Kalman Network for Video Compression Artifact Reduction
This work proposes a deep non-local Kalman network for compression artifact reduction: video restoration is modeled as a Kalman filtering procedure, so decoded frames can be restored with the proposed deep Kalman model.