ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

@inproceedings{wang2018esrgan,
  title={ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks},
  author={Xintao Wang and Ke Yu and Shixiang Wu and Jinjin Gu and Yihao Liu and Chao Dong and Chen Change Loy and Yu Qiao and Xiaoou Tang},
  booktitle={ECCV Workshops},
  year={2018}
}
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN — network architecture, adversarial loss, and perceptual loss — and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the…
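The architectural change the abstract alludes to is a densely connected block whose output is scaled down before being added back to the input skip connection. A minimal NumPy sketch of those two ideas — dense connectivity and residual scaling — is below; plain matrix multiplies stand in for the real convolutions, and the function and argument names are illustrative, not from the paper's code:

```python
import numpy as np

def dense_block(x, weights, beta=0.2):
    """Sketch of a dense block with residual scaling.

    x       : feature map of shape (channels, positions)
    weights : list of matrices; layer i sees the concatenation of x and
              all previous layer outputs (dense connectivity)
    beta    : residual scaling factor that damps the block output
              before the skip connection is added
    """
    feats = [x]
    for w in weights:
        # each layer takes the concatenation of all preceding features
        inp = np.concatenate(feats, axis=0)
        out = np.maximum(w @ inp, 0.0)  # linear map + ReLU as a stand-in
        feats.append(out)
    # residual scaling: add a damped version of the block output to the input
    return x + beta * feats[-1]
```

With zero weights the block reduces to the identity, which is the usual sanity check for a residual design.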
Improved face super-resolution generative adversarial networks
This paper employs dense convolutional network blocks (dense blocks), which connect each layer to every other layer in a feed-forward fashion, as the core of the very deep generator network G.
Generative Adversarial Network-Based Super-Resolution Considering Quantitative and Perceptual Quality
This paper improves the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) algorithm in three ways: adding a shallow network structure, adding a dual attention mechanism in the generator and the discriminator, and optimizing the perceptual loss by adding second-order covariance normalization at the end of the feature extractor.
Spatial Transformer Generative Adversarial Network for Robust Image Super-Resolution
This paper proposes a novel robust super-resolution GAN (RSR-GAN) which can simultaneously perform the geometric transformation and recover finer texture details, and introduces an additional DCT loss term into the existing loss function.
Advanced Generative Adversarial Network Based on Dense Connection For Single Image Super Resolution
  • Sheng Chen, Sumei Li, Chengcheng Zhu
  • 2019
The generator network of the model is based on a dense residual structure, and residual-in-residual dense connections are used to enable fast and accurate learning of high-frequency image features.
Hierarchical Generative Adversarial Networks for Single Image Super-Resolution
This work proposes a hierarchical feature extraction module (HFEM) to extract the features in multiple scales, which helps concentrate on both local textures and global semantics, and introduces a hierarchical guided reconstruction module (HGRM) to reconstruct more natural structural textures in SR images via intermediate supervisions in a progressive manner.
Learning Structural Coherence via Generative Adversarial Network for Single Image Super-Resolution
Experimental results show that the proposed method outperforms state-of-the-art perceptual-driven SR methods in perception index (PI), and obtains more geometrically consistent and visually pleasing textures in natural image restoration.
ESRGAN+ : Further Improving Enhanced Super-Resolution Generative Adversarial Network
A network architecture with a novel basic block to replace the one used by the original ESRGAN is designed and noise inputs to the generator network are introduced in order to exploit stochastic variation.
RBDN: Residual Bottleneck Dense Network for Image Super-Resolution
To demonstrate the superiority of the proposed RBDN model, a comprehensive and objective evaluation is conducted using Peak Signal-to-Noise Ratio, structural similarity, learned perceptual image patch similarity, and other metrics on three test sets: Set5, Set14, and BSD100.
A-ESRGAN: Training Real-World Blind Super-Resolution with Attention U-Net Discriminators
This is the first work to introduce an attention U-Net structure as the discriminator of a GAN to solve blind SR problems, and it presents state-of-the-art performance on the no-reference natural image quality evaluator (NIQE) metric.
Image Super-Resolution Using Complex Dense Block on Generative Adversarial Networks
This paper employs a generative adversarial network (GAN) and a new perceptual loss function for photo-realistic single image super-resolution (SISR) and proposes a new dense block which uses complex connections between each layer to build a more powerful generator.


Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented: to the authors' knowledge, the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, together with a perceptual loss function which consists of an adversarial loss and a content loss.
Unsupervised Image Super-Resolution Using Cycle-in-Cycle Generative Adversarial Networks
This work proposes a Cycle-in-Cycle network structure with generative adversarial networks (GAN) as the basic component to tackle the single image super-resolution problem in a more general case that the low-/high-resolution pairs and the down-sampling process are unavailable.
Maintaining Natural Image Statistics with the Contextual Loss
This paper looks explicitly at the distribution of features in an image and trains the network to generate images with natural feature distributions, which reduces by orders of magnitude the number of images required for training and achieves state-of-the-art results on both single-image super-resolution and high-resolution surface normal estimation.
EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis
This work proposes a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixelaccurate reproduction of ground truth images during training to achieve a significant boost in image quality at high magnification ratios.
Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform
It is shown that it is possible to recover textures faithful to semantic classes in a single network conditioned on semantic segmentation probability maps through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation.
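The core of the SFT idea is a spatial affine transform: the conditioning signal (e.g. segmentation probability maps) is mapped to per-position scale and shift parameters that modulate the feature map. A minimal NumPy sketch, with 1x1 linear maps standing in for the paper's small condition networks (all names here are illustrative):

```python
import numpy as np

def sft_layer(features, condition, w_gamma, w_beta):
    """Sketch of a Spatial Feature Transform (SFT) layer.

    features  : feature map of shape (c, h, w)
    condition : conditioning maps of shape (k, h, w), e.g. per-pixel
                segmentation probabilities
    w_gamma, w_beta : (c, k) linear maps producing the affine parameters
    """
    c, h, w = features.shape
    k = condition.shape[0]
    cond = condition.reshape(k, h * w)
    gamma = (w_gamma @ cond).reshape(c, h, w)  # per-position scale
    beta = (w_beta @ cond).reshape(c, h, w)    # per-position shift
    # spatial-wise affine modulation of the features
    return gamma * features + beta
```

Because the modulation is learned from the condition, different semantic regions of the same image can receive different texture statistics, which is the point of the paper.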
Enhanced Deep Residual Networks for Single Image Super-Resolution
This paper develops an enhanced deep super-resolution network (EDSR) with performance exceeding that of current state-of-the-art SR methods, and proposes a new multi-scale deep super-resolution system (MDSR) and training method which can reconstruct high-resolution images of different upscaling factors in a single model.
Image Super-Resolution Using Deep Convolutional Networks
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one.
Image Super-Resolution Using Very Deep Residual Channel Attention Networks
This work proposes a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections, and proposes a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels.
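The channel attention mechanism described above can be sketched in a few lines: global average pooling produces one descriptor per channel, a two-layer bottleneck models interdependencies among channels, and a sigmoid gate rescales each channel. A NumPy sketch under those assumptions (plain matrices replace the paper's 1x1 convolutions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(features, w_down, w_up):
    """Sketch of channel attention for a feature map of shape (c, h, w).

    w_down : (c // r, c) squeeze matrix (r is the reduction ratio)
    w_up   : (c, c // r) excitation matrix
    """
    desc = features.mean(axis=(1, 2))         # (c,) global average pooling
    hidden = np.maximum(w_down @ desc, 0.0)   # bottleneck + ReLU
    gate = sigmoid(w_up @ hidden)             # (c,) per-channel weights in (0, 1)
    # rescale each channel by its learned importance
    return features * gate[:, None, None]
```

The gate lets the network emphasize informative channels (e.g. those carrying high-frequency detail) and suppress redundant ones, which is the adaptive rescaling the abstract refers to.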
Spectral Normalization for Generative Adversarial Networks
This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator, and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
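Spectral normalization divides a weight matrix by its largest singular value, estimated cheaply by power iteration, so that the normalized matrix has spectral norm close to 1. A self-contained NumPy sketch (in practice one power-iteration step per training step is reused across updates; many steps are used here only to make the estimate accurate):

```python
import numpy as np

def spectral_normalize(w, n_iter=100):
    """Divide w by an estimate of its largest singular value.

    The estimate comes from power iteration on w @ w.T, expressed as
    alternating updates of the left (u) and right (v) singular vectors.
    """
    u = np.ones(w.shape[0]) / np.sqrt(w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v)
        u = w @ v
        u /= np.linalg.norm(u)
    sigma = u @ w @ v  # estimated largest singular value
    return w / sigma
```

In an SN-GAN this normalization is applied to every discriminator weight matrix on each forward pass, bounding the Lipschitz constant of each layer.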
2018 PIRM Challenge on Perceptual Image Super-resolution
This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018, and concludes with an analysis of the current trends in perceptual SR, as reflected from the leading submissions.