ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

@inproceedings{Wang2018ESRGANES,
  title={ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks},
  author={Xintao Wang and Ke Yu and Shixiang Wu and Jinjin Gu and Yihao Liu and Chao Dong and Chen Change Loy and Yu Qiao and Xiaoou Tang},
  booktitle={ECCV Workshops},
  year={2018}
}
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN – network architecture, adversarial loss and perceptual loss – and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the …
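As context for the architectural component mentioned above, ESRGAN is widely described as replacing SRGAN's residual blocks with residual-in-residual dense blocks without batch normalization, stabilized by residual scaling. Below is a minimal PyTorch sketch of that kind of block; the channel counts, growth rate, and 0.2 scaling factor follow commonly cited configurations and should be treated as illustrative, not as the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions without batch normalization."""
    def __init__(self, channels=64, growth=32, scale=0.2):
        super().__init__()
        self.scale = scale
        self.convs = nn.ModuleList([
            nn.Conv2d(channels + i * growth,
                      growth if i < 4 else channels,
                      kernel_size=3, padding=1)
            for i in range(5)
        ])
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        features = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(features, dim=1))
            if i < 4:
                out = self.lrelu(out)
                features.append(out)
        return x + self.scale * out  # residual scaling on the block output

class RRDB(nn.Module):
    """Residual-in-residual: several dense blocks inside an outer residual connection."""
    def __init__(self, channels=64, scale=0.2):
        super().__init__()
        self.blocks = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(3)])
        self.scale = scale

    def forward(self, x):
        return x + self.scale * self.blocks(x)
```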
Improved face super-resolution generative adversarial networks
This paper employs dense convolutional network blocks (dense blocks), which connect each layer to every other layer in a feed-forward fashion, as the core of its very deep generator network G.
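To make the dense connectivity concrete, here is a short PyTorch sketch of a DenseNet-style block in which each convolution receives the concatenation of the block input and all earlier outputs. Layer widths and depth are illustrative and not taken from the paper; note this variant keeps batch normalization, unlike the BN-free block sketched above.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv sees the concatenation of the block input and all earlier outputs."""
    def __init__(self, in_channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth, growth, kernel_size=3, padding=1),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)  # every layer feeds forward to all later ones
```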
Spatial Transformer Generative Adversarial Network for Robust Image Super-Resolution
This paper proposes a novel robust super-resolution GAN (RSR-GAN) which can simultaneously perform geometric transformation and recover finer texture details, and introduces an additional DCT loss term into the existing loss function.
Advanced Generative Adversarial Network Based on Dense Connection For Single Image Super Resolution
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating more realistic texture in semantics and style during single image super-resolution.
Hierarchical Generative Adversarial Networks for Single Image Super-Resolution
This work proposes a hierarchical feature extraction module (HFEM) to extract features at multiple scales, which helps the network attend to both local textures and global semantics, and introduces a hierarchical guided reconstruction module (HGRM) to reconstruct more natural structural textures in SR images via intermediate supervision in a progressive manner.
Learning Structral coherence Via Generative Adversarial Network for Single Image Super-Resolution
Experimental results show that the proposed method outperforms state-of-the-art perceptual-driven SR methods on the perception index (PI) and obtains more geometrically consistent and visually pleasing textures in natural image restoration.
RBDN: Residual Bottleneck Dense Network for Image Super-Resolution
Recent studies have shown that the Super-Resolution Generative Adversarial Network (SRGAN) can significantly improve the quality of single-image super-resolution. However, the existing SRGAN approaches …
Conditional generative adversarial network with densely-connected residual learning for single image super-resolution
This paper uses the ground-truth high-resolution (HR) image as a guide to learn an effective conditional GAN (CGAN) for SISR, and designs the generator network via residual learning, introducing dense connections into the residual blocks to effectively fuse low- and high-level features across different layers.
Image Super-Resolution Using Complex Dense Block on Generative Adversarial Networks
This paper employs a generative adversarial network (GAN) and a new perceptual loss function for photo-realistic single image super-resolution (SISR), and proposes a new dense block that uses complex connections between layers to build a more powerful generator.
Perception-oriented Single Image Super-Resolution via Dual Relativistic Average Generative Adversarial Networks.
Experimental results and ablation studies show that the proposed algorithm can rival state-of-the-art SR algorithms both perceptually (PI minimization) and objectively (PSNR maximization), with fewer parameters.
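The relativistic average formulation these perception-oriented methods build on scores a real image by how much more realistic it looks than fake images on average, and vice versa. Below is a hedged sketch of generator and discriminator losses in the binary cross-entropy form used in the relativistic average GAN literature; `d_real` and `d_fake` are assumed to be raw discriminator logits for real and generated batches.

```python
import torch
import torch.nn.functional as F

def ra_discriminator_loss(d_real, d_fake):
    """Relativistic average: reals should score above the mean fake score, and vice versa."""
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.zeros_like(d_fake))
    return (loss_real + loss_fake) / 2

def ra_generator_loss(d_real, d_fake):
    """Symmetric objective for the generator: push fakes above the average real score."""
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.zeros_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.ones_like(d_fake))
    return (loss_real + loss_fake) / 2
```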
SRVAE: super resolution using variational autoencoders
A first-of-its-kind SISR method that takes advantage of a self-evaluating Variational Autoencoder (IntroVAE) and judges the quality of generated high-resolution (HR) images against the target images in an adversarial manner, allowing image generation of high perceptual quality.
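For reference, the standard variational autoencoder objective that IntroVAE-style methods build on combines a reconstruction term with a KL divergence between the approximate posterior and a standard normal prior. A minimal sketch follows; the self-evaluating, adversarial part of IntroVAE is deliberately not shown.

```python
import torch
import torch.nn.functional as F

def vae_loss(x_recon, x_target, mu, logvar, beta=1.0):
    """Reconstruction error plus KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_recon, x_target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```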

References

Showing 1-10 of 49 references
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
  • C. Ledig, L. Theis, …, W. Shi
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; it is, to the authors' knowledge, the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, and it uses a perceptual loss function consisting of an adversarial loss and a content loss.
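A hedged sketch of an SRGAN-style perceptual loss: a content loss computed on VGG19 feature maps plus a (non-relativistic) adversarial term. The choice of feature layer and the adversarial weight are illustrative, not the paper's exact values; in practice the VGG network would be loaded with pretrained ImageNet weights rather than `weights=None`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19

class VGGContentLoss(nn.Module):
    """MSE between VGG19 feature maps of the super-resolved and ground-truth images."""
    def __init__(self, layer_index=35):  # illustrative: a deep conv layer of VGG19
        super().__init__()
        features = vgg19(weights=None).features  # use pretrained weights in practice
        self.extractor = nn.Sequential(*list(features)[:layer_index]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False

    def forward(self, sr, hr):
        return F.mse_loss(self.extractor(sr), self.extractor(hr))

def srgan_generator_loss(content_loss, d_fake_logits, adv_weight=1e-3):
    """Perceptual loss = content loss + weighted adversarial loss."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return content_loss + adv_weight * adv
```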
Unsupervised Image Super-Resolution Using Cycle-in-Cycle Generative Adversarial Networks
This work proposes a Cycle-in-Cycle network structure, with generative adversarial networks (GANs) as the basic component, to tackle single image super-resolution in the more general case where low-/high-resolution pairs and the down-sampling process are unavailable.
Maintaining Natural Image Statistics with the Contextual Loss
This paper looks explicitly at the distribution of features in an image and trains the network to generate images with natural feature distributions, which reduces the number of images required for training by orders of magnitude and achieves state-of-the-art results on both single-image super-resolution and high-resolution surface normal estimation.
EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis
This work proposes a novel application of automated texture synthesis in combination with a perceptual loss focused on creating realistic textures, rather than optimizing for a pixel-accurate reproduction of ground truth images during training, to achieve a significant boost in image quality at high magnification ratios.
Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform
It is shown that it is possible to recover textures faithful to semantic classes in a single network conditioned on semantic segmentation probability maps, through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation.
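A hedged sketch of a spatial feature transform of the kind described: a small network maps the conditioning maps (e.g., segmentation probabilities) to per-pixel affine parameters gamma and beta that modulate the intermediate features. Channel sizes are illustrative assumptions.

```python
import torch.nn as nn

class SFTLayer(nn.Module):
    """Spatial feature transform: feature * gamma + beta, with gamma/beta predicted
    from spatial conditioning maps."""
    def __init__(self, feat_channels=64, cond_channels=8, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, kernel_size=1),
            nn.LeakyReLU(0.1, inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, feat_channels, kernel_size=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, kernel_size=1)

    def forward(self, feat, cond):
        shared = self.shared(cond)
        return feat * self.to_gamma(shared) + self.to_beta(shared)
```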
Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution
This paper proposes the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images; it generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications.
Enhanced Deep Residual Networks for Single Image Super-Resolution
This paper develops an enhanced deep super-resolution network (EDSR) with performance exceeding that of current state-of-the-art SR methods, and proposes a new multi-scale deep super-resolution system (MDSR) and training method that can reconstruct high-resolution images at different upscaling factors in a single model.
Image Super-Resolution Using Deep Convolutional Networks
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low- and high-resolution images. The mapping is represented as a deep …
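For context, the end-to-end mapping described here (SRCNN) is a three-layer convolutional network applied to a bicubically upsampled low-resolution input. Below is a minimal sketch using the commonly cited 9-1-5 kernel sizes with 64 and 32 channels; treat the exact settings as illustrative.

```python
import torch.nn as nn

class SRCNN(nn.Module):
    """Patch extraction -> non-linear mapping -> reconstruction, applied to a
    bicubically upsampled low-resolution input."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)
```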
Image Super-Resolution Using Very Deep Residual Channel Attention Networks
This work proposes a residual-in-residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections, and proposes a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels.
Spectral Normalization for Generative Adversarial Networks
This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator, and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.