Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

@article{Ledig2017PhotoRealisticSI,
  title={Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network},
  author={Christian Ledig and Lucas Theis and Ferenc Husz{\'a}r and Jose Caballero and Andrew P. Aitken and Alykhan Tejani and Johannes Totz and Zehan Wang and Wenzhe Shi},
  journal={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={105-114}
}
  • Published 15 September 2016
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? […] Key Method: In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public…
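
The key method summarized above is a content loss computed in the feature space of a pretrained network rather than in pixel space, combined with an adversarial term from the discriminator. The following is a minimal sketch, assuming PyTorch/torchvision; the VGG layer cut-off, the 1e-3 weighting, and the discriminator interface are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn
from torchvision.models import vgg19


class VGGContentLoss(nn.Module):
    """MSE between deep VGG19 feature maps instead of raw pixels."""

    def __init__(self, layer_cutoff: int = 36):
        super().__init__()
        # Pretrained, frozen VGG19 truncated at a deep feature layer (assumed cut-off).
        features = vgg19(pretrained=True).features[:layer_cutoff]
        for p in features.parameters():
            p.requires_grad = False
        self.features = features.eval()
        self.mse = nn.MSELoss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # Content loss: distance between feature maps of the SR output and the HR target.
        return self.mse(self.features(sr), self.features(hr))


def generator_loss(content_loss, discriminator, sr, hr, adv_weight: float = 1e-3):
    # Total generator objective: feature-space content loss plus a weighted
    # adversarial term (discriminator assumed to output a probability in (0, 1)).
    adversarial = -torch.log(discriminator(sr) + 1e-8).mean()
    return content_loss(sr, hr) + adv_weight * adversarial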

Citations

Sharp and Real Image Super-Resolution Using Generative Adversarial Network

A novel residual network architecture based on a generative adversarial network (GAN) for image super-resolution (SR), which is capable of inferring photo-realistic images for 4× upscaling factors, and demonstrates that the proposed approach performs better than previous methods.

Realistic single-image super-resolution using autoencoding adversarial networks

This work combines the benefits of some recent approaches and proposes a method based on autoencoding adversarial networks to reconstruct realistic natural images in SR with large up-sampling factors and shows outstanding performance in recovering fine texture details.

Single-image super-resolution reconstruction via generative adversarial network

This paper proposes an algorithm based on a generative adversarial network for single-image super-resolution restoration at 4× upscaling factors, and improves image quality as measured by the peak signal-to-noise ratio and the structural similarity index.
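
For reference, the peak signal-to-noise ratio used as a quality measure above is just a log-scaled mean squared error. A small illustrative computation follows (NumPy; images assumed to be float arrays scaled to [0, 1], not any paper's specific evaluation code).

import numpy as np


def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
    # Mean squared error between the two images, then converted to decibels.
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)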

Photo-Realistic Image Super-Resolution via Variational Autoencoders

This work proposes to perform Image Super-Resolution via Variational AutoEncoders (SR-VAE), learning according to the conditional distribution of the high-resolution images induced by the low-resolution images, and adds a conditional sampling mechanism to narrow down the latent subspace for reconstruction.

Image Super-Resolution with Adversarial Learning

This work addresses the problem of single-image super-resolution of degraded low-resolution images, where the downsampling and degradation models are unknown, by adopting Convolutional Neural Networks and implementing a cyclic structure in which the images are first denoised and deblurred, and then shifted to the desired scale.

SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution

A super-resolution perceptual generative adversarial network (SRPGAN) framework for SISR tasks that proposes a robust perceptual loss based on the discriminator of the SRPGAN model and combines it with the adversarial loss.
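
One way to read the summary above is as a feature-matching loss on the discriminator. A hedged sketch follows (PyTorch), assuming a hypothetical discriminator that exposes its intermediate feature maps; this is not the cited paper's exact formulation.

import torch.nn.functional as F


def discriminator_feature_loss(features_sr, features_hr):
    # L1 distance between corresponding discriminator feature maps of the
    # super-resolved and ground-truth images, summed over layers.
    return sum(F.l1_loss(f_sr, f_hr.detach())
               for f_sr, f_hr in zip(features_sr, features_hr))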

Single Image Super-Resolution: Depthwise Separable Convolution Super-Resolution Generative Adversarial Network

A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, which improved the ability to represent and extract image features while greatly reducing the total number of parameters.
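
For context, a depthwise separable convolution factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mix, which is where the parameter savings come from. A minimal PyTorch sketch follows; the cited DSC dense block's exact wiring is not reproduced here.

import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups == in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution that mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))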

Residual Dense Generative Adversarial Network for Single Image Super-Resolution

This work proposes a super-resolution method based on Residual Dense Generative Adversarial Networks (RDGAN), which fully exploit the hierarchical features from all the convolutional layers.

Better Visual Image Super-Resolution with Laplacian Pyramid of Generative Adversarial Networks

An Enhanced Laplacian Pyramid Generative Adversarial Network (ELSRGAN), based on the Laplacian pyramid, captures the high-frequency details of the image, achieves a higher mean-sort-score (MSS) than any state-of-the-art method, and has better visual perception.

Image Super-Resolution Reconstruction Based on a Generative Adversarial Network

This work employs a dual network structure in the generator network to solve the problem of insufficient feature extraction and introduces the Wasserstein distance into the discriminator network to enhance the discrimination ability and stability of the model.
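
The Wasserstein distance mentioned above replaces the usual cross-entropy discriminator objective with a critic score difference. A hedged sketch of those objectives follows (PyTorch); the Lipschitz constraint, enforced in practice via weight clipping or a gradient penalty, is omitted, and the critic interface is an assumption.

import torch


def critic_loss(critic, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # The critic maximizes E[D(real)] - E[D(fake)]; written here as a loss to minimize.
    return critic(fake).mean() - critic(real).mean()


def generator_loss_w(critic, fake: torch.Tensor) -> torch.Tensor:
    # The generator tries to raise the critic's score on generated samples.
    return -critic(fake).mean()
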
...

References

Showing 1-10 of 76 references

Fast and accurate image upscaling with super-resolution forests

This paper shows the close relation of previous work on single image super-resolution to locally linear regression and demonstrates how random forests nicely fit into this framework, and proposes to directly map from low to high-resolution patches using random forests.

Image Super-Resolution Using Deep Convolutional Networks

We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.

Semantic Image Inpainting with Perceptual and Contextual Losses

A novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network that can successfully predict semantic information in the missing region and achieve pixel-level photorealism, something almost all existing methods cannot achieve.

Super-Resolution with Deep Convolutional Sufficient Statistics

This paper proposes to use a Gibbs distribution as the conditional model, whose sufficient statistics are given by deep convolutional neural networks; the features computed by the network are stable to local deformation and have reduced variance when the input is a stationary texture.

Deep Networks for Image Super-Resolution with Sparse Prior

This paper shows that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network and trained end to end in a cascaded structure, which leads to much more efficient and effective training as well as a reduced model size.

Multi-scale dictionary for single image super-resolution

This work introduces a multi-scale dictionary to a novel SR method that simultaneously integrates local and non-local priors and demonstrates that the proposed method can produce high quality SR recovery both quantitatively and perceptually.

Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network

This paper presents the first convolutional neural network capable of real-time SR of 1080p videos on a single K2 GPU and introduces an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
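
The efficient sub-pixel convolution layer described above keeps all computation in low-resolution space and only rearranges channels into spatial positions at the end. A minimal PyTorch sketch follows; channel counts and kernel size are illustrative assumptions.

import torch
import torch.nn as nn


class SubPixelUpscale(nn.Module):
    def __init__(self, in_channels: int = 64, out_channels: int = 3, scale: int = 4):
        super().__init__()
        # A convolution produces out_channels * scale^2 feature maps in LR space.
        self.conv = nn.Conv2d(in_channels, out_channels * scale ** 2,
                              kernel_size=3, padding=1)
        # PixelShuffle rearranges (B, C*r^2, H, W) into (B, C, H*r, W*r).
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))


# e.g. a (1, 64, 24, 24) LR feature map becomes a (1, 3, 96, 96) image at scale 4.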

Accelerating the Super-Resolution Convolutional Neural Network

This paper aims at accelerating the current SRCNN by proposing a compact hourglass-shaped CNN structure for faster and better SR, and presents parameter settings that achieve real-time performance on a generic CPU while still maintaining good reconstruction quality.

Deep multi-scale video prediction beyond mean square error

This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
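
The image gradient difference loss mentioned above penalizes mismatched spatial gradients between prediction and target, which sharpens edges that a plain mean-squared-error loss tends to blur. A small PyTorch sketch follows; the exponent of the original formulation is fixed to 1 here as an assumption.

import torch


def gradient_difference_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Horizontal and vertical finite differences for both images.
    pred_dx = (pred[..., :, 1:] - pred[..., :, :-1]).abs()
    pred_dy = (pred[..., 1:, :] - pred[..., :-1, :]).abs()
    tgt_dx = (target[..., :, 1:] - target[..., :, :-1]).abs()
    tgt_dy = (target[..., 1:, :] - target[..., :-1, :]).abs()
    # Penalize the mismatch between the gradient magnitudes of prediction and target.
    return (pred_dx - tgt_dx).abs().mean() + (pred_dy - tgt_dy).abs().mean()
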
...