IEGAN: Multi-Purpose Perceptual Quality Image Enhancement Using Generative Adversarial Network

@inproceedings{Ghosh2019IEGANMP,
  title={IEGAN: Multi-Purpose Perceptual Quality Image Enhancement Using Generative Adversarial Network},
  author={Soumya Shubhra Ghosh and Yang Hua and Sankha Subhra Mukherjee and Neil Martin Robertson},
  booktitle={2019 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2019},
  pages={11-20}
}
  • Published 22 November 2018
  • Computer Science
Despite the breakthroughs in image enhancement quality, an end-to-end solution for simultaneously recovering the finer texture details and the sharpness of degraded low-resolution images remains unsolved. Some existing approaches focus on minimizing the pixel-wise reconstruction error, which yields a high peak signal-to-noise ratio (PSNR), but the enhanced images lack high-frequency details and are perceptually unsatisfying, i.e., they fail to match the quality expected in a photo…

Citations

Improving Detection And Recognition Of Degraded Faces By Discriminative Feature Restoration Using GAN

TLDR
This paper presents an algorithm capable of recovering facial features from low-quality videos and images, combining metric learning with loss-function components that operate on different parts of the generator.

Discriminative Similarity-Balanced Online Hashing for Supervised Image Retrieval

TLDR
A novel discriminative similarity-balanced online hashing (DSBOH) framework is proposed that performs better than several state-of-the-art online hashing methods in terms of effectiveness and efficiency.

References

Showing 1–10 of 38 references

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

TLDR
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images at 4x upscaling factors, trained with a perceptual loss function that combines an adversarial loss and a content loss.
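The loss described in this TLDR is a weighted sum of a content term and an adversarial term. A minimal numpy sketch of that combination, assuming a hypothetical discriminator score `d_fake` in (0, 1) and using plain pixel MSE as a stand-in for the VGG-feature content term the paper uses:

```python
import numpy as np

def srgan_style_loss(sr, hr, d_fake, adv_weight=1e-3):
    """Sketch of an SRGAN-style perceptual loss (not the exact paper code).

    sr, hr     : super-resolved and ground-truth images (same shape)
    d_fake     : hypothetical discriminator output D(G(x)) in (0, 1)
    adv_weight : weight on the adversarial term

    The content term here is plain pixel MSE as a stand-in; SRGAN
    computes it over VGG feature maps instead.
    """
    content = np.mean((sr - hr) ** 2)   # content loss
    adversarial = -np.log(d_fake)       # generator's adversarial loss
    return content + adv_weight * adversarial

# Toy usage: identical images and a fully fooled discriminator -> zero loss.
hr = np.random.rand(8, 8)
print(srgan_style_loss(hr, hr, d_fake=1.0))
```

The small `adv_weight` keeps the adversarial gradient from overwhelming the content term early in training.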

EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis

TLDR
This work proposes a novel application of automated texture synthesis combined with a perceptual loss that focuses on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground-truth images during training, achieving a significant boost in image quality at high magnification ratios.

Perceptual Losses for Real-Time Style Transfer and Super-Resolution

TLDR
This work considers image transformation problems and proposes perceptual loss functions for training feed-forward networks on such tasks; results are shown on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem posed by Gatys et al.
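The core idea of a perceptual loss can be sketched independently of any particular pretrained network: compare images in the feature space of a fixed mapping φ rather than in pixel space. Here φ is a stand-in (a fixed random linear projection), not the VGG activations the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in feature extractor: a fixed random projection from 64 pixels
# to a 32-d feature vector. The paper uses pretrained VGG activations.
W = rng.standard_normal((32, 64))

def phi(img_flat):
    """Map a flattened 64-pixel image to feature space."""
    return W @ img_flat

def perceptual_loss(y_hat, y):
    """Mean squared error measured in feature space, not pixel space."""
    return np.mean((phi(y_hat.ravel()) - phi(y.ravel())) ** 2)

x = rng.random((8, 8))
print(perceptual_loss(x, x))  # identical inputs -> 0.0
```

Because the loss compares features rather than pixels, two images can differ pixel-wise yet incur a small loss if they share the same higher-level structure — the property that makes such losses "perceptual".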

Image-to-Image Translation with Conditional Adversarial Networks

TLDR
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

Compression Artifacts Reduction by a Deep Convolutional Network

TLDR
A compact and efficient network for seamless attenuation of different compression artifacts is formulated, and it is demonstrated that a deeper model can be effectively trained with the features learned in a shallow network.

Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index

TLDR
It is found that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality.
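The GMSD index is simple enough to sketch directly: compute gradient magnitudes of both images, form the pixel-wise similarity map, and pool it with the standard deviation. In this sketch, central differences stand in for the paper's Prewitt filters, and the constant `c` is illustrative:

```python
import numpy as np

def gmsd(ref, dist, c=0.0026):
    """Gradient Magnitude Similarity Deviation (sketch, not the paper's code).

    Central differences stand in for the 3x3 Prewitt filters of the
    paper; c is a small stability constant.
    """
    def grad_mag(img):
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
        gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
        return np.sqrt(gx ** 2 + gy ** 2)

    m_r, m_d = grad_mag(ref), grad_mag(dist)
    gms = (2.0 * m_r * m_d + c) / (m_r ** 2 + m_d ** 2 + c)  # similarity map
    return gms.std()  # deviation pooling: 0 means identical structure

rng = np.random.default_rng(1)
img = rng.random((32, 32))
print(gmsd(img, img))  # identical images -> 0.0
```

Deviation pooling is the paper's key observation: the *spread* of local gradient similarity tracks perceived quality better than its mean.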

Deep Networks for Image Super-Resolution with Sparse Prior

TLDR
This paper shows that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end, and leads to much more efficient and effective training, as well as a reduced model size.

Super-Resolution with Deep Convolutional Sufficient Statistics

TLDR
This paper proposes using a Gibbs distribution as the conditional model, with sufficient statistics given by deep convolutional neural networks; the features computed by the network are stable to local deformation and have reduced variance when the input is a stationary texture.

Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network

TLDR
This paper presents the first convolutional neural network capable of real-time super-resolution of 1080p video on a single K2 GPU, and introduces an efficient sub-pixel convolution layer that learns an array of upscaling filters to map the final LR feature maps to the HR output.
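The sub-pixel layer's upscaling step amounts to a periodic rearrangement ("pixel shuffle") of r² low-resolution feature channels into an r×-larger spatial grid. A numpy sketch of that rearrangement (following the channel ordering used by common deep-learning frameworks, which may differ from the paper's exact indexing):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) feature maps into (C, H*r, W*r).

    Each r x r output block at spatial position (i, j) is filled from
    the r*r channels of input position (i, j) -- the sub-pixel
    convolution layer's upscaling step.
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)   # interleave into the HR grid

lr = np.arange(4 * 2 * 2).reshape(4, 2, 2)  # 4 channels, 2x2 spatial, r=2
hr = pixel_shuffle(lr, 2)
print(hr.shape)  # (1, 4, 4)
```

Because the convolutions all run at low resolution and only this cheap rearrangement happens at full resolution, the network reaches real-time speed.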

A Haar wavelet-based perceptual similarity index for image quality assessment