ChromaGAN: Adversarial Picture Colorization with Semantic Class Distribution

@article{Vitoria2020ChromaGANAP,
  title={ChromaGAN: Adversarial Picture Colorization with Semantic Class Distribution},
  author={Patricia Vitoria and Lara Raad and Coloma Ballester},
  journal={2020 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2020},
  pages={2434-2443}
}
The colorization of grayscale images is an ill-posed problem with multiple correct solutions. In this paper, we propose an adversarial learning colorization approach coupled with semantic information. A generative network is used to infer the chromaticity of a given grayscale image conditioned on semantic clues. This network is framed in an adversarial model that learns to colorize by incorporating perceptual and semantic understanding of color and class distributions. The model is trained via…
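As a rough illustration of the two-headed generator described in the abstract, the PyTorch sketch below maps the grayscale L channel to (a, b) chrominance maps plus a global semantic class distribution. The layer widths, the 1000-class output, and the module names are illustrative assumptions, not the paper's exact architecture (ChromaGAN builds its generator on a pretrained VGG-16 backbone and trains it with adversarial, color, and class-distribution losses).

# Minimal ChromaGAN-style generator sketch (PyTorch). Assumptions: layer
# widths, the 1000-class output, and 224x224 inputs are illustrative only.
import torch
import torch.nn as nn

class ChromaGenerator(nn.Module):
    """Two-headed generator: (a, b) chrominance + semantic class distribution."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Shared encoder over the L channel (stand-in for the pretrained
        # VGG-16 features used in the paper).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Head 1: per-pixel chrominance, upsampled back to the input size.
        self.color_head = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 2, 3, padding=1), nn.Tanh(),  # (a, b) in [-1, 1]
        )
        # Head 2: global class distribution used by the semantic loss.
        self.class_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, num_classes), nn.Softmax(dim=1),
        )

    def forward(self, gray: torch.Tensor):
        feats = self.encoder(gray)
        return self.color_head(feats), self.class_head(feats)

# Usage: a 224x224 grayscale batch yields chrominance maps and class probabilities.
gen = ChromaGenerator()
ab, cls = gen(torch.randn(4, 1, 224, 224))
print(ab.shape, cls.shape)  # torch.Size([4, 2, 224, 224]) torch.Size([4, 1000])

In the full model these two outputs are supervised, respectively, by a color error term and by a divergence against a pretrained classifier's class distribution, alongside the adversarial term; the exact loss weights are not reproduced here since the abstract above is truncated.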
Adversarial Edge-Aware Image Colorization With Semantic Segmentation
TLDR
A new adversarial edge-aware image colorization method with multitask outputs, combining semantic segmentation and an adversarial loss, that is superior to existing methods on several quality metrics and achieves good results in image colorization.
SCGAN: Saliency Map-Guided Colorization With Generative Adversarial Network
TLDR
A novel saliency map-based guidance method is proposed; experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.
DDGAN: Double Discriminators GAN for Accurate Image Colorization
The purpose of image colorization is to map reasonable colors for grayscale images. Although more and more deep neural networks have been proposed and shown good performance, the color images
VCGAN: Video Colorization with Hybrid Generative Adversarial Network
TLDR
A dense long-term loss is proposed that smooths the temporal disparity between any two remote frames; experimental results demonstrate that VCGAN produces higher-quality and temporally more consistent colorized videos than existing approaches.
Progressive Colorization via Iterative Generative Models
TLDR
A novel progressive automatic colorization method via iterative generative models (iGM) is developed that can produce satisfactory colorization in an unsupervised manner, working jointly in multiple color spaces and enforcing a linearly autocorrelative constraint.
Towards Vivid and Diverse Image Colorization with Generative Color Prior
TLDR
This work aims at recovering vivid colors by leveraging the rich and diverse color priors encapsulated in a pretrained Generative Adversarial Network (GAN) via a GAN encoder and incorporating these features into the colorization process with feature modulations.
A Review and Analysis of the Existing Literature on Monochromatic Photography Colorization Using Deep Learning
It is universally known that, through the process of colorization, one aims at converting a monochrome image into one of color, usually because it was taken by the limited technology of previous…
Joint Intensity-Gradient Guided Generative Modeling for Colorization
TLDR
A joint intensity-gradient constraint in the data-fidelity term is proposed to limit the degrees of freedom of the generative model at the iterative colorization stage, which is conducive to edge preservation.
ViT-Inception-GAN for Image Colourising
TLDR
This work attempts to colourise images using a Vision Transformer Inception Generative Adversarial Network (ViT-I-GAN), which has an Inception-v3 fusion embedding in the generator.
Grayscale Image Colorization Using a Convolutional Neural Network
Image coloration refers to adding plausible colors to a grayscale image or video. Image coloration has been used in many modern fields, including restoring old photographs, as well as reducing the…

References

Showing 1-10 of 53 references
Unsupervised Diverse Colorization via Generative Adversarial Networks
TLDR
A novel solution for unsupervised diverse colorization of grayscale images that leverages conditional generative adversarial networks to model the distribution of real-world item colors, using a fully convolutional generator with multi-layer noise to enhance diversity.
Image Colorization with Generative Adversarial Networks
TLDR
This work attempted to fully generalize this procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN), trained on publicly available datasets such as CIFAR-10 and Places365.
Semantic Image Inpainting Through Improved Wasserstein Generative Adversarial Networks
TLDR
This work learns a data latent space by training an improved version of the Wasserstein generative adversarial network, incorporating a new generator and discriminator architecture, and combines it with a new optimization loss for inpainting that infers the missing content conditioned on the available data.
Colorful Image Colorization
TLDR
This paper proposes a fully automatic approach to colorization that produces vibrant and realistic colorizations and shows that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder.
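Concretely, that paper casts colorization as per-pixel classification over Q quantized ab bins (Q = 313) and trains with a class-rebalanced multinomial cross-entropy; in the paper's notation, with soft-encoded ground truth Z and prediction \hat{Z},

L_{cl}(\hat{Z}, Z) = -\sum_{h,w} v(Z_{h,w}) \sum_{q} Z_{h,w,q} \log \hat{Z}_{h,w,q},

where v(\cdot) reweights pixels to favor rare, more saturated colors.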
Image-to-Image Translation with Conditional Adversarial Networks
TLDR
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
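For reference, the conditional adversarial objective used by this approach (pix2pix) combines a cGAN term with an L1 reconstruction term weighted by a hyperparameter \lambda:

L_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))],
L_{L1}(G) = \mathbb{E}_{x,y,z}[\lVert y - G(x, z) \rVert_{1}],
G^{*} = \arg\min_{G} \max_{D} \; L_{cGAN}(G, D) + \lambda \, L_{L1}(G).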
Deep exemplar-based colorization
TLDR
This work proposes the first deep learning approach for exemplar-based local colorization, which performs robustly and generalizes well even when using reference images that are unrelated to the input grayscale image.
Deep Colorization
TLDR
Inspired by the recent success of deep learning techniques in modeling large-scale data, this paper re-formulates the colorization problem so that deep learning can be directly employed, and proposes a joint bilateral filtering based post-processing step to ensure artifact-free quality.
A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
TLDR
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
TLDR
This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
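The cycle consistency loss mentioned above, for mappings G: X → Y and F: Y → X, is the L1 reconstruction error in both directions, and the full CycleGAN objective adds it (weighted by \lambda) to the two adversarial terms:

L_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}[\lVert F(G(x)) - x \rVert_{1}] + \mathbb{E}_{y \sim p_{data}(y)}[\lVert G(F(y)) - y \rVert_{1}],
L(G, F, D_X, D_Y) = L_{GAN}(G, D_Y, X, Y) + L_{GAN}(F, D_X, Y, X) + \lambda \, L_{cyc}(G, F).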
Learning Large-Scale Automatic Image Colorization
We describe an automated method for image colorization that learns to colorize from examples. Our method exploits a LEARCH framework to train a quadratic objective function in the chromaticity maps,…