Inclusive GAN: Improving Data and Minority Coverage in Generative Models

@article{Yu2020InclusiveGI,
  title={Inclusive GAN: Improving Data and Minority Coverage in Generative Models},
  author={Ning Yu and Ke Li and Peng Zhou and Jitendra Malik and Larry Davis and Mario Fritz},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.03355}
}
Generative Adversarial Networks (GANs) have brought about rapid progress towards generating photorealistic images. Yet the equitable allocation of their modeling capacity among subgroups has received less attention, which could lead to potential biases against underrepresented minorities if left uncontrolled. In this work, we first formalize the problem of minority inclusion as one of data coverage, and then propose to improve data coverage by harmonizing adversarial training with reconstructive generation.
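The proposed harmonization can be pictured, very roughly, as adding an IMLE-style reconstruction term to the usual adversarial objective so that every real sample, including minority samples, has some generated sample close to it. The sketch below is an illustrative simplification under that reading, not the paper's exact training procedure; generator, real_batch, z_dim, n_candidates, and lambda_rec are placeholder names, and PyTorch is assumed.

import torch

def imle_reconstruction_loss(generator, real_batch, z_dim, n_candidates=16):
    # Draw a pool of candidate latent codes and generate from them without gradients.
    b = real_batch.size(0)
    z = torch.randn(b * n_candidates, z_dim, device=real_batch.device)
    with torch.no_grad():
        candidates = generator(z)
    # For each real image, find the nearest generated candidate in pixel space.
    dists = torch.cdist(real_batch.flatten(1), candidates.flatten(1))  # (b, b * n_candidates)
    nearest = dists.argmin(dim=1)
    # Re-generate the matched candidates with gradients and pull them toward the reals.
    matched = generator(z[nearest])
    return ((matched - real_batch) ** 2).mean()

# Schematic generator objective: g_loss = adversarial_loss + lambda_rec * imle_reconstruction_loss(...)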

Citations of this paper

Improving the Fairness of Deep Generative Models without Retraining
TLDR: This work proposes a simple yet effective method to improve the fairness of image generation for a pre-trained GAN model without retraining: a Gaussian Mixture Model is learned to fit the distribution of a latent code set, which supports sampling latent codes that produce images with a fairer attribute distribution.
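A minimal sketch of the latent-resampling idea described above: fit a Gaussian Mixture Model to a collection of latent codes and sample new codes from it for generation. The code assumes scikit-learn and NumPy; latent_codes and generator are placeholders, and the random data stands in for codes actually collected for a pre-trained GAN.

import numpy as np
from sklearn.mixture import GaussianMixture

latent_codes = np.random.randn(10000, 512)   # placeholder for latent codes gathered from the pre-trained GAN
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(latent_codes)                         # model the distribution of the latent code set

new_codes, _ = gmm.sample(64)                 # draw fresh latent codes from the fitted mixture
# images = generator(new_codes)               # feed the resampled codes into the pre-trained generator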
GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
TLDR: This paper presents the first taxonomy of membership inference attacks against generative models, encompassing both existing and novel attacks, and proposes the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models.
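As a rough illustration of the black-box end of such attacks, a membership score can be built from reconstruction distance: a query image that lies unusually close to some generated sample is more likely to have been a training member. This is a generic sketch in that spirit, not the paper's calibrated attack; NumPy is assumed.

import numpy as np

def membership_score(query, generated_samples):
    # query: flattened image of shape (D,); generated_samples: (N, D) samples drawn from the target model
    dists = np.linalg.norm(generated_samples - query[None, :], axis=1)
    return -dists.min()   # higher score (smaller nearest distance) suggests a training-set member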
Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images
TLDR: A new interpretability method that can be used to understand the predictions of any black-box model on images by showing how the input image would be modified in order to produce different predictions.
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
TLDR: VAEBM is proposed, a symbiotic composition of a VAE and an EBM that offers the best of both worlds and outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin.
Formatting the Landscape: Spatial conditional GAN for varying population in satellite imagery
TLDR: A generative framework for producing satellite imagery conditional on gridded population distributions is explored; results suggest the model captures population distributions accurately and offers a controllable way to generate realistic satellite imagery.
Responsible Disclosure of Generative Models Using Scalable Fingerprinting
TLDR: Experimental results show that the method fulfills key properties of a fingerprinting mechanism and achieves effectiveness in deep fake detection and attribution.
Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis
TLDR: This work proposes a novel fake-detection method designed to re-synthesize test images and extract visual cues for detection, adopting super-resolution, denoising, and colorization as the re-synthesis tasks.
Copyright in Generative Deep Learning
TLDR: A set of key questions in the area of generative deep learning for the arts is considered, with the aim of defining guidelines for artists and developers working on deep-learning-generated art.
Dual Contrastive Loss and Attention for GANs
TLDR: A novel dual contrastive loss is proposed, and it is shown that with this loss the discriminator learns more generalized and distinguishable representations, incentivizing the generator to further push the boundaries of image generation.
Explainability Requires Interactivity
TLDR: An interactive framework for understanding the highly complex decision boundaries of modern vision models is presented; it identifies features such as skin tone, hair color, or the amount of makeup as strong influences on the network's classification.

References

Showing 1-10 of 68 references.
PacGAN: The Power of Two Samples in Generative Adversarial Networks
TLDR: It is shown that packing naturally penalizes generators with mode collapse, thereby favoring generator distributions with less mode collapse during the training process, and numerical experiments suggest that packing provides significant improvements in practice as well.
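A minimal sketch of the packing idea, assuming PyTorch and image tensors: the discriminator judges m samples jointly by concatenating them along the channel axis, which makes a mode-collapsed generator easier to spot. real_images, generator, and z are placeholders.

import torch

def pack(samples, m=2):
    # samples: (B, C, H, W) with B divisible by m  ->  (B // m, m * C, H, W)
    b, c, h, w = samples.shape
    return samples.view(b // m, m * c, h, w)

real_packed = pack(real_images, m=2)    # the discriminator's input channels must be m * C
fake_packed = pack(generator(z), m=2)   # packed fakes from a collapsed generator look conspicuously similar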
Non-Adversarial Image Synthesis With Generative Latent Nearest Neighbors
Yedid Hoshen, Jitendra Malik. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
TLDR: This work presents a novel method, Generative Latent Nearest Neighbors (GLANN), for training generative models without adversarial training; it combines the strengths of IMLE and GLO in a way that overcomes the main drawbacks of each method.
Improved Training of Wasserstein GANs
TLDR: This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
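The penalty itself is compact enough to sketch, assuming PyTorch: the critic's gradient norm at points interpolated between real and fake samples is pushed toward 1.

import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)   # per-sample interpolation weights
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=critic(x_hat).sum(), inputs=x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()              # added to the critic loss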
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
TLDR: VEEGAN is introduced, which features a reconstructor network that reverses the action of the generator by mapping from data to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.
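The reconstructor idea can be sketched as a noise autoencoding term, assuming PyTorch: a network maps generated images back to the latent space and is penalized for failing to recover the code that produced them. generator and reconstructor are placeholder modules, and this omits VEEGAN's joint discriminator over (data, code) pairs.

import torch
import torch.nn.functional as F

def noise_reconstruction_loss(generator, reconstructor, batch_size, z_dim):
    z = torch.randn(batch_size, z_dim)
    x_fake = generator(z)           # generator: code -> image
    z_rec = reconstructor(x_fake)   # reconstructor: image -> code, reversing the generator
    return F.mse_loss(z_rec, z)     # combined with the adversarial objective during training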
Diversity-Sensitive Conditional Generative Adversarial Networks
TLDR: It is shown that simply adding the proposed regularization to existing models leads to surprisingly diverse generations, substantially outperforming previous approaches for multi-modal conditional generation that were specifically designed for each individual task.
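A sketch of one common form of such a regularizer, assuming PyTorch and a conditional generator G(condition, z): reward output variation relative to latent variation, so different codes under the same condition yield different outputs. G, cond, adv_loss, and lambda_ds are placeholders, and in practice the ratio is typically clamped to a constant.

import torch

def diversity_regularizer(G, cond, z_dim, batch_size):
    z1 = torch.randn(batch_size, z_dim)
    z2 = torch.randn(batch_size, z_dim)
    out1, out2 = G(cond, z1), G(cond, z2)
    # ratio of output change to latent change; larger values mean more diverse generations
    ratio = (out1 - out2).flatten(1).norm(1, dim=1) / (z1 - z2).norm(1, dim=1)
    return ratio.mean()

# Schematic generator objective: g_loss = adv_loss - lambda_ds * diversity_regularizer(G, cond, z_dim, b)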
Unrolled Generative Adversarial Networks
TLDR: This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator, and shows how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
Spectral Normalization for Generative Adversarial Networks
TLDR: This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator, and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
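A minimal example of applying it in practice, assuming PyTorch, which ships a spectral_norm wrapper: each wrapped layer's weight is divided by an estimate of its largest singular value at every forward pass, constraining the discriminator's Lipschitz constant. The architecture below is an arbitrary illustration for 32x32 RGB inputs.

import torch.nn as nn
from torch.nn.utils import spectral_norm

discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),    # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),  # 16x16 -> 8x8
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 8 * 8, 1)),
)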
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR: A new training methodology for generative adversarial networks is described: training starts from a low resolution, and new layers that model increasingly fine details are added as training progresses, allowing for images of unprecedented quality.
Improving Generative Adversarial Networks with Denoising Feature Matching
We propose an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract discriminator features.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR: It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
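The truncation trick itself is simple to sketch, assuming PyTorch plus SciPy for a truncated normal: sample each latent coordinate from a normal truncated to [-truncation, truncation], so extreme codes are avoided; shrinking the threshold trades variety for fidelity. generator is a placeholder.

import torch
from scipy.stats import truncnorm

def truncated_noise(batch_size, z_dim, truncation=0.5):
    # values of the standard normal are resampled to lie within [-truncation, truncation]
    values = truncnorm.rvs(-truncation, truncation, size=(batch_size, z_dim))
    return torch.as_tensor(values, dtype=torch.float32)

z = truncated_noise(16, 128)
# images = generator(z)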