• Corpus ID: 28214862

A Quantitative Measure of Generative Adversarial Network Distributions

@inproceedings{Hendrycks2017AQM,
  title={A Quantitative Measure of Generative Adversarial Network Distributions},
  author={Dan Hendrycks and Steven Basart},
  year={2017}
}
We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Because it utilizes a GAN’s whole distribution, the measure penalizes outputs lacking in diversity, in contrast to evaluating GANs on a few cherry-picked examples. We demonstrate the measure’s efficacy on the MNIST, SVHN, and CIFAR-10 datasets.
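The abstract's core idea, a KL divergence computed from the generated distribution to the real one so that low-diversity (mode-collapsed) output is penalized, can be illustrated with a minimal sketch. The paper's actual estimator is not reproduced on this page; assuming, purely for illustration, that each image set is summarized by a single Gaussian fit to its feature vectors, the closed-form Gaussian KL already shows the diversity penalty. The function name `gaussian_kl` and the synthetic data are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_kl(x_gen, x_real, eps=1e-6):
    """KL( N(mu_gen, S_gen) || N(mu_real, S_real) ), with a Gaussian
    fit to each sample set. Illustrative only; not the paper's estimator."""
    mu_g, mu_r = x_gen.mean(axis=0), x_real.mean(axis=0)
    k = x_gen.shape[1]
    # Regularize the covariances slightly so both are invertible.
    S_g = np.cov(x_gen, rowvar=False) + eps * np.eye(k)
    S_r = np.cov(x_real, rowvar=False) + eps * np.eye(k)
    S_r_inv = np.linalg.inv(S_r)
    diff = mu_r - mu_g
    _, logdet_g = np.linalg.slogdet(S_g)
    _, logdet_r = np.linalg.slogdet(S_r)
    # Closed-form KL between two multivariate Gaussians.
    return 0.5 * (np.trace(S_r_inv @ S_g)
                  + diff @ S_r_inv @ diff
                  - k + logdet_r - logdet_g)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 4))
gen_good = rng.normal(0.0, 1.0, size=(500, 4))        # matches the real spread
gen_collapsed = rng.normal(0.0, 0.05, size=(500, 4))  # "mode-collapsed": tiny spread
print(gaussian_kl(gen_good, real))       # small divergence
print(gaussian_kl(gen_collapsed, real))  # much larger: diversity is penalized
```

The collapsed sample set concentrates near one point, so its fitted covariance is nearly degenerate relative to the real data's, and the KL term blows up, which is the behavior the abstract attributes to a whole-distribution measure.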

A Classification-Based Study of Covariate Shift in GAN Distributions
A basic, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether they are truly able to capture all the fundamental characteristics of the
GAN Quality Index (GQI) by GAN-Induced Classifier
TLDR
An objective measure, called GAN Quality Index (GQI), is proposed to evaluate GANs; its effectiveness is demonstrated on CIFAR-100, Flower-102, and MS-Celeb-1M, which contains 10,000 classes.
eCommerceGAN: A Generative Adversarial Network for E-commerce
TLDR
The proposed approach, ec^2GAN, performs significantly better than the baseline in most scenarios; several qualitative methods are proposed to evaluate ecGAN and demonstrate its effectiveness.
Generating Realistic Sequences of Customer-Level Transactions for Retail Datasets
TLDR
A method is presented for generating realistic sequences of baskets that a given customer is likely to purchase over a period of time; it is empirically shown to produce baskets that appear similar to real baskets and share many common properties, including frequencies of different product types, brands, and prices.
Unsupervised Inference of Object Affordance from Text Corpora
TLDR
A method is proposed to mine for object-action pairs in free-text corpora, successively training and evaluating different affordance prediction models based on word embeddings.
Text-Based Inference of Object Affordances for Human-Robot Interaction
TLDR
A model to generate names of possible affordances for a named object by using a Conditional Variational Autoencoder as generative model and training it with sentences from a selected corpus is presented.

References

Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
A note on the evaluation of generative models
TLDR
This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models, and shows that three of the most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
TLDR
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Approximating the Kullback Leibler Divergence Between Gaussian Mixture Models
  • J. Hershey, P. Olsen
  • Computer Science
    2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07
  • 2007
TLDR
Two new methods, the variational approximation and the variational upper bound, are introduced and compared to existing methods; the benefits of each are considered, and their performance is evaluated through numerical experiments.
Revisiting Classifier Two-Sample Tests
TLDR
The properties, performance, and uses of classifier two-sample tests (C2ST) are established; their main theoretical properties are analyzed, and their use is proposed to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks.
Generative adversarial networks
  • In International Conference on Learning Representations (ICLR)
  • 2014
Generative Adversarial Metric