Corpus ID: 57189205

Evaluating Generative Adversarial Networks on Explicitly Parameterized Distributions

  • Shayne O'Brien
  • Matt Groh
  • Abhimanyu Dubey
The true distribution parameterizations of commonly used image datasets are inaccessible. Rather than designing metrics for feature spaces with unknown characteristics, we propose to measure GAN performance by evaluating on explicitly parameterized, synthetic data distributions. As a case study, we examine the performance of 16 GAN variants on six multivariate distributions of varying dimensionalities and training set sizes. In this learning environment, we observe that: GANs exhibit similar… 

Pros and Cons of GAN Evaluation Measures: New Developments

  • A. Borji
  • Computer Science
    Comput. Vis. Image Underst.
  • 2022

Synthetic Data - A Privacy Mirage

It is found that, across the board, synthetic data provides little privacy gain even under a black-box adversary with access to only a single synthetic dataset, highlighting the need to reconsider whether synthetic data is an appropriate strategy for privacy-preserving data publishing.

Quantitatively Evaluating GANs With Divergences Proposed for Training

This paper evaluates the performance of various types of GANs using divergence and distance functions typically used only for training, and compares the proposed metrics to human perceptual scores.

A Classification-Based Perspective on GAN Distributions

New techniques are proposed that employ a classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data; the results indicate that GANs have significant problems in reproducing the finer distributional properties of the training dataset.

An empirical study on evaluation metrics of generative adversarial networks

This paper comprehensively investigates existing sample-based evaluation metrics for GANs and observes that kernel Maximum Mean Discrepancy and the 1-Nearest-Neighbor (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space.
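As a rough illustration of the kernel Maximum Mean Discrepancy metric highlighted in that study, the following minimal NumPy sketch uses a Gaussian kernel and the biased estimator; working directly in sample space with a fixed bandwidth is a simplifying assumption (the paper stresses that a suitable feature space matters):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of x and y.
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared kernel Maximum Mean Discrepancy.
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 2))       # stand-in for real samples
fake_good = rng.normal(0.0, 1.0, size=(500, 2))  # generator matching the data
fake_bad = rng.normal(3.0, 1.0, size=(500, 2))   # generator with shifted mean

# A better-matched "generator" scores a lower MMD against the real samples.
print(mmd2(real, fake_good) < mmd2(real, fake_bad))
```

The estimator is zero for identical sample sets and grows as the two distributions separate, which is the two-sample-test behavior the survey finds desirable.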

Are GANs Created Equal? A Large-Scale Study

A neutral, multi-faceted large-scale empirical study on state-of-the art models and evaluation measures finds that most models can reach similar scores with enough hyperparameter optimization and random restarts, suggesting that improvements can arise from a higher computational budget and tuning more than fundamental algorithmic changes.

Improved Training of Wasserstein GANs

This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
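The gradient penalty described above can be sketched numerically. This toy uses a linear critic (a stand-in for the paper's neural critic, chosen so the input gradient is available in closed form) and penalizes the gradient norm at points interpolated between real and fake samples, as WGAN-GP prescribes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic f(x) = w @ x; its gradient w.r.t. the input is just w,
# so no autodiff is needed for this illustration.
w = np.array([0.6, 2.0])
critic_grad = lambda x: w  # same gradient at every input point

real = rng.normal(0.0, 1.0, size=(8, 2))
fake = rng.normal(2.0, 1.0, size=(8, 2))

# Sample points on straight lines between paired real and fake samples.
eps = rng.uniform(size=(8, 1))
x_hat = eps * real + (1 - eps) * fake

# Gradient penalty: push the critic's input-gradient norm toward 1.
norms = np.array([np.linalg.norm(critic_grad(x)) for x in x_hat])
penalty = np.mean((norms - 1.0) ** 2)
print(penalty)
```

In training, this penalty term (scaled by a coefficient) is added to the critic loss in place of weight clipping.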

Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)

Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good…

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
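The adversarial process described here is usually written as a two-player minimax game over the value function V(D, G):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

D is trained to assign high probability to training data and low probability to samples G(z), while G is trained to fool D.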

f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization

It is shown that any f-divergence can be used for training generative neural samplers and the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models are discussed.
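The training scheme rests on the variational lower bound on an f-divergence (due to Nguyen, Wainwright, and Jordan), where T ranges over critic functions and f* is the convex conjugate of f:

```latex
D_f(P \,\|\, Q) \;\geq\; \sup_{T} \;
  \mathbb{E}_{x \sim P}\left[T(x)\right]
  - \mathbb{E}_{x \sim Q}\left[f^{*}(T(x))\right]
```

Maximizing the right-hand side over T while minimizing it over the generator's distribution Q recovers a GAN-style objective for any choice of f.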

Least Squares Generative Adversarial Networks

This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the LSGAN objective amounts to minimizing the Pearson χ² divergence.
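A minimal sketch of the LSGAN least-squares objectives follows; the labels a (fake), b (real), and c (what G wants D to believe) follow the paper's notation, while the discriminator outputs are made-up numbers for illustration:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # Discriminator: push outputs on real data toward b, on fakes toward a.
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    # Generator: push the discriminator's output on fakes toward c.
    return 0.5 * np.mean((d_fake - c) ** 2)

d_real = np.array([0.9, 0.8])  # hypothetical D outputs on real samples
d_fake = np.array([0.1, 0.2])  # hypothetical D outputs on generated samples

print(lsgan_d_loss(d_real, d_fake))  # small when D separates real from fake
print(lsgan_g_loss(d_fake))          # large when D rejects the fakes
```

Replacing the sigmoid cross-entropy of the standard GAN with these quadratic terms is what yields the Pearson χ² divergence connection shown in the paper.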