• Corpus ID: 27174168

@article{Li2018DistributionalAN,
  title={Distributional Adversarial Networks},
  author={Chengtao Li and David Alvarez-Melis and Keyulu Xu and Stefanie Jegelka and Suvrit Sra},
  journal={ArXiv},
  year={2018},
  volume={abs/1706.09549}
}
• Published 29 June 2017
• Computer Science, Mathematics
• ArXiv
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination. **Key result:** applying our framework to domain adaptation also yields considerable improvement over the recent state of the art.
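The abstract's core idea — discriminating between whole samples (sets of points) rather than individual points — can be illustrated with a minimal numpy sketch. This is not the paper's architecture; the encoder, pooling choice, and all names here are illustrative. Mean-pooling per-point features before scoring makes the score depend on the empirical distribution of the batch, not on any single point:

```python
import numpy as np

def point_features(x, W):
    """Per-point feature map phi(x) = tanh(x W) (a stand-in encoder)."""
    return np.tanh(x @ W)

def sample_logit(batch, W, v):
    """Score a whole sample: pool per-point features, then project.

    Mean-pooling makes the logit permutation-invariant, i.e. a function
    of the batch's empirical distribution rather than of any one point.
    """
    pooled = point_features(batch, W).mean(axis=0)  # aggregate over the sample
    return float(pooled @ v)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))
v = rng.normal(size=8)

real = rng.normal(loc=0.0, size=(64, 2))  # sample from the data distribution
fake = rng.normal(loc=3.0, size=(64, 2))  # sample from a shifted "generator"

s_real, s_fake = sample_logit(real, W, v), sample_logit(fake, W, v)
```

Because the logit sees only the pooled statistics, shuffling the points in a batch leaves the score unchanged, which is exactly what distinguishes a sample-level discriminator from a point-level one.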

## Citations

Learning Generative Models across Incomparable Spaces
• Computer Science
ICML
• 2019
A key component of this model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely, enabling applications in manifold learning, relational learning, and cross-domain learning.
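"Relational rather than absolute" means the spaces are compared through their intra-space distance matrices, so point clouds living in different, incomparable spaces can still be matched. The following is a crude numpy illustration of that idea — not the actual Gromov-Wasserstein computation, which solves an optimal-transport problem over couplings; here we just compare the sorted entries of the two distance matrices:

```python
import numpy as np

def pairwise_dists(X):
    """Intra-space distance matrix: the 'relational' view of a point cloud."""
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def relational_gap(X, Y):
    """Crude relational discrepancy (illustration only, not true GW).

    Comparing sorted distance-matrix entries is invariant to rotations
    and translations of either space, and works even when X and Y have
    different ambient dimensions (same number of points assumed).
    """
    dX = np.sort(pairwise_dists(X), axis=None)
    dY = np.sort(pairwise_dists(Y), axis=None)
    return float(np.mean((dX - dY) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))              # points in R^2
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
Y = X @ R.T + 5.0                         # same cloud, rotated and shifted
Z = rng.normal(size=(30, 5))              # unrelated cloud in R^5
```

Note that `relational_gap(X, Y)` vanishes even though `X` and `Y` occupy different regions of the plane, and `relational_gap(X, Z)` is well defined even though the clouds live in spaces of different dimension.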
• Computer Science
ArXiv
• 2018
This work proposes a new generator objective that better tackles mode collapse, applies an independent autoencoder to constrain the generator, and treats the autoencoder's reconstructed samples as "real" samples to slow the convergence of the discriminator, which reduces the gradient-vanishing problem and stabilizes the model.
Large Scale Many-Objective Optimization Driven by Distributional Adversarial Networks
• Computer Science
ArXiv
• 2020
This paper proposes a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring, and adopts a new two-stage strategy for updating positions, significantly increasing the efficiency of the search for optimal solutions in a huge decision space.
Re-purposing heterogeneous generative ensembles with evolutionary computation
• Computer Science
GECCO
• 2020
Two evolutionary algorithms are applied to create ensembles that re-purpose generative models: given a set of heterogeneous generators optimized for one objective, create an ensemble of them that optimizes a different objective.
• Computer Science
GECCO
• 2019
A superior evolutionary GAN training method, Mustangs, is contributed; it eliminates the single loss function used across Lipizzaner's grid and combines mutation and population approaches to improve diversity.
Support Matching: A Novel Regularization to Escape from Mode Collapse in GANs
• Computer Science
ICONIP
• 2019
Support Regularized-GAN (SR-GAN) is proposed to address the mode collapse issue of generative adversarial network by matching the support of the generated data distribution with that of the real data distribution.
Lipizzaner: A System That Scales Robust Generative Adversarial Network Training
• Computer Science
ArXiv
• 2018
Lipizzaner is introduced: an open-source software system that lets machine learning engineers train GANs in a distributed and robust way by distributing a competitive coevolutionary algorithm that is robust to collapses.
• Computer Science
ArXiv
• 2022
This paper conducts the first characterization study of the impact of free-riders on Multi-Discriminator (MD)-GAN, and proposes a defense strategy, termed DFG, that effectively defends against free-riders without affecting benign clients, at negligible computation overhead.
Dist-GAN: An Improved GAN Using Distance Constraints
• Computer Science
ECCV
• 2018
This system constrains the generator with an autoencoder, treating the reconstructed samples from the AE as "real" samples for the discriminator, which effectively slows the discriminator's convergence and reduces gradient vanishing.
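The labeling trick described above can be sketched numerically. This is a toy illustration under assumed discriminator outputs, not Dist-GAN's actual training code: labeling autoencoder reconstructions as "real" adds a term the discriminator must also fit, which raises its loss relative to the standard real-vs-fake objective and so slows its progress against the generator:

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy over discriminator probabilities p and labels y."""
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Illustrative discriminator outputs for: a real x, an AE reconstruction
# AE(x), and a generated G(z).
p_real, p_recon, p_fake = 0.9, 0.7, 0.2

# Standard GAN discriminator loss: real vs. fake only.
d_std = bce(np.array([p_real, p_fake]), np.array([1.0, 0.0]))

# Dist-GAN-style: reconstructions are also labeled "real", so D must
# additionally push p_recon toward 1, slowing its convergence.
d_dist = bce(np.array([p_real, p_recon, p_fake]), np.array([1.0, 1.0, 0.0]))
```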
TripletGAN: Training Generative Model with Triplet Loss
• Computer Science
ArXiv
• 2017
A new adversarial modeling method is proposed that substitutes triplet loss for the discriminator's classification loss, and a theoretical proof demonstrates that this setting helps the generator converge to the given distribution under some conditions.
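A standard triplet loss in a discriminator-style embedding space can be sketched as follows. This is a generic formulation, not necessarily the paper's exact variant; the stand-in embedding and all names are illustrative. Real samples serve as anchor and positive, generated samples as negative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, ||a - p||^2 - ||a - n||^2 + margin), averaged over the batch."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

rng = np.random.default_rng(2)
embed = lambda x: np.tanh(x)  # stand-in discriminator embedding

real_a = embed(rng.normal(size=(16, 4)))           # anchor: real samples
real_p = embed(rng.normal(size=(16, 4)))           # positive: other real samples
fake_n = embed(rng.normal(loc=4.0, size=(16, 4)))  # negative: generated samples

loss = triplet_loss(real_a, real_p, fake_n)
```

Minimizing this pulls real samples together and pushes generated samples away in embedding space; the generator, adversarially, tries to make that separation impossible.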

## References

Showing 1-10 of 43 references.
• Computer Science
ICLR
• 2017
This work introduces several ways of regularizing the objective that can dramatically stabilize the training of GAN models, and shows that these regularizers help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing-modes problem.
• Computer Science
ICLR
• 2017
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. This paper proposes the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators.
• Computer Science
ICLR
• 2017
This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator, and shows how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
• Computer Science
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
• 2017
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Towards Principled Methods for Training Generative Adversarial Networks
• Computer Science
ICLR
• 2017
This paper aims to take theoretical steps toward fully understanding the training dynamics of generative adversarial networks, and performs targeted experiments to substantiate the theoretical analysis, verify assumptions, illustrate claims, and quantify the phenomena.
Improved Training of Wasserstein GANs
• Computer Science
NIPS
• 2017
This work proposes an alternative to clipping weights: penalizing the norm of the critic's gradient with respect to its input. This performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
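The gradient penalty can be shown in closed form for a linear critic, where no autograd is needed. This is a pedagogical sketch under that assumption, not the paper's implementation: for f(x) = w·x the input gradient is w at every point, so the penalty, evaluated at random interpolates between real and fake data as the method prescribes, reduces to λ(‖w‖ − 1)²:

```python
import numpy as np

def gradient_penalty(w, real, fake, lam=10.0, rng=None):
    """WGAN-GP penalty for a linear critic f(x) = w @ x.

    Sample points on lines between real and fake data; for this critic
    the gradient w.r.t. the input is w everywhere, so the penalty is
    lam * (||w|| - 1)^2 at every interpolate.
    """
    rng = rng or np.random.default_rng(0)
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1 - eps) * fake   # interpolates between real and fake
    grad = np.broadcast_to(w, x_hat.shape)  # linear critic: gradient is w at x_hat
    norms = np.linalg.norm(grad, axis=1)
    return float(lam * np.mean((norms - 1.0) ** 2))

real = np.zeros((8, 3))
fake = np.ones((8, 3))
w_unit = np.array([1.0, 0.0, 0.0])  # ||w|| = 1: satisfies the 1-Lipschitz target
w_big  = np.array([3.0, 0.0, 0.0])  # ||w|| = 3: penalized
```

With the default λ = 10, the unit-norm critic incurs zero penalty while the steep one pays 10·(3 − 1)² = 40, which is the soft constraint that replaces weight clipping.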
• Computer Science
NIPS
• 2014
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
• Computer Science
ICLR
• 2017
This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples.
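The maximum mean discrepancy underlying this approach has a simple kernel form; a minimal numpy sketch of the (biased) squared-MMD estimator with a Gaussian kernel is below. The kernel choice and bandwidth are assumptions for illustration — the paper's contribution is precisely about optimizing such kernels, which this sketch does not do:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased squared MMD: mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y)."""
    return float(gaussian_kernel(X, X, sigma).mean()
                 + gaussian_kernel(Y, Y, sigma).mean()
                 - 2 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(3)
data = rng.normal(size=(100, 2))
close = rng.normal(size=(100, 2))         # drawn from the same distribution
far = rng.normal(loc=3.0, size=(100, 2))  # a shifted "generated" distribution
```

In the GAN setting described above, the generator would minimize `mmd2(data, generated)` while the kernel (the "discriminator") is optimized to maximize the test power of the resulting two-sample statistic.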