Corpus ID: 27174168

Distributional Adversarial Networks

@article{Li2018DistributionalAN,
  title={Distributional Adversarial Networks},
  author={Chengtao Li and David Alvarez-Melis and Keyulu Xu and Stefanie Jegelka and Suvrit Sra},
  journal={ArXiv},
  year={2018},
  volume={abs/1706.09549}
}
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination. The application of our framework to domain adaptation also results in considerable improvement over the recent state of the art.
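The core idea lends itself to a short sketch: rather than scoring each point independently, the discriminator pools features over an entire sample (a mini-batch of points) and emits one verdict for the whole set. The mean pooling and layer widths below are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class SampleDiscriminator(nn.Module):
    """Discriminates between whole samples (sets of points), not single points.

    Minimal sketch of the distributional-adversary idea: per-point features
    are mean-pooled into one sample-level representation before classification.
    The mean pooling and layer widths are illustrative assumptions."""
    def __init__(self, dim_in, dim_hidden=128):
        super().__init__()
        self.point_encoder = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, dim_hidden), nn.ReLU(),
        )
        self.sample_classifier = nn.Linear(dim_hidden, 1)

    def forward(self, x):                 # x: (n_points, dim_in), one sample
        h = self.point_encoder(x)         # per-point features
        h = h.mean(dim=0)                 # aggregate over the whole sample
        return self.sample_classifier(h)  # one logit for the whole sample

# Usage: the discriminator sees a batch of points as ONE unit, so a
# mode-collapsed generator (all points alike) is easy to tell apart.
disc = SampleDiscriminator(dim_in=2)
real_sample = torch.randn(64, 2)
fake_sample = torch.randn(64, 2)
loss = (
    nn.functional.binary_cross_entropy_with_logits(disc(real_sample), torch.ones(1))
    + nn.functional.binary_cross_entropy_with_logits(disc(fake_sample), torch.zeros(1))
)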

Citations

Learning Generative Models across Incomparable Spaces
TLDR
A key component of this model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely, which allows application to tasks in manifold learning, relational learning, and cross-domain learning.
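The relational comparison can be illustrated with the POT optimal-transport library (an assumed dependency, installed as pot): each space is summarized only by its own pairwise-distance matrix, so the two samples never need to live in comparable coordinates.

import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

# Two samples from incomparable spaces: 2-D points vs 5-D points.
x = np.random.randn(30, 2)
y = np.random.randn(40, 5)

# Each space is summarized by its own intra-space distance matrix ...
C1 = ot.dist(x, x)
C2 = ot.dist(y, y)
C1 /= C1.max()
C2 /= C2.max()

# ... and Gromov-Wasserstein compares the two relational structures.
p = ot.unif(len(x))
q = ot.unif(len(y))
gw = ot.gromov.gromov_wasserstein2(C1, C2, p, q, 'square_loss')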
Generative Adversarial Autoencoder Networks
TLDR
This work proposes a new generator objective that better tackles mode collapse and applies an independent autoencoder to constrain the generator, treating its reconstructed samples as "real" samples to slow down the convergence of the discriminator, which reduces the gradient vanishing problem and stabilizes the model.
Large Scale Many-Objective Optimization Driven by Distributional Adversarial Networks
TLDR
This paper proposes a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring and adopts a new two-stage strategy to update positions, significantly increasing the search efficiency in finding optimal solutions in a huge decision space.
Re-purposing heterogeneous generative ensembles with evolutionary computation
TLDR
Two evolutionary algorithms are applied to create ensembles that re-purpose generative models, i.e., given a set of heterogeneous generators that were optimized for one objective, create an ensemble of them that optimizes a different objective.
Spatial evolutionary generative adversarial networks
TLDR
A superior evolutionary GAN training method, Mustangs, is contributed; it eliminates the single loss function used across Lipizzaner's grid and combines mutation and population approaches to improve diversity.
Support Matching: A Novel Regularization to Escape from Mode Collapse in GANs
TLDR
Support Regularized-GAN (SR-GAN) is proposed to address the mode collapse issue of generative adversarial networks by matching the support of the generated data distribution with that of the real data distribution.
Lipizzaner: A System That Scales Robust Generative Adversarial Network Training
TLDR
Lipizzaner is introduced, an open-source software system that allows machine learning engineers to train GANs in a distributed and robust way by distributing a competitive coevolutionary algorithm that is robust to collapses.
Attacks and Defenses for Free-Riders in Multi-Discriminator GAN
TLDR
This paper conducts the first characterization study of the impact of free-riders on Multi-Discriminator (MD)-GAN and proposes a defense strategy, termed DFG, that effectively defends against free-riders without affecting benign clients, at negligible computation overhead.
Dist-GAN: An Improved GAN Using Distance Constraints
TLDR
This system constrains the generator with an autoencoder, treating the samples reconstructed by the AE as "real" samples for the discriminator, which effectively slows down the convergence of the discriminator and reduces gradient vanishing.
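The trick is easy to state in code. A minimal sketch, assuming disc returns logits and ae is an autoencoder trained on real data (both hypothetical stand-ins, not the paper's exact training procedure):

import torch
import torch.nn.functional as F

def discriminator_loss(disc, ae, real, fake):
    """Sketch of the AE constraint: reconstructions of real samples are also
    labeled 'real', diluting the discriminator's signal and slowing its
    convergence. Assumes disc returns one logit per example, shape (B, 1)."""
    recon = ae(real).detach()          # reconstructed "real" samples
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(fake.size(0), 1)
    loss_real = F.binary_cross_entropy_with_logits(disc(real), ones)
    loss_recon = F.binary_cross_entropy_with_logits(disc(recon), ones)
    loss_fake = F.binary_cross_entropy_with_logits(disc(fake), zeros)
    return loss_real + loss_recon + loss_fake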
TripletGAN: Training Generative Model with Triplet Loss
TLDR
A new adversarial modeling method is proposed that substitutes the discriminator's classification loss with a triplet loss, and a theoretical proof demonstrates that this setting helps the generator converge to the given distribution under some conditions.
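A minimal sketch of the substitution, assuming embed is the discriminator's feature network and using PyTorch's stock triplet margin loss; pairing two real batches as anchor/positive is an illustrative choice, not necessarily the paper's exact scheme:

import torch.nn.functional as F

def triplet_disc_loss(embed, real_a, real_p, fake, margin=1.0):
    """Discriminator objective with the classification loss replaced by a
    triplet loss: two real batches form the anchor/positive pair and the
    generated batch is the negative. embed and margin are assumptions."""
    a, p, n = embed(real_a), embed(real_p), embed(fake)
    return F.triplet_margin_loss(a, p, n, margin=margin)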
...

References

Showing 1-10 of 43 references
Mode Regularized Generative Adversarial Networks
TLDR
This work introduces several ways of regularizing the objective, which can dramatically stabilize the training of GAN models, and shows that these regularizers can help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators.
Unrolled Generative Adversarial Networks
TLDR
This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator, and shows how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
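The unrolling can be sketched as follows; this simplified version advances a copy of the discriminator by k look-ahead steps but, unlike the full method, does not backpropagate through those inner updates:

import copy
import torch
import torch.nn.functional as F

def unrolled_generator_loss(gen, disc, real, noise, k=5, lr=1e-3):
    """Simplified sketch of unrolling: the generator is scored against a
    *copy* of the discriminator advanced by k gradient steps. The full
    method also differentiates through these inner steps; this version
    omits that second-order term for brevity. Assumes disc outputs (B, 1)."""
    disc_k = copy.deepcopy(disc)
    opt = torch.optim.SGD(disc_k.parameters(), lr=lr)
    for _ in range(k):                        # look-ahead discriminator updates
        fake = gen(noise).detach()
        d_loss = (
            F.binary_cross_entropy_with_logits(disc_k(real), torch.ones(real.size(0), 1))
            + F.binary_cross_entropy_with_logits(disc_k(fake), torch.zeros(fake.size(0), 1))
        )
        opt.zero_grad()
        d_loss.backward()
        opt.step()
    # Generator loss evaluated against the unrolled discriminator.
    return F.binary_cross_entropy_with_logits(
        disc_k(gen(noise)), torch.ones(noise.size(0), 1))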
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Towards Principled Methods for Training Generative Adversarial Networks
TLDR
The goal of this paper is to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks, and performs targeted experiments to substantiate the theoretical analysis and verify assumptions, illustrate claims, and quantify the phenomena.
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
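The penalty itself fits in a few lines. A sketch for flat (batch, features) data; the coefficient lam = 10 follows the paper, while the surrounding training loop is omitted:

import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 (the
    1-Lipschitz constraint) at points interpolated between real and
    generated samples. Assumes flat (batch, features) inputs."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(x_hat).sum(), x_hat,
                                create_graph=True)[0]
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()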
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
TLDR
This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GANs), in which a model attempts to generate realistic samples and a discriminator attempts to tell these apart from data samples.
Domain-Adversarial Training of Neural Networks
TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; it can be implemented in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
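The gradient reversal layer is a one-trick autograd function: identity on the forward pass, negated (scaled) gradient on the backward pass, so the feature extractor learns to confuse the domain classifier. A minimal PyTorch sketch; lam is the schedule-controlled scaling factor from the paper:

import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, -lam * gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Usage inside a model (domain_head and features are hypothetical names):
# domain_logits = domain_head(GradReverse.apply(features, 1.0))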
Training generative neural networks via Maximum Mean Discrepancy optimization
TLDR
This work considers training a deep neural network to generate samples from an unknown distribution given i.i.d. data, framing learning as an optimization problem that minimizes a two-sample test statistic, and proves bounds on the generalization error incurred by optimizing the empirical MMD.
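The statistic in question is straightforward to compute. A sketch of the (biased) squared-MMD estimate with a fixed RBF kernel; the bandwidth sigma is an assumed constant here, whereas the optimized-MMD work above learns the kernel:

import torch

def mmd2_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel: the two-sample
    test statistic minimized when training the generator. sigma is an
    assumed fixed bandwidth."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()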
...