• Corpus ID: 207757623

Kernel-Guided Training of Implicit Generative Models with Stability Guarantees

@article{Mehrjou2019KernelGuidedTO,
  title={Kernel-Guided Training of Implicit Generative Models with Stability Guarantees},
  author={Arash Mehrjou and Wittawat Jitkrittum and Krikamol Muandet and Bernhard Scholkopf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.14428}
}
Modern implicit generative models such as generative adversarial networks (GANs) are generally known to suffer from instability, uninterpretability, and difficulty in assessing their performance. If we view these implicit models as dynamical systems, some of these issues stem from the inability to control their behavior in a meaningful way during the course of training. In this work, we propose a theoretically grounded method to guide the training trajectories of GANs by… 
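The kernel guidance referenced in the abstract is presumably based on the maximum mean discrepancy (MMD), given the kernel-based references listed below. As a rough illustration only, the following is a minimal sketch of an unbiased MMD² estimator with an RBF kernel; the function names and the fixed bandwidth are illustrative assumptions, not details taken from the paper.

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 * bandwidth^2)) for all pairs.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    # Unbiased estimate of MMD^2 between samples x ~ P and y ~ Q.
    m, n = x.shape[0], y.shape[0]
    k_xx = rbf_kernel(x, x, bandwidth)
    k_yy = rbf_kernel(y, y, bandwidth)
    k_xy = rbf_kernel(x, y, bandwidth)
    # Exclude diagonal terms so the within-sample sums are unbiased.
    term_xx = (k_xx.sum() - k_xx.diagonal().sum()) / (m * (m - 1))
    term_yy = (k_yy.sum() - k_yy.diagonal().sum()) / (n * (n - 1))
    return term_xx + term_yy - 2 * k_xy.mean()
```

A small MMD² between generated and real samples indicates the two sample sets are hard to distinguish under the chosen kernel, which is what makes it usable as a training signal or stopping criterion.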
Climate Adaptation: Reliably Predicting from Imbalanced Satellite Data
  • Ruchit Rawal, Prabhu Pradhan
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2020
TLDR
An overview of techniques for handling such extreme settings is presented, together with solutions aimed at maximizing performance on minority classes using a diverse set of methods that, in combination, generalize across all minority classes.
Neural Lyapunov Redesign
TLDR
A two-player collaborative algorithm is proposed that alternates between estimating a Lyapunov function and deriving a controller that gradually enlarges the stability region of the closed-loop system to obtain control policies with large safe regions.
Automatic Policy Synthesis to Improve the Safety of Nonlinear Dynamical Systems
TLDR
A two-player collaborative algorithm is proposed that alternates between estimating a Lyapunov function and deriving a controller that gradually enlarges the stability region of the closed-loop system to obtain control policies with large safe regions.
Exact and Relaxed Convex Formulations for Shallow Neural Autoregressive Models
TLDR
An exact equivalence is proved between autoregressive neural models with one hidden layer and constrained, regularized logistic regression by using semi-infinite duality to embed the data matrix onto a higher dimensional space and introducing inequality constraints.

References

SHOWING 1-10 OF 40 REFERENCES
Gradient descent GAN optimization is locally stable
TLDR
This paper analyzes the "gradient descent" form of GAN optimization, i.e., the natural setting where one simultaneously takes small gradient steps in both generator and discriminator parameters, and proposes an additional regularization term for gradient descent GAN updates that is able to guarantee local stability for both the WGAN and the traditional GAN.
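For reference, the "gradient descent" form of GAN optimization analyzed above means both players take gradient steps from the same current point rather than alternating. Below is a minimal sketch, under the assumption that the discriminator outputs probabilities in (0, 1); it shows only the baseline simultaneous updates, not the additional regularization term proposed in that paper.

```python
import torch

def simultaneous_gan_step(generator, discriminator, real, noise, lr=1e-4):
    # One simultaneous step on the minimax GAN objective.
    fake = generator(noise)
    d_real = discriminator(real)
    d_fake = discriminator(fake)

    # Discriminator ascends log D(x) + log(1 - D(G(z))); we descend its negation.
    d_loss = -(torch.log(d_real).mean() + torch.log(1.0 - d_fake).mean())
    # Generator descends log(1 - D(G(z))) (the original minimax form).
    g_loss = torch.log(1.0 - d_fake).mean()

    d_params = list(discriminator.parameters())
    g_params = list(generator.parameters())
    d_grads = torch.autograd.grad(d_loss, d_params, retain_graph=True)
    g_grads = torch.autograd.grad(g_loss, g_params)

    # Apply both updates at once, so neither player sees the other's new parameters.
    with torch.no_grad():
        for p, g in zip(d_params, d_grads):
            p -= lr * g
        for p, g in zip(g_params, g_grads):
            p -= lr * g
    return d_loss.item(), g_loss.item()
```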
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
TLDR
It is shown that any f-divergence can be used for training generative neural samplers and the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models are discussed.
On gradient regularizers for MMD GANs
TLDR
It is shown that controlling the gradient of the critic is vital to having a sensible loss function, and a method is devised to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers.
Towards Principled Methods for Training Generative Adversarial Networks
TLDR
This paper makes theoretical steps towards fully understanding the training dynamics of generative adversarial networks and performs targeted experiments to substantiate the theoretical analysis, verify assumptions, illustrate claims, and quantify the phenomena.
Nonstationary GANs: Analysis as Nonautonomous Dynamical Systems
TLDR
This paper unifies the proposed methods for stabilizing training of GANs under the name nonautonomous GAN and investigates their dynamical behaviour when the data distribution is not stationary.
Generative Moment Matching Networks
TLDR
This work presents a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using MMD to learn to generate codes that can then be decoded to produce samples.
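As a concrete illustration of the above, a generative moment matching network can be trained by pushing noise through the generator once and minimizing an MMD estimate against real samples, with no discriminator involved. The sketch below is self-contained (using a compact, biased variant of the MMD estimator sketched after the abstract); all names and the bandwidth are illustrative.

```python
import torch

def mmd2_biased(x, y, bandwidth=1.0):
    # Biased MMD^2 estimate with an RBF kernel.
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def gmmn_step(generator, real, noise, optimizer, bandwidth=1.0):
    # One training step: a single feedforward pass through the generator,
    # then minimize MMD between generated and real samples directly.
    fake = generator(noise)
    loss = mmd2_biased(fake, real, bandwidth)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```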
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
TLDR
This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples.
Tempered Adversarial Networks
TLDR
A simple modification is proposed that gives the generator control over the real samples, which leads to a tempered learning process for both generator and discriminator and can improve quality, stability, and/or convergence speed across a range of different GAN architectures.
Stabilizing GAN Training with Multiple Random Projections
TLDR
This work proposes training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data, so that the generator must satisfy all discriminators simultaneously.
Training GANs with Optimism
TLDR
This work addresses the issue of limit cycling behavior in training Generative Adversarial Networks, proposes the use of Optimistic Mirror Descent (OMD) for training Wasserstein GANs, and introduces a new algorithm, Optimistic Adam, which is an optimistic variant of Adam.
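In the unconstrained setting, the optimistic update referred to above reduces to extrapolating with the previous gradient, theta_{t+1} = theta_t - lr * (2 * g_t - g_{t-1}), which damps the limit cycles that plain simultaneous gradient descent can exhibit. A minimal sketch with illustrative names:

```python
import torch

def optimistic_gradient_step(params, grads, prev_grads, lr=1e-4):
    # Optimistic (extrapolated) gradient step:
    #   theta_{t+1} = theta_t - lr * (2 * g_t - g_{t-1})
    # Applied to both generator and discriminator parameters in a GAN.
    with torch.no_grad():
        for p, g, g_prev in zip(params, grads, prev_grads):
            p -= lr * (2.0 * g - g_prev)
```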
...