Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR: We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning (sketched below).
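
As a rough illustration of the architectural constraints the paper describes (an all-convolutional generator built from fractionally-strided convolutions instead of pooling, batch normalization, no fully-connected hidden layers, ReLU activations with a Tanh output), here is a minimal PyTorch-style sketch. The latent size, channel widths, and 64x64 output are illustrative assumptions, not a reproduction of the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative DCGAN-style generator (hypothetical sizes): all-convolutional,
# fractionally-strided convolutions instead of pooling/upsampling, batch norm
# after each hidden layer, ReLU hidden activations, Tanh output.
latent_dim = 100  # assumed latent size

generator = nn.Sequential(
    # project a latent_dim x 1 x 1 code to a 4x4 feature map
    nn.ConvTranspose2d(latent_dim, 512, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(512),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1, bias=False),  # 8x8
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1, bias=False),  # 16x16
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1, bias=False),   # 32x32
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1, bias=False),     # 64x64 RGB
    nn.Tanh(),  # outputs in [-1, 1]
)

z = torch.randn(16, latent_dim, 1, 1)
fake_images = generator(z)  # shape: (16, 3, 64, 64)
```
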
BEGAN: Boundary Equilibrium Generative Adversarial Networks
TLDR: We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks (equilibrium update sketched below).
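
A minimal sketch of the equilibrium-enforcing idea, assuming a BEGAN-style setup in which the discriminator is an autoencoder and ae_loss_real / ae_loss_fake denote its reconstruction losses on real and generated samples. The variable names and hyperparameter values below are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of a BEGAN-style equilibrium term.
k = 0.0            # equilibrium control variable, initialized at 0
lambda_k = 0.001   # proportional gain for k (assumed value)
gamma = 0.5        # target ratio of fake to real reconstruction loss (assumed)

def began_losses(ae_loss_real, ae_loss_fake, k):
    # The discriminator reconstructs real samples well while (weighted by k)
    # reconstructing generated samples poorly; the generator does the opposite.
    d_loss = ae_loss_real - k * ae_loss_fake
    g_loss = ae_loss_fake
    return d_loss, g_loss

def update_k(k, ae_loss_real, ae_loss_fake):
    # Proportional control: k grows when the generator is "too easy" for the
    # discriminator and shrinks otherwise, keeping the two in balance.
    k = k + lambda_k * (gamma * ae_loss_real - ae_loss_fake)
    return min(max(k, 0.0), 1.0)  # keep k in [0, 1]
```
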
Unrolled Generative Adversarial Networks
TLDR: We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator (sketched in equations below).
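
In equation form, a sketch of the unrolling idea: here $f$ denotes the usual GAN minimax objective, $\eta$ a discriminator step size, and $K$ the number of unrolled steps; the notation is mine and may differ from the paper's.

```latex
% K steps of simulated discriminator gradient ascent, starting from the
% current parameters:
\theta_D^{0} = \theta_D, \qquad
\theta_D^{k+1} = \theta_D^{k} + \eta \,
    \frac{\partial f(\theta_G, \theta_D^{k})}{\partial \theta_D^{k}}

% The generator is trained against the unrolled discriminator
% (differentiating through the K update steps), while the discriminator
% keeps its usual update:
f_K(\theta_G, \theta_D) = f\big(\theta_G, \theta_D^{K}(\theta_G, \theta_D)\big),
\qquad
\theta_G \leftarrow \theta_G - \alpha \, \frac{d f_K(\theta_G, \theta_D)}{d \theta_G}
```
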
Adversarial Spheres
TLDR: We prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size $O(1/\sqrt{d})$ (scale illustrated below).
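
For a rough sense of scale (my own illustrative arithmetic, with constants ignored):

```latex
% For a unit sphere in d = 4096 dimensions, a perturbation of size
O\!\left(\tfrac{1}{\sqrt{d}}\right) \;\sim\; \tfrac{1}{\sqrt{4096}}
  \;=\; \tfrac{1}{64} \;\approx\; 0.016
% of the sphere's radius already falls within the regime the theorem covers.
```
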
Discrete Sequential Prediction of Continuous Actions for Deep RL
TLDR: In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces (discretization sketched below).
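
A minimal sketch of the discretization step this relies on, with assumed bin counts and action bounds; the sequential, per-dimension prediction itself would be done by a learned autoregressive model, which is elided here.

```python
import numpy as np

# Hypothetical example: a 3-dimensional continuous action space, each dimension
# discretized into a small number of bins so an action can be predicted one
# dimension at a time, like a short sequence of discrete tokens.
low = np.array([-1.0, -1.0, 0.0])    # assumed per-dimension lower bounds
high = np.array([1.0, 1.0, 5.0])     # assumed per-dimension upper bounds
num_bins = 11                        # assumed resolution per dimension

bin_centers = [np.linspace(l, h, num_bins) for l, h in zip(low, high)]

def discretize(action):
    """Map a continuous action to one bin index per dimension."""
    return [int(np.argmin(np.abs(centers - a)))
            for centers, a in zip(bin_centers, action)]

def undiscretize(indices):
    """Map per-dimension bin indices back to a continuous action."""
    return np.array([centers[i] for centers, i in zip(bin_centers, indices)])

tokens = discretize(np.array([0.13, -0.72, 3.4]))   # e.g. [6, 1, 7]
action = undiscretize(tokens)                        # nearest bin centers
```
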
Learning Unsupervised Learning Rules
TLDR: A major goal of unsupervised learning is to discover data representations that are useful for later tasks, without access to supervised labels during training.
Guided evolutionary strategies: augmenting random search with surrogate gradients
TLDR: We propose Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search (sketched below).
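
A rough numpy sketch of the idea as I read it: draw search perturbations from a Gaussian whose covariance mixes an isotropic component with the low-dimensional subspace spanned by the surrogate gradients, then form an antithetic evolution-strategies gradient estimate. The function name, mixing weight alpha, and scaling constants are simplifying assumptions, not the reference implementation.

```python
import numpy as np

def guided_es_grad(f, x, surrogate_grads, sigma=0.1, alpha=0.5,
                   num_pairs=10, rng=None):
    """Sketch of a Guided-ES-style gradient estimate (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    # Orthonormal basis U of the surrogate-gradient subspace (k columns).
    U, _ = np.linalg.qr(np.stack(surrogate_grads, axis=1))
    k = U.shape[1]

    grad = np.zeros(n)
    for _ in range(num_pairs):
        # Perturbation: isotropic part (weight alpha) plus a component in the
        # surrogate-gradient subspace (weight 1 - alpha).
        eps = sigma * (np.sqrt(alpha / n) * rng.standard_normal(n)
                       + np.sqrt((1 - alpha) / k) * U @ rng.standard_normal(k))
        # Antithetic finite-difference estimate.
        grad += eps * (f(x + eps) - f(x - eps))
    return grad / (2 * sigma**2 * num_pairs)
```
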
Meta-Learning Update Rules for Unsupervised Representation Learning
TLDR: A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training.
Towards GAN Benchmarks Which Require Generalization
TLDR: We propose an evaluation metric based on neural network divergences to measure diversity, sample quality, and generalization in generative modeling.
Guided evolutionary strategies: escaping the curse of dimensionality in random search
TLDR: We propose Guided Evolutionary Strategies, a method for optimally using surrogate gradient information (directions that may be correlated with, but not necessarily identical to, the true gradient) along with random search.