• Corpus ID: 195820291

Large Scale Adversarial Representation Learning

@inproceedings{Donahue2019LargeSA,
  title={Large Scale Adversarial Representation Learning},
  author={Jeff Donahue and Karen Simonyan},
  booktitle={NeurIPS},
  year={2019}
}
Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. [...] Our approach, BigBiGAN, builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator. We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on…
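The joint-discriminator idea behind BiGAN/BigBiGAN can be summarized with a small sketch. The following is a minimal, illustrative PyTorch example (not the authors' code, and omitting BigBiGAN's additional unary image-only and latent-only discriminator terms): an encoder E maps images to latents, and the discriminator scores (image, latent) pairs so that (x, E(x)) and (G(z), z) become indistinguishable. All layer sizes are toy assumptions.

```python
# Minimal sketch of a BiGAN/BigBiGAN-style joint objective (illustrative, not the paper's code).
# The discriminator scores (image, latent) pairs; the generator G and encoder E are trained
# so that (G(z), z) and (x, E(x)) become indistinguishable. Shapes and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_dim = 64, 784  # assumed toy sizes (e.g. flattened 28x28 images)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim))
E = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
D = nn.Sequential(nn.Linear(image_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))

def discriminator_loss(x):
    z = torch.randn(x.size(0), latent_dim)
    real_pair = torch.cat([x, E(x)], dim=1)   # (x, E(x)) should score "real"
    fake_pair = torch.cat([G(z), z], dim=1)   # (G(z), z) should score "fake"
    return (F.binary_cross_entropy_with_logits(D(real_pair), torch.ones(x.size(0), 1))
          + F.binary_cross_entropy_with_logits(D(fake_pair), torch.zeros(x.size(0), 1)))

def generator_encoder_loss(x):
    # Non-saturating loss: G and E jointly try to flip the discriminator's decisions.
    z = torch.randn(x.size(0), latent_dim)
    real_pair = torch.cat([x, E(x)], dim=1)
    fake_pair = torch.cat([G(z), z], dim=1)
    return (F.binary_cross_entropy_with_logits(D(fake_pair), torch.ones(x.size(0), 1))
          + F.binary_cross_entropy_with_logits(D(real_pair), torch.zeros(x.size(0), 1)))

x = torch.rand(8, image_dim)  # dummy batch
print(discriminator_loss(x).item(), generator_encoder_loss(x).item())
```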
GALI: Generalized Adversarially Learned Inference.
TLDR
This work designs a non-saturating maximization objective for the generator-encoder pair and proves that the resulting adversarial game corresponds to a global optimum that simultaneously matches all the distributions.
InvGAN: Invertible GANs
TLDR
This work proposes a general framework that is agnostic to architecture and datasets and successfully embeds real images to the latent space of a high quality generative model, allowing it to perform image inpainting, merging, interpolation and online data augmentation.
GMM-Based Generative Adversarial Encoder Learning
TLDR
This paper presents a simple architectural setup that combines the generative capabilities of a GAN with an encoder by sharing weights between the encoder and the discriminator, then training them simultaneously using a new loss term.
Self-supervised Pre-training with Hard Examples Improves Visual Representations
TLDR
This paper proposes new data augmentation methods of generating training examples whose pseudo-labels are harder to predict than those generated via random image transformations, and proves that hard examples are instrumental in improving the generalization of the pre-trained models.
Adaptable GAN Encoders for Image Reconstruction via Multi-type Latent Vectors with Two-scale Attentions
TLDR
The designed encoders have unified convolutional blocks and can be matched to current GAN architectures (such as PGGAN, StyleGANs, and BigGAN) by fine-tuning the corresponding normalization layers and the last block.
Data-Efficient Instance Generation from Instance Discrimination
TLDR
This work proposes a data-efficient Instance Generation (InsGen) method based on instance discrimination that outperforms the state-of-the-art approach with 23.5% FID improvement on the setting of 2K training images from the FFHQ dataset.
Guided Generative Adversarial Neural Network for Representation Learning and Audio Generation Using Fewer Labelled Audio Data
TLDR
This paper proposes a novel GAN-based model that is named Guided Generative Adversarial Neural Network (GGAN), which can learn powerful representations and generate good-quality samples using a small amount of labelled data as guidance.
Effect of Input Noise Dimension in GANs
TLDR
It is shown that the right dimension of input noise for optimal results depends on the dataset and architecture used, and that further theoretical analysis is needed to understand the relationship between the low-dimensional latent distribution and the generated images.
Contrastive Self-supervised Representation Learning Using Synthetic Data
TLDR
A contrastive self-supervised framework for learning generalizable representations on the synthetic data that can be obtained easily with complete controllability and achieves state-of-the-art performance on several visual recognition datasets is introduced.
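For readers unfamiliar with the contrastive objectives used in frameworks of this kind, the sketch below shows a standard NT-Xent loss over two augmented views; this is a common choice and not necessarily the cited paper's exact formulation.

```python
# Illustrative NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss,
# as commonly used in contrastive self-supervised frameworks; not necessarily the
# exact loss of the cited paper. z1 and z2 are embeddings of two augmented views.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2n x d, unit-norm
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                     # exclude self-similarity
    # The positive of sample i is its other augmented view: index i+n (or i-n).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)       # dummy embeddings
print(nt_xent(z1, z2).item())
```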

References

SHOWING 1-10 OF 41 REFERENCES
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
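The truncation trick itself is simple to illustrate: at sampling time the generator's latent input is drawn from a truncated normal, trading variety for fidelity. A minimal sketch, assuming an arbitrary threshold and latent size:

```python
# Sketch of the "truncation trick": at sampling time, draw the generator's latent input
# from a truncated standard normal, trading sample variety for fidelity. The threshold
# value and latent size here are arbitrary choices for illustration.
import numpy as np
from scipy.stats import truncnorm

def truncated_z(batch_size, dim, threshold=0.5, seed=0):
    rng = np.random.RandomState(seed)
    # All sampled values lie within [-threshold, threshold].
    return truncnorm.rvs(-threshold, threshold, size=(batch_size, dim), random_state=rng)

z = truncated_z(batch_size=4, dim=128, threshold=0.5)
print(z.shape, np.abs(z).max())  # every entry lies within the truncation threshold
```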
High-Fidelity Image Generation With Fewer Labels
TLDR
This work demonstrates how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on unsupervised ImageNet synthesis, as well as in the conditional setting.
Self-Supervised GANs via Auxiliary Rotation Loss
TLDR
This work allows the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game, and takes a step towards bridging the gap between conditional and unconditional GANs.
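The auxiliary rotation task can be sketched as follows; the toy CNN below merely stands in for the discriminator's feature extractor, and all sizes are assumptions:

```python
# Illustrative auxiliary rotation task (as in self-supervised GANs with rotation loss):
# images are rotated by 0/90/180/270 degrees and a classifier head must predict the
# rotation. A toy CNN stands in for the discriminator's feature extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F

features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
rotation_head = nn.Linear(16, 4)  # predicts which of the 4 rotations was applied

def rotation_loss(images):
    rotated, labels = [], []
    for k in range(4):  # rotation by k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    rotated, labels = torch.cat(rotated), torch.cat(labels)
    return F.cross_entropy(rotation_head(features(rotated)), labels)

images = torch.rand(8, 3, 32, 32)  # dummy batch
print(rotation_loss(images).item())
```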
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic, presents ImageNet samples with unprecedented resolution, and shows that the methods enable the model to learn recognizable features of ImageNet classes.
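The semi-supervised idea from this line of work treats the discriminator as a (K+1)-way classifier, with classes 1..K for labelled real data and class K+1 for generated samples. A minimal sketch with toy feature sizes (not the paper's code):

```python
# Sketch of the semi-supervised GAN classifier: a (K+1)-way classifier where the extra
# output unit denotes "generated". Feature dimensions here are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, feat_dim = 10, 64
classifier = nn.Linear(feat_dim, num_classes + 1)   # last output unit = "generated" class

def supervised_loss(real_feats, labels):
    return F.cross_entropy(classifier(real_feats), labels)

def unsupervised_loss(real_feats, fake_feats):
    fake_class = torch.full((fake_feats.size(0),), num_classes, dtype=torch.long)
    # Real features should not be assigned to the fake class; fake features should be.
    p_fake_given_real = F.log_softmax(classifier(real_feats), dim=1)[:, num_classes].exp()
    return (-torch.log1p(-p_fake_given_real).mean()
            + F.cross_entropy(classifier(fake_feats), fake_class))

real, fake = torch.randn(8, feat_dim), torch.randn(8, feat_dim)
print(supervised_loss(real, torch.randint(0, num_classes, (8,))).item(),
      unsupervised_loss(real, fake).item())
```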
Variational Approaches for Auto-Encoding Generative Adversarial Networks
TLDR
This paper develops a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model, and describes a unified objective for optimization.
A Style-Based Generator Architecture for Generative Adversarial Networks
  • Tero Karras, S. Laine, Timo Aila · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
TLDR
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
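The style-based idea can be sketched compactly: a mapping network turns the latent z into an intermediate code w, which then modulates per-layer feature statistics (AdaIN-style scale and shift). The block below is an illustrative toy, not the paper's architecture:

```python
# Toy sketch of the style-based generator idea: a mapping network produces an intermediate
# latent w, which modulates per-layer feature statistics (AdaIN-style scale and shift).
# Layer sizes are illustrative only.
import torch
import torch.nn as nn

class StyleBlock(nn.Module):
    def __init__(self, channels, w_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_style = nn.Linear(w_dim, 2 * channels)  # per-channel scale and shift

    def forward(self, x, w):
        x = self.conv(x)
        # Instance-normalize, then apply style-dependent scale and shift (AdaIN).
        x = (x - x.mean(dim=(2, 3), keepdim=True)) / (x.std(dim=(2, 3), keepdim=True) + 1e-8)
        scale, shift = self.to_style(w).chunk(2, dim=1)
        return x * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

w_dim, channels = 128, 32
mapping = nn.Sequential(nn.Linear(w_dim, w_dim), nn.ReLU(), nn.Linear(w_dim, w_dim))
block = StyleBlock(channels, w_dim)

z = torch.randn(4, w_dim)
w = mapping(z)                         # intermediate latent code
x = torch.randn(4, channels, 16, 16)   # stand-in for upstream feature maps
print(block(x, w).shape)
```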
Adversarial Feature Learning
TLDR
Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Adversarial Autoencoders
TLDR
This paper shows how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization, and performed experiments on MNIST, Street View House Numbers and Toronto Face datasets.
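The adversarial autoencoder combines a reconstruction loss with a discriminator on the latent space that pushes encoder outputs toward a chosen prior. A minimal sketch with toy sizes and a Gaussian prior (an assumption made here for illustration):

```python
# Minimal sketch of the adversarial autoencoder idea: a standard reconstruction loss plus a
# discriminator on the latent space that pushes the encoder's codes toward a chosen prior
# (here a standard Gaussian). All sizes are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

input_dim, code_dim = 784, 8
enc = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, input_dim))
latent_disc = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def losses(x):
    code = enc(x)
    recon = F.mse_loss(dec(code), x)          # autoencoder reconstruction term
    prior = torch.randn_like(code)            # samples from the target prior
    d_loss = (F.binary_cross_entropy_with_logits(latent_disc(prior), torch.ones(x.size(0), 1))
            + F.binary_cross_entropy_with_logits(latent_disc(code.detach()), torch.zeros(x.size(0), 1)))
    # The encoder additionally tries to make its codes look like prior samples.
    e_loss = F.binary_cross_entropy_with_logits(latent_disc(code), torch.ones(x.size(0), 1))
    return recon, d_loss, e_loss

x = torch.rand(8, input_dim)
print([l.item() for l in losses(x)])
```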
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
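The DCGAN guidelines (strided/transposed convolutions instead of pooling, batch normalization, no fully connected hidden layers, ReLU activations in the generator with a Tanh output) can be illustrated with a toy generator; the channel counts and 32x32 output size below are arbitrary choices:

```python
# Toy generator following DCGAN-style architectural guidelines: transposed strided
# convolutions instead of pooling/upsampling, batch normalization, ReLU activations,
# and a Tanh output. Channel counts and the 32x32 output size are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, kernel_size=4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 4x4 -> 8x8
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),    # 8x8 -> 16x16
    nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),     # 16x16 -> 32x32
    nn.Tanh(),
)

z = torch.randn(4, 100, 1, 1)    # latent vectors reshaped to 1x1 spatial maps
print(generator(z).shape)        # torch.Size([4, 3, 32, 32])
```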
It Takes (Only) Two: Adversarial Generator-Encoder Networks
We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted…