Corpus ID: 84591

Adversarial Feature Learning

@article{Donahue2017AdversarialFL,
  title={Adversarial Feature Learning},
  author={Jeff Donahue and Philipp Kr{\"a}henb{\"u}hl and Trevor Darrell},
  journal={ArXiv},
  year={2017},
  volume={abs/1605.09782}
}
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems…
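As a rough illustration of the BiGAN-style idea sketched in this abstract, where an encoder is trained jointly with the generator so that E(x) can later serve as a feature representation, here is a minimal PyTorch sketch. It is not the authors' implementation: the MLP architectures, `data_dim`, `latent_dim`, and optimizer settings are arbitrary illustrative choices.

```python
# Minimal BiGAN-style sketch (illustrative; not the paper's exact architecture).
# Assumes PyTorch; data_dim, latent_dim, and the MLP sizes are arbitrary choices.
import torch
import torch.nn as nn

data_dim, latent_dim = 784, 50

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
# The discriminator sees a (data, latent) pair and must tell "real" pairs (x, E(x))
# from "generated" pairs (G(z), z).
D = nn.Sequential(nn.Linear(data_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    """One adversarial update on a batch x of shape (batch, data_dim), scaled to [-1, 1]."""
    batch = x.size(0)
    z = torch.randn(batch, latent_dim)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    loss_d = bce(D(torch.cat([x, E(x).detach()], dim=1)), ones) \
           + bce(D(torch.cat([G(z).detach(), z], dim=1)), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator/encoder update: fool the discriminator in both directions.
    loss_ge = bce(D(torch.cat([G(z), z], dim=1)), ones) \
            + bce(D(torch.cat([x, E(x)], dim=1)), zeros)
    opt_ge.zero_grad()
    loss_ge.backward()
    opt_ge.step()
    return loss_d.item(), loss_ge.item()

# After training, E(x) can be reused as a feature representation for auxiliary tasks.
```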
Citations

IVE-GAN: Invariant Encoding Generative Adversarial Networks
Proposes Invariant Encoding Generative Adversarial Networks (IVE-GANs), a novel GAN framework that introduces an encoding mapping for individual data samples by exploiting features in the data that are invariant to certain transformations.
Generative Adversarial Networks: recent developments
An overview of recent developments in GANs, a subclass of generative models able to learn representations in an unsupervised and semi-supervised fashion, is presented, with a focus on learning latent space representations.
Pseudo Conditional Regularization for Inverse Mapping of GANs
A novel adversarial learning method, Pseudo Conditional Bidirectional GAN (PC-BiGAN), is proposed for training the inverse mapping of GANs with a high degree of consistency and similarity awareness; it is guided by pseudo conditions defined by the proximity relationships among data points in a feature space learned without supervision.
Inverting the Generator of a Generative Adversarial Network
This paper introduces an inversion technique that projects data samples, specifically images, into the latent space of a pretrained GAN, and demonstrates how the proposed technique can be used to quantitatively compare GAN models trained on three image datasets (a minimal sketch of this latent-optimization idea follows the list below).
Adversarially Learned Inference
The adversarially learned inference (ALI) model is introduced, which jointly learns a generation network and an inference network using an adversarial process; the usefulness of the learned representations is confirmed by performance competitive with the state of the art on semi-supervised SVHN and CIFAR-10 tasks.
High Quality Bidirectional Generative Adversarial Networks
A new inference model is proposed that estimates the latent vector from the features of the GAN discriminator; it can generate high-quality samples on par with those of unidirectional GANs while also reconstructing the original data faithfully.
Conditional Autoencoders with Adversarial Information Factorization
It is shown that factorizing the latent space to separate the information needed for reconstruction (a continuous space) from the information needed for image attribute classification (a discrete space) enables editing specific attributes of an image.
Learning Inverse Mappings with Adversarial Criterion
We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G, which maps an arbitrary latent code distribution to a data distribution, and an encoder E that…
Sub-GAN: An Unsupervised Generative Model via Subspaces
A subspace-based generative adversarial network (Sub-GAN) is presented that simultaneously disentangles multiple latent subspaces and generates correspondingly diverse samples; it can discover meaningful visual attributes that are hard to annotate via strong supervision, e.g., the writing style of digits.
Inferencing based on unsupervised learning of disentangled representations
This work proposes a framework that combines an encoder and a generator to learn disentangled representations which encode meaningful information about the data distribution without the need for any labels.
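Several of the works above, such as "Inverting the Generator of a Generative Adversarial Network", recover a latent code for a given image by optimizing it against a pretrained, frozen generator. The sketch below illustrates that general latent-optimization idea only; it is not the procedure of any specific paper listed here, and the `generator`, `latent_dim`, and hyperparameter names are assumptions.

```python
# Illustrative sketch of GAN inversion by latent optimization (not the exact
# procedure of any specific paper above). `generator` is assumed to be a frozen,
# pretrained mapping z -> x; latent_dim and the hyperparameters are arbitrary.
import torch

def invert(generator, x_target, latent_dim=100, steps=500, lr=0.05):
    """Find a latent code z such that generator(z) approximates x_target."""
    for p in generator.parameters():
        p.requires_grad_(False)          # keep the pretrained generator fixed
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = torch.mean((generator(z) - x_target) ** 2)  # pixel-wise reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

# Hypothetical usage: z_rec = invert(pretrained_G, some_image.unsqueeze(0))
# Comparing the reconstruction error of generator(z_rec) across models gives a rough,
# quantitative way to compare how well different GANs cover the data.
```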

References

Showing 1-10 of 38 references
Adversarially Learned Inference
The adversarially learned inference (ALI) model is introduced, which jointly learns a generation network and an inference network using an adversarial process; the usefulness of the learned representations is confirmed by performance competitive with the state of the art on semi-supervised SVHN and CIFAR-10 tasks.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
A generative parametric model is presented that produces high-quality samples of natural images, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Context Encoders: Feature Learning by Inpainting
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
DeCAF, an open-source implementation of deep convolutional activation features, is released along with all associated network parameters so that vision researchers can experiment with deep representations across a range of visual concept learning paradigms.
Unsupervised Visual Representation Learning by Context Prediction
It is demonstrated that the feature representation learned using this within-image context indeed captures visual similarity across images and allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset.
A Fast Learning Algorithm for Deep Belief Nets
A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Data-dependent Initializations of Convolutional Neural Networks
This work presents a fast and simple data-dependent initialization procedure that sets the weights of a network such that all units in the network train at roughly the same rate, avoiding vanishing or exploding gradients.
Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles
A novel unsupervised learning approach builds features suitable for object detection and classification; to facilitate the transfer of features to other tasks, the context-free network (CFN), a siamese-ennead convolutional neural network, is introduced.