# Adversarial Feature Learning

```bibtex
@article{Donahue2017AdversarialFL,
  title   = {Adversarial Feature Learning},
  author  = {Jeff Donahue and Philipp Kr{\"a}henb{\"u}hl and Trevor Darrell},
  journal = {ArXiv},
  year    = {2017},
  volume  = {abs/1605.09782}
}
```

The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems…
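The idea sketched in the abstract, training an encoder that maps data back to the latent space alongside the generator, with a discriminator judging joint (data, latent) pairs, can be illustrated in a few lines. The linear maps, dimensions, and single discriminator evaluation below are hypothetical stand-ins for the paper's deep networks, chosen only to make the adversarial objective concrete; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the paper.
latent_dim, data_dim, batch = 8, 32, 16

# Linear maps stand in for the deep networks: a generator G: z -> x,
# an encoder E: x -> z, and a discriminator D scoring joint (x, z) pairs.
G = rng.normal(scale=0.1, size=(latent_dim, data_dim))
E = rng.normal(scale=0.1, size=(data_dim, latent_dim))
D = rng.normal(scale=0.1, size=(data_dim + latent_dim, 1))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def discriminate(x, z):
    """Score a joint pair; D is trained to output ~1 for (x, E(x))
    pairs drawn from the data and ~0 for (G(z), z) pairs."""
    return sigmoid(np.concatenate([x, z], axis=1) @ D)

# "Real" branch: data x paired with its inferred code E(x).
x_real = rng.normal(size=(batch, data_dim))
z_inferred = x_real @ E

# "Fake" branch: a sampled code z paired with its generation G(z).
z_sampled = rng.normal(size=(batch, latent_dim))
x_generated = z_sampled @ G

scores_real = discriminate(x_real, z_inferred)
scores_fake = discriminate(x_generated, z_sampled)

# The adversarial game trains D to separate the two joint distributions
# while G and E are trained to fool it; at the optimum E inverts G,
# so E(x) can serve as a feature representation for auxiliary tasks.
d_loss = -np.mean(np.log(scores_real + 1e-8)
                  + np.log(1.0 - scores_fake + 1e-8))
```

In a full training loop the loss above would be minimized over D's parameters and maximized over G and E; the sketch only evaluates one forward pass to show where each network enters the objective.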

#### 1,179 Citations

IVE-GAN: Invariant Encoding Generative Adversarial Networks

- Mathematics, Computer Science
- ArXiv
- 2017

Invariant Encoding Generative Adversarial Networks (IVE-GANs) are proposed: a novel GAN framework that introduces an inverse mapping from individual data samples to the latent space by utilizing features of the data that are invariant to certain transformations.

Generative Adversarial Networks: recent developments

- Computer Science, Mathematics
- ICAISC
- 2019

An overview of recent developments in GANs, a subclass of generative models able to learn representations in an unsupervised and semi-supervised fashion, is presented with a focus on learning latent-space representations.

Pseudo Conditional Regularization for Inverse Mapping of GANs

- Computer Science
- IEEE Access
- 2020

A novel adversarial learning method, Pseudo Conditional Bidirectional GAN (PC-BiGAN), is proposed, specifically guided by the pseudo conditions defined by the proximity relationship among data in unsupervised learned feature space, for training the inverse mapping of GANs with a high degree of consistency and similarity-awareness.

Inverting the Generator of a Generative Adversarial Network

- Computer Science, Medicine
- IEEE Transactions on Neural Networks and Learning Systems
- 2019

This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.

Adversarially Learned Inference

- Computer Science, Mathematics
- ICLR
- 2017

The adversarially learned inference (ALI) model is introduced, which jointly learns a generation network and an inference network using an adversarial process and the usefulness of the learned representations is confirmed by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.

High Quality Bidirectional Generative Adversarial Networks

- Computer Science
- ArXiv
- 2018

A new inference model is proposed that estimates the latent vector from features of the GAN discriminator; it can generate high-quality samples identical to those of unidirectional GANs and also reconstruct the original data faithfully.

Conditional Autoencoders with Adversarial Information Factorization

- Computer Science, Mathematics
- ArXiv
- 2017

It is shown that factorizing the latent space to separate the information needed for reconstruction (a continuous space) from the information needed for image attribute classification (a discrete space) enables editing specific attributes of an image.

Learning Inverse Mappings with Adversarial Criterion

- Computer Science, Mathematics
- ArXiv
- 2018

We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G that maps an arbitrary latent code distribution to a data distribution and an encoder E that…

Sub-GAN: An Unsupervised Generative Model via Subspaces

- Computer Science
- ECCV
- 2018

A subspace-based generative adversarial network (Sub-GAN) is presented which simultaneously disentangles multiple latent subspaces and generates diverse samples accordingly; it can discover meaningful visual attributes that are hard to annotate via strong supervision, e.g., the writing style of digits.

Inferencing based on unsupervised learning of disentangled representations

- Computer Science
- ESANN
- 2018

This work proposes a framework that combines an encoder and a generator to learn disentangled representations which encode meaningful information about the data distribution without the need for any labels.

#### References

Showing 1–10 of 38 references

Adversarially Learned Inference

- Computer Science, Mathematics
- ICLR
- 2017

The adversarially learned inference (ALI) model is introduced, which jointly learns a generation network and an inference network using an adversarial process and the usefulness of the learned representations is confirmed by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.

Generative Adversarial Nets

- Computer Science
- NIPS
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

- Computer Science, Mathematics
- ICLR
- 2016

This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

- Computer Science
- NIPS
- 2015

A generative parametric model is presented that is capable of producing high-quality samples of natural images, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.

Context Encoders: Feature Learning by Inpainting

- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016

It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

- Computer Science
- ICML
- 2014

DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, is released to enable vision researchers to conduct experimentation with deep representations across a range of visual concept learning paradigms.

Unsupervised Visual Representation Learning by Context Prediction

- Computer Science
- 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015

It is demonstrated that the feature representation learned using this within-image context indeed captures visual similarity across images and allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset.

A Fast Learning Algorithm for Deep Belief Nets

- Computer Science, Medicine
- Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

Data-dependent Initializations of Convolutional Neural Networks

- Computer Science
- ICLR
- 2016

This work presents a fast and simple data-dependent initialization procedure that sets the weights of a network such that all units in the network train at roughly the same rate, avoiding vanishing or exploding gradients.

Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles

- Computer Science
- ECCV
- 2016

A novel unsupervised learning approach builds features suitable for object detection and classification; to facilitate the transfer of features to other tasks, the context-free network (CFN), a siamese-ennead convolutional neural network, is introduced.