Discriminator Feature-Based Inference by Recycling the Discriminator of GANs

@article{Bang2020DiscriminatorFI,
  title={Discriminator Feature-Based Inference by Recycling the Discriminator of GANs},
  author={Duhyeon Bang and Seoungyoon Kang and Hyunjung Shim},
  journal={International Journal of Computer Vision},
  year={2020},
  pages={1--23}
}
Generative adversarial networks (GANs) successfully generate high-quality data by learning a mapping from a latent vector to the data. Various studies assert that the latent space of a GAN is semantically meaningful and can be utilized for advanced data analysis and manipulation. To analyze real data in the latent space of a GAN, it is necessary to build an inference mapping from the data to the latent vector. This paper proposes an effective algorithm to accurately infer the latent vector…
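As a toy illustration of what such an inference mapping does (not the paper's method), consider a hypothetical linear "generator": recovering the latent vector from a data point then reduces to a least-squares inversion. All names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))        # hypothetical linear "generator": x = W @ z
z_true = rng.normal(size=3)        # latent vector to be recovered
x = W @ z_true                     # "generated" data point

# Inference mapping: map data x back to a latent estimate via the pseudoinverse
z_hat = np.linalg.pinv(W) @ x
print(np.allclose(z_hat, z_true))  # True
```

A real GAN generator is nonlinear, so in practice the inverse mapping is approximated with a trained inference network or per-sample optimization rather than a closed-form solve.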
1 Citation
An adversarial algorithm for variational inference with a new role for acetylcholine
This work constructs a variational inference (VI) system that is compatible with neurobiology and avoids the assumption that neural activities are independent given lower layers during generation; the implemented algorithm successfully trains the approximate inference network for generative models.

References

Showing 1-10 of 63 references
Adversarial Feature Learning
Bidirectional Generative Adversarial Networks (BiGANs) are proposed as a means of learning the inverse mapping of GANs, and the resulting learned feature representation is demonstrated to be useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Improved Training of Generative Adversarial Networks Using Representative Features
This paper improves the stability of GAN training by implicitly regularizing the discriminator with representative features, exploiting the fact that the standard GAN minimizes the reverse Kullback-Leibler divergence.
Self-Attention Generative Adversarial Networks
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
Adversarially Learned Inference
The adversarially learned inference (ALI) model is introduced, which jointly learns a generation network and an inference network using an adversarial process; the usefulness of the learned representations is confirmed by performance competitive with the state of the art on semi-supervised SVHN and CIFAR-10 tasks.
Adversarial Autoencoders
This paper shows how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction, and data visualization, with experiments on the MNIST, Street View House Numbers, and Toronto Face datasets.
Improved Training of Wasserstein GANs
This work proposes an alternative to weight clipping: penalizing the norm of the gradient of the critic with respect to its input, which performs better than the standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
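The gradient penalty summarized above can be sketched as follows. This is a minimal PyTorch version, assuming a critic that takes flat feature vectors; the function and variable names are illustrative, not the authors' code.

```python
import torch

def gradient_penalty(critic, real, fake):
    # Sample random interpolates between real and fake points (per WGAN-GP)
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    # Penalize deviation of the critic's input-gradient norm from 1
    grads = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Sanity check: for a linear critic f(x) = x.sum(dim=1), the input gradient is
# all-ones, so with 4 features its norm is 2 and the penalty is (2 - 1)^2 = 1.
critic = lambda x: x.sum(dim=1)
gp = gradient_penalty(critic, torch.zeros(3, 4), torch.ones(3, 4))
print(round(gp.item(), 6))  # 1.0
```

In full WGAN-GP training this term is added to the critic loss with a weight (commonly 10) alongside the Wasserstein objective.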
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
VEEGAN is introduced, featuring a reconstructor network that reverses the action of the generator by mapping from data to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.
Improving Generative Adversarial Networks with Denoising Feature Matching
We propose an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract…
Autoencoding beyond pixels using a learned similarity metric
An autoencoder that leverages learned representations to better measure similarities in data space is presented, and the method is shown to learn an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
Are GANs Created Equal? A Large-Scale Study
A neutral, multi-faceted, large-scale empirical study of state-of-the-art models and evaluation measures finds that most models can reach similar scores with enough hyperparameter optimization and random restarts, suggesting that improvements can arise from a higher computational budget and tuning more than from fundamental algorithmic changes.