Corpus ID: 15876696

Energy-based Generative Adversarial Network

@article{Zhao2016EnergybasedGA,
  title={Energy-based Generative Adversarial Network},
  author={Junbo Jake Zhao and Micha{\"e}l Mathieu and Yann LeCun},
  journal={ArXiv},
  year={2016},
  volume={abs/1609.03126}
}
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN), which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Viewing the discriminator as an energy function makes it possible to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of the EBGAN framework as using an auto-encoder…
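The energy view described in the abstract can be sketched in a few lines. The following is a minimal NumPy sketch, not the paper's implementation: the discriminator is an auto-encoder whose per-sample reconstruction error is the energy, and the discriminator/generator objectives are the paper's hinge-style losses with `margin` playing the role of the positive margin m.

```python
import numpy as np

def energy(x, x_recon):
    # EBGAN energy: per-sample reconstruction error of the
    # auto-encoder discriminator, ||Dec(Enc(x)) - x||.
    return np.linalg.norm((x_recon - x).reshape(len(x), -1), axis=1)

def d_loss(e_real, e_fake, margin):
    # Discriminator objective: push energy down on real samples and
    # up (to at least `margin`) on generated samples:
    #   L_D = E(x) + max(0, margin - E(G(z))).
    return np.mean(e_real) + np.mean(np.maximum(0.0, margin - e_fake))

def g_loss(e_fake):
    # Generator objective: produce samples the discriminator
    # assigns low energy to:  L_G = E(G(z)).
    return np.mean(e_fake)
```

With a trained auto-encoder plugged in for the identity reconstructions assumed here, these three functions are the whole EBGAN training signal.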
Adversarial Fisher Vectors for Unsupervised Representation Learning
TLDR
This work examines Generative Adversarial Networks through the lens of deep Energy Based Models (EBMs), and proposes to evaluate both the generator and the discriminator by deriving corresponding Fisher Score and Fisher Information from the EBM.
Self-Attention Generative Adversarial Networks
TLDR
The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
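The self-attention module SAGAN inserts into both generator and discriminator can be illustrated with a small sketch. Names and shapes here are illustrative assumptions, not the paper's code: each of the N flattened spatial positions attends to all others, and a learned scalar `gamma` (initialized to zero in the paper) scales the attention output before the residual addition.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_f, w_g, w_h, gamma):
    # x: (N, C) flattened feature map with N spatial positions.
    # w_f, w_g, w_h: (C, C) query/key/value projections (assumed square
    # here for simplicity so the residual addition type-checks).
    f, g, h = x @ w_f, x @ w_g, x @ w_h
    attn = softmax(f @ g.T, axis=-1)   # (N, N): each row attends over all positions
    o = attn @ h                       # aggregate values from every position
    return gamma * o + x               # residual; gamma starts at 0, so the
                                       # module is initially an identity map
```

Because `gamma` starts at zero, the network initially behaves like a plain convolutional GAN and learns to rely on long-range attention gradually.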
On Depth and Complexity of Generative Adversarial Networks
TLDR
While training tends to oscillate and does not benefit from the additional capacity of naively stacked layers, GANs are capable of generating samples of higher quality, and for images specifically of higher visual fidelity, given proper regularization and careful balancing.
xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems
TLDR
This work proposes a new class of GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators, and argues that xAI-GAN enables users greater control over how models learn than standard GANs.
Mixture Density Generative Adversarial Networks
TLDR
The model's ability to avoid mode collapse and discover all the modes, as well as the superior quality of the generated images (as measured by the Fréchet Inception Distance), are demonstrated, achieving the lowest FID among all baselines.
Unregularized Auto-Encoder with Generative Adversarial Networks for Image Generation
TLDR
A new Auto-Encoder Generative Adversarial Network (AEGAN) is proposed, which takes advantage of both VAE and GAN and maps the random vector into the encoded latent space by adversarial training based on GAN.
Depth and Complexity of Deep Generative Adversarial Networks
TLDR
It is shown that GANs are capable of generating images of higher visual fidelity with proper regularization and simple techniques such as minibatch discrimination, and that an architecture similar to the standard GAN but with residual blocks in the hidden layers consistently achieves higher Inception scores than the standard model, without noticeable mode collapse.
Linear Discriminant Generative Adversarial Networks
TLDR
A novel method for training of GANs for unsupervised and class conditional generation of images, called Linear Discriminant GAN (LD-GAN), which provides a concrete metric of separation capacity for the discriminator.
IAN: Combining Generative Adversarial Networks for Imaginative Face Generation
TLDR
This work proposes a new regularizer for GAN based on K-nearest neighbor (K-NN) selective feature matching to a target set Y in high-level feature space, during the adversarial training of GAN on the base set X, and introduces a cascade of such GANs as the Imaginative Adversarial Network (IAN).

References

Showing 1–10 of 36 references
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Autoencoding beyond pixels using a learned similarity metric
TLDR
An autoencoder that leverages learned representations to better measure similarities in data space is presented and it is shown that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G…
Deep Directed Generative Models with Energy-Based Probability Estimation
TLDR
Inspired by generative adversarial networks, this work proposes to train a deep directed generative model (not a Markov chain) so that its sampling distribution approximately matches the energy function that is being trained.
Generating images with recurrent adversarial networks
TLDR
This work proposes a recurrent generative model that can be trained using adversarial training to generate very good image samples, and proposes a way to quantitatively compare adversarial networks by having the generators and discriminators of these networks compete against each other.
Stacked What-Where Auto-encoders
We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised…
Deep multi-scale video prediction beyond mean square error
TLDR
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
TLDR
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.