Corpus ID: 4747025

Graphical Generative Adversarial Networks

@inproceedings{Li2018GraphicalGA,
  title={Graphical Generative Adversarial Networks},
  author={Chongxuan Li and Max Welling and Jun Zhu and Bo Zhang},
  booktitle={NeurIPS},
  year={2018}
}
We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data. [...] We propose two alternative divergence minimization approaches to learn the generative model and the recognition model jointly. The first one treats all variables as a whole, while the second one utilizes the structural information by checking the individual local factors defined by the generative model, and works better in practice. Finally, we present two important instances of Graphical-GAN, i.e. …
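As a purely illustrative aside, the contrast between the two divergence-minimization schemes can be sketched on a toy chain model p(z)p(x|z): the first scheme scores a joint sample (z, x) as a whole, while the second checks each local factor of the generative model separately. Everything below (the linear-Gaussian factors, the trivial summary statistics standing in for learned critics) is our own hypothetical construction, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model with two local factors, p(z) and p(x|z):
# z ~ N(0, 1), then x = 2 z + small noise.
def sample_generative(n):
    z = rng.normal(size=n)
    x = 2.0 * z + 0.1 * rng.normal(size=n)
    return z, x

# Toy recognition model: x comes from data, z is inferred from x.
# It is deliberately mismatched with the generative factor p(x|z).
def sample_recognition(n):
    x = rng.normal(loc=1.0, size=n)
    z = 0.3 * x + 0.1 * rng.normal(size=n)
    return z, x

zg, xg = sample_generative(1000)
zr, xr = sample_recognition(1000)

# Scheme 1 ("all variables as a whole"): one statistic of the joint (z, x).
joint_gap = abs(np.mean(zg + xg) - np.mean(zr + xr))

# Scheme 2 ("check individual local factors"): the residual of the factor
# p(x|z) exposes the mismatch directly; it is near zero only for samples
# that actually satisfy the generative factor x = 2 z + noise.
gen_residual = abs(np.mean(xg - 2.0 * zg))
rec_residual = abs(np.mean(xr - 2.0 * zr))
print(joint_gap, gen_residual, rec_residual)
```

In this toy, the per-factor residual pinpoints which conditional is violated, which is the intuition behind the local scheme working better in practice.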
Citations

Latent Dirichlet Allocation in Generative Adversarial Networks
This work proposes Latent Dirichlet Allocation based Generative Adversarial Networks (LDAGAN), which have a high capacity for modeling complex image data, and optimizes the model by combining a variational expectation-maximization (EM) algorithm with adversarial learning.
Remote Sensing Image Synthesis via Graphical Generative Adversarial Networks
The graphical generative adversarial networks (Graphical-GAN) scheme paves a promising way for remote sensing dataset augmentation, which is an effective means of improving the accuracy of learning models.
Point Cloud GAN
A two-fold modification to the GAN algorithm for learning to generate point clouds (PC-GAN), which combines ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process, and defines a generic framework that can incorporate many existing GAN algorithms.
Controllable Generative Adversarial Network
ControlGAN, a controllable GAN structure, is proposed; it can generate intermediate features for interpolated input labels and opposite features for extrapolated input labels that are not used in the training process, which implies that ControlGAN can significantly contribute to the variety of generated samples.
Multi-objects Generation with Amortized Structural Regularization
This paper derives a lower bound of the regularized log-likelihood, which can be jointly and efficiently optimized with respect to the generative model and the recognition model, and shows that ASR significantly outperforms the DGM baselines in terms of inference accuracy and sample quality.
Improving Generative Moment Matching Networks with Distribution Partition
This paper presents a new strategy to train GMMN with a low sample complexity while retaining theoretical soundness: an amortized network called GMMN-DP with shared auxiliary-variable information for the data generation task, together with an efficient stochastic training algorithm.
Bayesian Adversarial Human Motion Synthesis
Rui Zhao, Hui Su, Q. Ji. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
By explicitly capturing the distribution of the data and the parameters, this model has a more compact parameterization than GAN-based generative models, and demonstrates the benefit of the fully probabilistic approach in a data restoration task.
Delving Into Classifying Hyperspectral Images via Graphical Adversarial Learning
Guangxing Wang, Peng Ren. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020.
A graphical adversarial learning (GAL) framework that explores the latent variable structure for generating diversified hyperspectral samples; experiments demonstrate that GAL is competitive with state-of-the-art methods that use global data for learning spectral reduction.
On the Generative Utility of Cyclic Conditionals
The CyGen framework for cyclic-conditional generative modeling is proposed, including methods to enforce compatibility and to use the determined distribution to fit and generate data; it is supported by experiments showing better generation and downstream classification performance.
MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering
A scalable variant of the Expectation-Maximization algorithm for MiCE is developed, with a proof of convergence, to solve the nontrivial inference and learning problems caused by the latent variables.

References

Showing 1-10 of 50 references.
Bayesian GAN
The Bayesian GAN avoids mode collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.
Adversarial Feature Learning
Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Adversarially Learned Inference
The adversarially learned inference (ALI) model is introduced, which jointly learns a generation network and an inference network using an adversarial process; the usefulness of the learned representations is confirmed by performance competitive with the state of the art on semi-supervised SVHN and CIFAR-10 tasks.
Variational Inference using Implicit Distributions
This paper provides a unifying review of existing algorithms, establishing connections between variational autoencoders, adversarially learned inference, operator VI, GAN-based image reconstruction, and more, and provides a framework for building new algorithms.
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence function for training complexity and the quality of the obtained generative models are discussed.
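For context (our addition, standard in the f-GAN literature rather than quoted from this page): an f-divergence and the variational lower bound that enables adversarial training can be written as

```latex
D_f(P \,\|\, Q)
  = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx
  \;\ge\; \sup_{T} \;
    \mathbb{E}_{x \sim P}\!\left[T(x)\right]
    - \mathbb{E}_{x \sim Q}\!\left[f^{*}\!\left(T(x)\right)\right],
```

where $f$ is convex with $f(1) = 0$, $f^{*}$ is its convex conjugate, and the variational function $T$ is realized by the discriminator network; the GAN objective corresponds to one particular choice of $f$.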
Adversarial Message Passing For Graphical Models
This work treats GANs as a basis for likelihood-free inference in generative models and generalizes them to Bayesian posterior inference over factor graphs, finding that Bayesian inference on structured models can be performed with only sampling and discrimination when using nonparametric variational families, without access to explicit distributions.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
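The two-player training loop described above can be sketched in a few lines. The following toy (our own illustration, not the paper's code) fits a one-parameter generator x = z + theta to 1-D Gaussian data against a logistic discriminator, with the gradients of both objectives written out by hand:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

REAL_MEAN = 4.0      # real data: x ~ N(4, 1)
theta = 0.0          # generator parameter: fake x = z + theta, z ~ N(0, 1)
w, b = 0.0, 0.0      # logistic discriminator D(x) = sigmoid(w * x + b)
lr, batch = 0.05, 64

for _ in range(3000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    s_r, s_f = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1.0 - s_r) * real - s_f * fake)
    b += lr * np.mean((1.0 - s_r) - s_f)

    # Generator step: ascent on the non-saturating objective log D(fake);
    # d/d(theta) log D(fake) = (1 - D(fake)) * w since d(fake)/d(theta) = 1.
    s_f = sigmoid(w * fake + b)
    theta += lr * np.mean(1.0 - s_f) * w

print(round(theta, 2))  # drifts toward REAL_MEAN as the two players equilibrate
```

At equilibrium the discriminator can no longer separate the two distributions, so its slope w decays and the generator parameter settles near the real mean.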
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Adversarial Variational Bayes (AVB) is a technique for training Variational Autoencoders with arbitrarily expressive inference models; by introducing an auxiliary discriminative network, it rephrases the maximum-likelihood problem as a two-player game, establishing a principled connection between VAEs and Generative Adversarial Networks (GANs).
Learning in Implicit Generative Models
This work develops likelihood-free inference methods and highlights hypothesis testing as a principle for learning in implicit generative models, from which the objective function used by GANs, and many other related objectives, can be derived.
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.