Corpus ID: 1879195

Concept Formation and Dynamics of Repeated Inference in Deep Generative Models

@article{Nagano2017ConceptFA,
  title={Concept Formation and Dynamics of Repeated Inference in Deep Generative Models},
  author={Yoshihiro Nagano and Ryo Karakida and Masato Okada},
  journal={ArXiv},
  year={2017},
  volume={abs/1712.04195}
}
Deep generative models are reported to be useful in broad applications including image generation. Repeated inference between data space and latent space in these models can denoise cluttered images and improve the quality of inferred results. However, previous studies only qualitatively evaluated image outputs in data space, and the mechanism behind the inference has not been investigated. The purpose of the current study is to numerically analyze changes in activity patterns of neurons in the…
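The procedure under study is easy to state concretely: map an image into the latent space with the recognition network, decode it back to data space, and repeat while recording the latent activity. The sketch below illustrates this repeated inference loop; `encode` and `decode` are hypothetical stand-ins for a trained model, not the authors' code.

```python
# Minimal sketch of repeated inference between data space and latent space.
# `encode` and `decode` are hypothetical callables standing in for a trained
# deep generative model (e.g. a VAE's posterior mean and decoder mean).
import numpy as np

def repeated_inference(x0, encode, decode, n_steps=20):
    """Iterate x -> z -> x and record both trajectories."""
    xs, zs = [x0], []
    x = x0
    for _ in range(n_steps):
        z = encode(x)      # data space -> latent space
        x = decode(z)      # latent space -> data space
        zs.append(z)
        xs.append(x)
    return np.stack(xs), np.stack(zs)

# Toy usage with linear maps in place of trained networks.
rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(2, 10))
xs, zs = repeated_inference(rng.normal(size=10),
                            encode=lambda x: W @ x,
                            decode=lambda z: W.T @ z)
```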

References

Showing 1–10 of 33 references
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence…
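As background, beta-VAE weights the KL term of the VAE objective by a coefficient β > 1 to encourage a factorised latent code. A hedged sketch of that loss under the usual diagonal-Gaussian assumptions (illustrative, not the paper's code):

```python
# Illustrative beta-VAE objective: reconstruction term plus a beta-weighted KL
# divergence between the diagonal-Gaussian posterior and a standard normal prior.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl  # beta > 1 pressures the code toward independent factors
```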
VAE with a VampPrior
This paper proposes to extend the variational auto-encoder (VAE) framework with a new type of prior called the "Variational Mixture of Posteriors" prior, or VampPrior for short, which consists of a mixture distribution with components given by variational posteriors conditioned on learnable pseudo-inputs.
Improving Sampling from Generative Autoencoders with Markov Chains
This work formulates a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively decoding and encoding, which allows sampling from the learned latent distribution.
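Unlike the deterministic iteration sketched above, this chain samples at both steps: draw x from the decoder given z, then draw a new z from the encoder given x. A hedged outline with hypothetical `decoder_sample` / `encoder_sample` callables:

```python
# Sketch of sampling from a generative autoencoder with a Markov chain that
# alternates stochastic decoding and encoding. Both callables are hypothetical
# wrappers around a trained model, not this paper's API.
import numpy as np

def latent_markov_chain(decoder_sample, encoder_sample, z0, n_steps=50):
    z, draws = z0, []
    for _ in range(n_steps):
        x = decoder_sample(z)   # x ~ p(x | z)
        z = encoder_sample(x)   # z ~ q(z | x)
        draws.append(z)
    return np.stack(draws)      # draws approach the learned latent distribution
```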
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
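A toy sketch of that two-player setup on synthetic 2D data, just to make the alternation concrete (illustrative architecture and data, not the original model):

```python
# Toy adversarial training loop: D learns to separate real from generated samples,
# while G learns to produce samples that D labels as real.
import torch
from torch import nn, optim

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = optim.Adam(G.parameters(), lr=1e-3)
opt_d = optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 2) + torch.tensor([4.0, 0.0])   # toy "data" distribution
    fake = G(torch.randn(64, 8))

    # Discriminator step: push real samples toward label 1, generated toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: make D assign label 1 to generated samples.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```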
An Uncertain Future: Forecasting from Static Images Using Variational Autoencoders
A conditional variational autoencoder is proposed for predicting the dense trajectory of pixels in a scene: what will move in the scene, where it will travel, and how it will deform over the course of one second.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
The convolutional deep belief network is presented: a hierarchical generative model that scales to realistic image sizes, is translation-invariant, and supports efficient bottom-up and top-down probabilistic inference.
Auto-Encoding Variational Bayes
This paper introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
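The key device that makes the estimator differentiable is the reparameterization of the posterior sample; a minimal sketch (the function name is mine, not the paper's):

```python
# Reparameterization trick: write z = mu + sigma * eps with eps ~ N(0, I), so the
# sample is a deterministic, differentiable function of the posterior parameters
# and gradients flow into mu and log_var.
import torch

def reparameterize(mu, log_var):
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
```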
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing policy gradient updates.
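The update is essentially REINFORCE with the discriminator's score as the reward; a hedged sketch, where `generator.sample` and `discriminator` are hypothetical interfaces rather than SeqGAN's actual code:

```python
# REINFORCE-style generator update: sampled token sequences are scored by the
# discriminator, and the generator's log-probabilities are weighted by that reward.
import torch

def policy_gradient_step(generator, discriminator, optimizer, batch_size=32, seq_len=20):
    tokens, log_probs = generator.sample(batch_size, seq_len)  # sequences + per-token log p
    reward = discriminator(tokens).detach()                    # one scalar reward per sequence
    loss = -(reward * log_probs.sum(dim=1)).mean()             # maximize expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```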
Generating Videos with Scene Dynamics
A generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from its background is proposed; it can generate tiny videos up to a second long at full frame rate better than simple baselines.