Sampling Generative Networks

@article{White2016SamplingGN,
  title={Sampling Generative Networks},
  author={Tom White},
  journal={arXiv: Neural and Evolutionary Computing},
  year={2016}
}
  • Tom White
  • Published 14 September 2016
  • Computer Science, Mathematics
  • arXiv: Neural and Evolutionary Computing
We introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model's prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors… 
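The core technique in the abstract, replacing linear interpolation with spherical linear interpolation (slerp) so that interpolants stay consistent with the model's prior, can be sketched in a few lines. The snippet below is a minimal NumPy illustration of the standard slerp formula, assuming latent vectors drawn from a Gaussian prior; it is an illustration of the general technique, not the paper's reference implementation, and the fallback for nearly parallel vectors is our own choice.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Minimal sketch of the standard slerp formula; details such as the
    handling of nearly parallel vectors may differ from the paper's code.
    """
    z0, z1 = np.asarray(z0, dtype=float), np.asarray(z1, dtype=float)
    # Angle between the two latent vectors.
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:
        # Vectors are (anti)parallel; fall back to linear interpolation.
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

# Example: a 9-point interpolation path between two latent samples, each of
# which would then be passed through the model's decoder/generator.
z_a, z_b = np.random.randn(100), np.random.randn(100)
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 9)]
```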
Generalized Latent Variable Recovery for Generative Adversarial Networks
TLDR
This work extends latent variable recovery techniques for the generator of a Generative Adversarial Network to latent spaces with a Gaussian prior, and demonstrates the technique's effectiveness.
Optimal transport maps for distribution preserving operations on latent spaces of Generative Models
TLDR
This paper proposes to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation.
Non-Parametric Priors For Generative Adversarial Networks
TLDR
It is demonstrated that the designed prior helps improve image generation along any Euclidean straight line during interpolation, both qualitatively and quantitatively, without any additional training or architectural modifications.
Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions
TLDR
A neural network architecture based upon the Autoencoder and the Generative Adversarial Network is presented that promotes a convex latent distribution by training adversarially on latent space interpolations, so that interpolated points preserve a realistic resemblance to the network inputs.
Optimal Transport Maps for Distribution Preserving Operations on Latent Spaces of Generative Models
Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian.
Mixture Density Generative Adversarial Networks
TLDR
The ability to avoid mode collapse and discover all the modes, as well as the superior quality of the generated images (as measured by the Fréchet Inception Distance, FID), are demonstrated, achieving the lowest FID compared to all baselines.
NeurInt : Learning to Interpolate through Neural ODEs
TLDR
This work proposes a novel generative model that learns a flexible non-parametric prior over interpolation trajectories, conditioned on a pair of source and target images, using Latent Second-Order Neural Ordinary Differential Equations.
On Latent Distributions Without Finite Mean in Generative Models
TLDR
This work revolves around the phenomenon that arises while decoding linear interpolations between two random latent vectors: regions of latent space in close proximity to the origin are sampled, causing a distribution mismatch, and it is shown that, due to the Central Limit Theorem, this region is almost never sampled during the training process.
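The origin-proximity effect summarized in the TLDR above can be checked numerically. The short sketch below is our own illustration, not code from the cited paper: for high-dimensional Gaussian latents, the norm of each endpoint concentrates around sqrt(d), while the midpoint of a linear interpolation is distributed as N(0, I/2) and has norm near sqrt(d/2), a region the prior almost never produces.

```python
import numpy as np

# Illustration: endpoints vs. midpoint of a linear interpolation between
# two independent N(0, I) latent vectors in d = 512 dimensions.
d = 512
rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(d), rng.standard_normal(d)
mid = 0.5 * (z0 + z1)

print(np.linalg.norm(z0), np.linalg.norm(z1))  # each roughly sqrt(512) ~ 22.6
print(np.linalg.norm(mid))                     # roughly sqrt(256) ~ 16.0
```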
Generating In-Between Images Through Learned Latent Space Representation Using Variational Autoencoders
TLDR
It is demonstrated that the proposed method for image interpolation based on latent representations outperforms both pixel-based methods and a conventional variational autoencoder, with particular improvements on non-successive images.
Evolutionary Latent Space Exploration of Generative Adversarial Networks
TLDR
This paper focuses on generating sets of diverse examples by searching the latent space using Genetic Algorithms and MAP-Elites, and compares the implemented approaches with the traditional approach.

References

Showing 1–10 of 16 references
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
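For reference, the adversarial process summarized above is formalized in the cited paper as a two-player minimax game over a value function, written here in standard notation:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```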
Adversarially Learned Inference
TLDR
The adversarially learned inference (ALI) model is introduced, which jointly learns a generation network and an inference network using an adversarial process and the usefulness of the learned representations is confirmed by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.
Discriminative Regularization for Generative Models
TLDR
It is shown that enhancing the objective function of the variational autoencoder, a popular generative model, with a discriminative regularization term leads to samples that are clearer and have higher visual quality than the samples from the standard variational autoencoder.
Autoencoding beyond pixels using a learned similarity metric
TLDR
An autoencoder that leverages learned representations to better measure similarities in data space is presented and it is shown that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
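The "simple arithmetic" on high-level visual features mentioned in this TLDR is the same kind of latent vector arithmetic that underlies the attribute vectors of the main paper. The sketch below uses the common mean-difference construction and is only an illustration: the `encode`/`decode` functions are hypothetical stand-ins for any trained encoder/decoder pair, not APIs from the cited work.

```python
import numpy as np

def attribute_vector(z_with, z_without):
    """Mean latent code of samples with an attribute minus the mean without.

    `z_with` and `z_without` are (n, d) and (m, d) arrays of encoded samples.
    This mean-difference construction is a common way to derive attribute
    vectors; the cited paper's exact procedure may differ.
    """
    return np.mean(z_with, axis=0) - np.mean(z_without, axis=0)

# Hypothetical usage with a trained encoder/decoder pair:
#   z_glasses    = encode(images_with_glasses)        # shape (n, d)
#   z_no_glasses = encode(images_without_glasses)     # shape (m, d)
#   v_glasses    = attribute_vector(z_glasses, z_no_glasses)
#   edited       = decode(encode(image) + v_glasses)  # add glasses to an image
```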
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
PANDA: Pose Aligned Networks for Deep Attribute Modeling
TLDR
A new method which combines part-based models and deep learning by training pose-normalized CNNs for inferring human attributes from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion is proposed.
Deep Visual Analogy-Making
TLDR
A novel deep network trained end-to-end to perform visual analogy making, which is the task of transforming a query image according to an example pair of related images, is developed.
Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds.
Learning SURF Cascade for Fast and Accurate Object Detection
  • Jianguo Li, Yimin Zhang
  • Computer Science
    2013 IEEE Conference on Computer Vision and Pattern Recognition
  • 2013
TLDR
A novel learning framework, derived from the well-known Viola-Jones (VJ) framework, for training a boosting-cascade-based object detector from large-scale datasets; it can train object detectors from billions of negative samples within one hour, even on personal computers.