Corpus ID: 53617955

GDPP: Learning Diverse Generations Using Determinantal Point Process

@article{Elfeki2019GDPPLD,
  title={GDPP: Learning Diverse Generations Using Determinantal Point Process},
  author={Mohamed Elfeki and Camille Couprie and Morgane Rivi{\`e}re and Mohamed Elhoseiny},
  journal={ArXiv},
  year={2019},
  volume={abs/1812.00068}
}
Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic-looking images. An essential characteristic of generative models is their ability to produce multi-modal outputs. However, while training, they are often susceptible to mode collapse, that is, models are limited to mapping input noise to only a few modes of the true data distribution. In this work, we draw inspiration from Determinantal Point Process (DPP) to propose an unsupervised penalty loss that alleviates mode collapse while producing higher-quality samples…
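As a rough illustration of the idea, the sketch below builds a Gram-matrix DPP kernel over a batch of real and of generated feature vectors, then penalizes mismatch between their eigenvalues and eigenvectors. It loosely follows the structure the abstract describes; the function names, weighting, and toy data are assumptions for illustration, not the authors' released implementation.

```python
import numpy as np

def dpp_kernel(feats):
    """Gram-matrix DPP kernel L = F F^T over a batch of feature vectors."""
    return feats @ feats.T

def gdpp_style_penalty(real_feats, fake_feats):
    """Hypothetical diversity penalty: match the eigen-structure of the
    real-batch kernel and the fake-batch kernel."""
    lr, vr = np.linalg.eigh(dpp_kernel(real_feats))
    lf, vf = np.linalg.eigh(dpp_kernel(fake_feats))
    # Eigenvalue term: match the magnitude of diversity in the two batches.
    eig_term = np.sum((lr - lf) ** 2)
    # Eigenvector term: cosine dissimilarity between matching eigenvectors,
    # weighted by min-max normalized real eigenvalues.
    w = (lr - lr.min()) / (lr.max() - lr.min() + 1e-12)
    cos = np.abs(np.sum(vr * vf, axis=0))
    vec_term = np.sum(w * (1.0 - cos))
    return eig_term + vec_term

rng = np.random.default_rng(0)
real = rng.normal(size=(8, 16))                   # diverse batch of 8 features
fake = np.tile(rng.normal(size=(1, 16)), (8, 1))  # collapsed batch: one mode
print(gdpp_style_penalty(real, fake))             # large penalty for collapse
```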
Citations

VARGAN: Variance Enforcing Network Enhanced GAN
TLDR
Introduces variance enforcing GAN (VARGAN), a new GAN architecture that incorporates a third network to introduce diversity in the generated samples, making VARGAN a promising model for alleviating mode collapse.
Adaptive Density Estimation for Generative Models
TLDR
This work shows that its model significantly improves over existing hybrid models, offering GAN-like samples, IS and FID scores competitive with fully adversarial models, and improved likelihood scores.
Deep Learning of Determinantal Point Processes via Proper Spectral Sub-gradient
TLDR
A simple but effective algorithm to optimize the DPP term directly, expressed with the L-ensemble in the spectral domain over the Gram matrix; this is more flexible than learning on parametric kernels and can easily be incorporated into multiple deep learning tasks.
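For orientation, the generic L-ensemble quantity such methods optimize can be written spectrally: log P(Y) = log det(L_Y) - log det(L + I), with both determinants computed from eigenvalues, through which (sub-)gradients can flow. The sketch below is standard DPP math over a Gram-matrix kernel, not the paper's sub-gradient algorithm; the names and toy kernel are illustrative.

```python
import numpy as np

def dpp_log_likelihood(L, subset):
    """log P(Y) = log det(L_Y) - log det(L + I), evaluated from eigenvalues."""
    L_Y = L[np.ix_(subset, subset)]
    ev_subset = np.linalg.eigvalsh(L_Y)    # spectrum of the subset kernel
    ev_full = np.linalg.eigvalsh(L)        # spectrum of the full kernel
    return np.sum(np.log(ev_subset)) - np.sum(np.log(ev_full + 1.0))

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 4))
L = feats @ feats.T + 1e-6 * np.eye(6)     # PSD Gram-matrix kernel over 6 items
print(dpp_log_likelihood(L, [0, 2, 5]))
```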
"Best-of-Many-Samples" Distribution Matching
TLDR
This work proposes a novel objective with a "Best-of-Many-Samples" reconstruction cost and a stable direct estimate of the synthetic likelihood, enabling the hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time; it shows significant improvement over both hybrid VAE-GANs and plain GANs in mode coverage and quality.
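The reconstruction cost itself is easy to state: draw T latent samples per data point and keep only the best reconstruction, so the model is not punished for diverse samples that happen to miss this particular x. A minimal sketch with toy stand-ins (decode, sample_z, and the squared-error cost are assumptions, not the paper's models):

```python
import numpy as np

def best_of_many_recon(x, decode, sample_z, T=10):
    """Best-of-many reconstruction cost: the minimum over T latent samples."""
    costs = [np.sum((x - decode(sample_z())) ** 2) for _ in range(T)]
    return min(costs)

rng = np.random.default_rng(2)
x = rng.normal(size=4)                    # one toy data point
decode = lambda z: z[:4]                  # toy stand-in for a decoder network
sample_z = lambda: rng.normal(size=8)     # toy latent sampler
print(best_of_many_recon(x, decode, sample_z))
```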
DEEP LEARNING OF DETERMINANTAL POINT PROCESSES VIA PROPER SPECTRAL SUB-GRADIENT
Determinantal point processes (DPPs) are an effective tool to deliver diversity in multiple machine learning and computer vision tasks. Under the deep learning framework, DPP is typically optimized…
NONSYMMETRIC DETERMINANTAL POINT PROCESSES
Determinantal point processes (DPPs) have attracted significant attention in machine learning for their ability to model subsets drawn from a large item collection. Recent work shows that…
The Bures Metric for Taming Mode Collapse in Generative Adversarial Networks
TLDR
This work uses the last layer of the discriminator as a feature map to study the distributions of the real and the fake data, and proposes to match the real batch diversity to the fake batch diversity by using the Bures distance between covariance matrices in feature space.
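The Bures distance between positive semi-definite covariance matrices A and B is the standard quantity B(A, B)^2 = tr(A) + tr(B) - 2 tr((A^(1/2) B A^(1/2))^(1/2)). A small sketch of using it to compare batch diversities (illustrative only; the feature batches below are toy assumptions, not the paper's setup):

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_squared(A, B):
    """Squared Bures distance between PSD covariance matrices A and B."""
    s = sqrtm(A)
    cross = sqrtm(s @ B @ s)
    return np.trace(A) + np.trace(B) - 2.0 * np.real(np.trace(cross))

rng = np.random.default_rng(3)
real_feats = rng.normal(size=(100, 5))          # diverse real batch features
fake_feats = 0.1 * rng.normal(size=(100, 5))    # low-diversity fake batch
A = np.cov(real_feats, rowvar=False)
B = np.cov(fake_feats, rowvar=False)
print(bures_squared(A, B))   # grows as the batch diversities diverge
```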
DIVERSE TRAJECTORY FORECASTING WITH DETERMINANTAL POINT PROCESSES
The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g.,…
Diverse Sample Generation: Pushing the Limit of Data-free Quantization
Haotong Qin, Yifu Ding, +4 authors Jiwen Lu · Computer Science · ArXiv · 2021
TLDR
A generic Diverse Sample Generation (DSG) scheme for generative data-free post-training quantization and quantization-aware training, which mitigates the detrimental homogenization of the quantized network.
Diverse Trajectory Forecasting with Determinantal Point Processes
TLDR
This work proposes to learn a diversity sampling function (DSF) that generates a diverse and likely set of future trajectories, and demonstrates the diversity of the trajectories produced by the approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.

References

Showing 1-10 of 48 references
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
TLDR
VEEGAN is introduced, which features a reconstructor network that reverses the action of the generator by mapping from data to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.
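A toy sketch of the reconstructor idea, assuming simple linear stand-ins for the networks: penalizing ||F(G(z)) - z||^2 discourages many different z values from collapsing onto the same output, since a collapsed generator cannot be inverted. This is a hypothetical illustration, not VEEGAN's full training objective.

```python
import numpy as np

def reconstructor_penalty(z_batch, G, F):
    """Mean ||F(G(z)) - z||^2: a toy version of the reconstruction idea."""
    return np.mean([np.sum((F(G(z)) - z) ** 2) for z in z_batch])

rng = np.random.default_rng(4)
W = rng.normal(size=(8, 2))
V = rng.normal(size=(2, 8))
G = lambda z: W @ z          # toy linear 'generator': 2-d noise -> 8-d data
F = lambda x: V @ x          # toy linear 'reconstructor': data -> noise
z_batch = rng.normal(size=(16, 2))
print(reconstructor_penalty(z_batch, G, F))
```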
PacGAN: The Power of Two Samples in Generative Adversarial Networks
TLDR
It is shown that packing naturally penalizes generators with mode collapse, thereby favoring generator distributions with less mode collapse during the training process, and numerical experiments suggest that packing provides significant improvements in practice as well.
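Packing itself is a small mechanical change: concatenate m samples into a single discriminator input, so a mode-collapsed generator produces visibly repetitive packed inputs that are easy to reject. A minimal sketch assuming flat feature vectors (not the authors' code):

```python
import numpy as np

def pack(batch, m):
    """Concatenate groups of m samples along the feature axis."""
    n, d = batch.shape
    assert n % m == 0, "batch size must be divisible by the packing degree"
    return batch.reshape(n // m, m * d)

x = np.arange(12.0).reshape(6, 2)   # 6 samples with 2 features each
print(pack(x, 3).shape)             # -> (2, 6): each row packs 3 samples
```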
On GANs and GMMs
TLDR
This paper presents a simple method to evaluate generative models based on the relative proportions of samples that fall into predetermined bins, and shows that GMMs can generate realistic samples and also capture the full distribution, which GANs fail to do.
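A sketch of a binning-based comparison in this spirit: assign real and generated samples to the same predetermined bins and compare the two proportion vectors. The 1-d histogram, toy distributions, and total-variation summary below are illustrative assumptions; the paper's full procedure is more involved than this toy version.

```python
import numpy as np

def bin_proportions(samples, edges):
    """Fraction of 1-d samples falling into each predetermined bin."""
    counts, _ = np.histogram(samples, bins=edges)
    return counts / counts.sum()

rng = np.random.default_rng(5)
edges = np.linspace(-3, 3, 11)                       # 10 fixed bins
real = rng.normal(size=5000)                         # full distribution
fake = rng.normal(loc=1.0, scale=0.3, size=5000)     # mode-collapsed samples
p = bin_proportions(real, edges)
q = bin_proportions(fake, edges)
print(np.abs(p - q).sum())   # a large gap signals missing modes
```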
Coverage and Quality Driven Training of Generative Image Models
TLDR
A model is proposed that extends variational autoencoders by using deterministic invertible transformation layers to map samples from the decoder to the image space, improving over commonly used factorial decoders and achieving sample quality typical of adversarially trained networks.
Learning Determinantal Point Processes
TLDR
This thesis shows how determinantal point processes can be used as probabilistic models for binary structured problems characterized by global, negative interactions, and demonstrates experimentally that the techniques introduced allow DPPs to be used for real-world tasks like document summarization, multiple human pose estimation, search diversification, and the threading of large document collections.
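Concretely, an L-ensemble DPP assigns a subset Y probability P(Y) = det(L_Y) / det(L + I), so items with large pairwise similarity entries rarely co-occur. A small worked example of this standard definition (generic DPP math, not code from the thesis):

```python
import numpy as np

# L-ensemble kernel over 3 items; items 0 and 1 are highly similar.
L = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
Z = np.linalg.det(L + np.eye(3))   # normalizer: sums det(L_Y) over all subsets

def p(subset):
    """P(Y) = det(L_Y) / det(L + I) for an L-ensemble DPP."""
    L_Y = L[np.ix_(subset, subset)]
    return np.linalg.det(L_Y) / Z

print(p([0, 1]))   # similar pair: det = 1 - 0.81 = 0.19, low probability
print(p([0, 2]))   # diverse pair: det = 1 - 0.01 = 0.99, higher probability
```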
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G…
Multi-agent Diverse Generative Adversarial Networks
TLDR
MAD-GAN, an intuitive generalization of Generative Adversarial Networks and their conditional variants, is proposed to address the well-known problem of mode collapse, and its efficacy on the unsupervised feature representation task is shown.
DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data
TLDR
The proposed DeLiGAN can generate images of handwritten digits, objects, and hand-drawn sketches, all using limited amounts of data, and introduces a modified version of the inception score, a measure that has been found to correlate well with human assessment of generated samples.
Self-Attention Generative Adversarial Networks
TLDR
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception Distance from 27.62 to 18.65 on the challenging ImageNet dataset.
Mode Regularized Generative Adversarial Networks
TLDR
This work introduces several ways of regularizing the objective, which can dramatically stabilize the training of GAN models, and shows that these regularizers can help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing-modes problem.