Exploring and Exploiting Hubness Priors for High-Quality GAN Latent Sampling

@inproceedings{Liang2022ExploringAE,
  title={Exploring and Exploiting Hubness Priors for High-Quality GAN Latent Sampling},
  author={Yuanbang Liang and Jing Wu and Yunyu Lai and Yipeng Qin},
  booktitle={ICML},
  year={2022}
}
Despite extensive studies on Generative Adversarial Networks (GANs), how to reliably sample high-quality images from their latent spaces remains an under-explored topic. In this paper, we propose a novel GAN latent sampling method by exploring and exploiting the hubness priors of GAN latent distributions. Our key insight is that the high dimensionality of the GAN latent space will inevitably lead to the emergence of hub latents, which usually have much larger sampling densities than other latents.
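The statistic behind this insight is hubness: in a high-dimensional space, the k-occurrence N_k(x) (how often a point x appears among the k nearest neighbors of the other points) becomes strongly right-skewed, so a few "hub" latents sit in unusually dense regions of the prior. The Python sketch below illustrates this statistic on a standard Gaussian prior and selects the top hub latents. It is a minimal illustration of the hubness measure, not the authors' exact algorithm; the latent dimension (512), k = 10, and the `generator` call are placeholder assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_occurrence(latents: np.ndarray, k: int = 10) -> np.ndarray:
    """N_k(x): how often each latent appears among the k nearest
    neighbors of the other latents."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(latents)
    _, idx = nn.kneighbors(latents)
    # Column 0 is each point itself (distance 0), so drop it.
    neighbor_ids = idx[:, 1:].ravel()
    return np.bincount(neighbor_ids, minlength=len(latents))

rng = np.random.default_rng(0)
z = rng.standard_normal((10_000, 512)).astype(np.float32)  # Gaussian prior

scores = k_occurrence(z, k=10)
# In high dimensions, N_k is strongly right-skewed: a few hub latents are
# neighbors of many points, i.e. they lie in dense regions of the prior.
hub_idx = np.argsort(scores)[::-1][:100]  # keep the top-100 hub latents
z_hubs = z[hub_idx]

# Hypothetical usage with a pretrained GAN generator:
# images = generator(z_hubs)

Under the paper's hypothesis, such hub latents have higher sampling densities and thus tend to map to higher-quality images than latents drawn uniformly from the prior.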
