Corpus ID: 219559283

Low Distortion Block-Resampling with Spatially Stochastic Networks

@article{Hong2020LowDB,
  title={Low Distortion Block-Resampling with Spatially Stochastic Networks},
  author={Sarah Jane Hong and Mart{\'i}n Arjovsky and Ian Thompson and Darryl Barnhart},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.05394}
}
We formalize and attack the problem of generating new images from old ones that are as diverse as possible, only allowing them to change without restrictions in certain parts of the image while remaining globally consistent. This encompasses the typical situation found in generative modelling, where we are happy with parts of the generated data, but would like to resample others ("I like this generated castle overall, but this tower looks unrealistic, I would like a new one"). In order to… 
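
The operation the abstract describes can be sketched as code: hold the latent content of an image fixed everywhere except in user-chosen blocks, and redraw only those. This is a minimal sketch assuming a hypothetical generator driven by a per-location noise grid z_spatial, so that local content can be resampled independently; the paper's actual network and training objective are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_resample(z_spatial: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Resample the spatial noise only where mask == 1; keep the rest.

    z_spatial : (H, W, C) per-location latent codes.
    mask      : (H, W) binary array marking the blocks to regenerate.
    """
    fresh = rng.standard_normal(z_spatial.shape)
    return np.where(mask[..., None].astype(bool), fresh, z_spatial)

# Hypothetical usage: redraw the top-left quadrant of a 16x16 noise grid while
# leaving the other latents (and so, ideally, the rest of the image) untouched.
# A real generator would then map z_spatial to pixels.
z = rng.standard_normal((16, 16, 8))
mask = np.zeros((16, 16))
mask[:8, :8] = 1
z_new = block_resample(z, mask)
assert np.allclose(z[8:], z_new[8:])  # latents outside the masked rows survive
```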

Citations

SMILE: Semantically-guided Multi-attribute Image and Layout Editing

This paper successfully exploits a multimodal representation that handles all attributes, whether guided by random noise or by exemplar images, while using only the underlying domain information of the target domain.
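
The mechanism named in this summary is a style-source switch: the attribute code comes either from random noise or from an encoder applied to an exemplar image. A schematic sketch under that reading, with hypothetical module names and sizes (not SMILE's actual architecture):

```python
import torch
import torch.nn as nn

style_dim = 16
# Hypothetical exemplar encoder: image features -> attribute style code.
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, style_dim))

def style_code(exemplar=None):
    """Noise-guided when no exemplar is given, exemplar-guided otherwise."""
    if exemplar is None:
        return torch.randn(1, style_dim)   # random style -> diverse random edits
    return encoder(exemplar)               # style extracted from a reference image

s_random = style_code()                    # noise-guided editing
s_guided = style_code(torch.randn(1, 64))  # exemplar-guided editing
print(s_random.shape, s_guided.shape)
```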

StyleFusion: Disentangling Spatial Segments in StyleGAN-Generated Images

StyleFusion, a new mapping architecture for StyleGAN, takes as input a number of latent codes and fuses them into a single style code, resulting in a single harmonized image in which each semantic region is controlled by one of the input latent codes.

StyleFusion: A Generative Model for Disentangling Spatial Segments

This paper presents StyleFusion, a new mapping architecture for StyleGAN that takes as input a number of latent codes and fuses them into a single style code, providing fine-grained control over each region of the generated image.
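
Both StyleFusion entries describe the same idea: several latent codes are fused so that each semantic region of the output answers to its own code. One plausible fusion rule, sketched as a mask-weighted blend of per-region style codes; the paper's fusion network is learned, so this fixed blend is only illustrative.

```python
import numpy as np

def fuse_styles(styles: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """Blend K style codes into one spatial style map.

    styles : (K, D) one style code per semantic region.
    masks  : (K, H, W) soft assignment of each pixel to a region (sums to 1 over K).
    Returns an (H, W, D) style map: each pixel's style is a mask-weighted mix.
    """
    return np.einsum('khw,kd->hwd', masks, styles)

K, D, H, W = 3, 512, 4, 4
styles = np.random.randn(K, D)
masks = np.random.rand(K, H, W)
masks /= masks.sum(axis=0, keepdims=True)  # normalize to a soft partition
fused = fuse_styles(styles, masks)         # (4, 4, 512)
```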

Unpacking the Expressed Consequences of AI Research in Broader Impact Statements

A qualitative thematic analysis of a sample of statements written for the NeurIPS 2020 conference identifies themes related to how consequences are expressed, the areas of impact discussed, and researchers' recommendations for mitigating negative consequences in the future.

References

Showing 1-10 of 39 references

Invertibility of Convolutional Generative Networks from Partial Measurements

It is rigorously proved that, under some mild technical assumptions, the input of a two-layer convolutional generative network can be deduced from the network output efficiently using simple gradient descent, implying that the mapping from the low-dimensional latent space to the high-dimensional image space is bijective.
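
The result is constructive: the latent input can be recovered from the generator's output by plain gradient descent. A toy PyTorch sketch of that recovery loop for a small two-layer network; shapes, step size, and iteration count are illustrative, not the paper's.

```python
import torch

torch.manual_seed(0)

# Toy two-layer "generator": latent -> image-like output, weights held fixed.
gen = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 256),
)
for p in gen.parameters():
    p.requires_grad_(False)

z_true = torch.randn(16)
y = gen(z_true)                                # the observed network output

z = (0.1 * torch.randn(16)).requires_grad_()   # optimize only the latent
opt = torch.optim.SGD([z], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = ((gen(z) - y) ** 2).mean()
    loss.backward()
    opt.step()

# For well-conditioned networks the reconstruction error shrinks toward zero,
# which is the empirical face of the paper's invertibility claim.
print(loss.item())
```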

A Style-Based Generator Architecture for Generative Adversarial Networks

An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
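
The architectural change is easy to sketch: a mapping network turns z into an intermediate latent w, and affine projections of w scale and shift each synthesis layer. A compressed PyTorch sketch of that control flow; the real StyleGAN uses an 8-layer mapping network, AdaIN at every resolution, and per-layer noise, all omitted here.

```python
import torch
import torch.nn as nn

class TinyStyleGen(nn.Module):
    """Sketch: a mapping network produces a style vector w, and an affine
    projection of w scales and shifts each synthesis layer's activations."""
    def __init__(self, z_dim=64, w_dim=64, feat=32):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(z_dim, w_dim), nn.ReLU(),
                                     nn.Linear(w_dim, w_dim))
        self.affine = nn.Linear(w_dim, 2 * feat)      # w -> (scale, bias)
        self.synth = nn.Linear(feat, feat)            # stand-in for a conv layer
        self.const = nn.Parameter(torch.randn(feat))  # learned constant input

    def forward(self, z):
        w = self.mapping(z)                           # intermediate latent space W
        scale, bias = self.affine(w).chunk(2, dim=-1)
        h = self.synth(self.const.expand(z.shape[0], -1))
        return h * (1 + scale) + bias                 # style modulation

g = TinyStyleGen()
out = g(torch.randn(4, 64))   # (4, 32) feature vectors standing in for images
print(out.shape)
```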

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
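
A toy-scale rendering of that two-player game in PyTorch, with both networks shrunk to a few linear layers and a 1-D "data" distribution; real GAN training adds many stabilizing details this sketch omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(500):
    real = torch.randn(64, 1) * 0.5 + 2.0   # toy "data": N(2, 0.25)
    fake = G(torch.randn(64, 8))

    # D's step: push real samples toward label 1 and fakes toward label 0.
    d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(64, 1)) +
              F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # G's step: make D label its samples real (the non-saturating form).
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(fake.mean().item())   # drifts toward 2.0 as G matches the data
```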

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
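
The conditioning itself is simple to state: the generator consumes a semantic label map, typically one-hot encoded over classes, rather than noise alone. A sketch of that input encoding; the paper's multi-scale generators and feature-matching loss are omitted.

```python
import numpy as np

def one_hot_label_map(labels: np.ndarray, n_classes: int) -> np.ndarray:
    """(H, W) integer label map -> (n_classes, H, W) one-hot conditioning input."""
    return (np.arange(n_classes)[:, None, None] == labels[None]).astype(np.float32)

labels = np.random.default_rng(0).integers(0, 3, size=(8, 8))
cond = one_hot_label_map(labels, 3)        # fed to the conditional generator
print(cond.shape, cond.sum(axis=0).min())  # every pixel has exactly one class
```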

Seeing What a GAN Cannot Generate

This work visualizes mode collapse at both the distribution level and the instance level, deploying a semantic segmentation network to compare the distribution of segmented objects in the generated images with the target distribution in the training set.
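
The comparison reduces to a per-class statistic: segment real and generated images, then compare how much of each class the GAN produces. A small NumPy sketch with random label maps standing in for segmenter output; the class IDs are illustrative.

```python
import numpy as np

def class_histogram(label_maps: np.ndarray, n_classes: int) -> np.ndarray:
    """Fraction of pixels per semantic class over a batch of label maps."""
    counts = np.bincount(label_maps.ravel(), minlength=n_classes)
    return counts / counts.sum()

# Stand-ins for segmenter outputs on real vs. generated images.
rng = np.random.default_rng(0)
real_seg = rng.integers(0, 5, size=(100, 64, 64))
fake_seg = rng.integers(0, 4, size=(100, 64, 64))  # class 4 never generated

gap = class_histogram(real_seg, 5) - class_histogram(fake_seg, 5)
# A large positive gap flags classes the GAN under-produces (here, class 4).
print(gap.round(3))
```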

LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop

This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.
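
The labeling scheme amplifies a small labeled seed set: a classifier auto-accepts confident predictions and routes only ambiguous examples to annotators. A schematic sketch of that triage rule; the thresholds and score interface are hypothetical.

```python
def triage(scores, hi=0.95, lo=0.05):
    """Split binary-relevance scores into auto-accept / auto-reject / human queues."""
    accept = [i for i, s in enumerate(scores) if s >= hi]
    reject = [i for i, s in enumerate(scores) if s <= lo]
    human = [i for i, s in enumerate(scores) if lo < s < hi]
    return accept, reject, human

accept, reject, human = triage([0.99, 0.5, 0.02, 0.8])
print(human)  # only the ambiguous items (indices 1 and 3) cost human effort
```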

HoloGAN: Unsupervised Learning of 3D Representations From Natural Images

HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner, and it is shown to generate images with visual quality similar to or higher than that of other generative models.
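
The unsupervised trick is architectural: generate a 3D feature volume, apply a random rigid rotation, and project to 2D, so the network must learn pose-aware 3D features. A toy sketch using a 90-degree rotation and a sum projection; HoloGAN itself uses learned constant tensors, arbitrary rotations, and a learned projector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a generated 3D feature volume: (channels, depth, height, width).
volume = rng.standard_normal((8, 16, 16, 16))

# Rigid transform: rotate the volume about the vertical axis (90 degrees here).
rotated = np.rot90(volume, k=1, axes=(1, 3))

# Projection: collapse depth to get a 2D feature map for the 2D layers to refine.
feature_map = rotated.sum(axis=1)   # (channels, height, width)
print(feature_map.shape)
```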

Toward Multimodal Image-to-Image Translation

This work aims to model a distribution of possible outputs in a conditional generative modeling setting that helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse.
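
One concrete way to keep the latent-to-output mapping one-to-one, sketched below, is latent regression: if an encoder can recover z from G(x, z), the generator cannot ignore z. Architectures are compressed to linear layers; the paper (BicycleGAN) combines this with a cVAE-GAN branch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8 + 4, 32), nn.ReLU(), nn.Linear(32, 16))  # (x, z) -> y
E = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))      # y -> z_hat

x = torch.randn(5, 8)                     # conditioning input
z = torch.randn(5, 4)                     # latent code carrying the output mode
y = G(torch.cat([x, z], dim=1))
z_hat = E(y)
latent_recon = (z_hat - z).abs().mean()   # added to the GAN loss so G must use z
print(latent_recon.item())
```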

Analyzing and Improving the Image Quality of StyleGAN

This work redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
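
The normalization redesign is the most codeable piece: styles scale the convolution weights (modulation), and each output filter is then rescaled to unit expected norm (demodulation), replacing AdaIN. A reduced-shape NumPy sketch of that computation; StyleGAN2's grouped-conv batching and the rest of its machinery are omitted.

```python
import numpy as np

def modulate_demodulate(w: np.ndarray, s: np.ndarray, eps: float = 1e-8):
    """w : (out_ch, in_ch, k, k) conv weights; s : (in_ch,) style scales.

    Modulate: scale input channels by the style. Demodulate: normalize each
    output filter to unit L2 norm, keeping activation magnitudes
    style-independent.
    """
    w_mod = w * s[None, :, None, None]
    demod = 1.0 / np.sqrt((w_mod ** 2).sum(axis=(1, 2, 3)) + eps)
    return w_mod * demod[:, None, None, None]

w = np.random.randn(4, 3, 3, 3)
s = np.random.rand(3) + 0.5
w_prime = modulate_demodulate(w, s)
print(np.linalg.norm(w_prime.reshape(4, -1), axis=1))  # ~1 per output filter
```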

GANSpace: Discovering Interpretable GAN Controls

This paper describes a simple technique to analyze Generative Adversarial Networks and create interpretable controls for image synthesis, and shows that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner.
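
The recipe is unsupervised and short: sample many latent vectors, run PCA on them (or on early-layer activations), and use the top principal directions as edit controls. A sketch of the latent-space variant in plain NumPy; the paper applies this to StyleGAN's W space and to BigGAN's layer-wise inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for sampled intermediate latents, e.g. w = mapping(z) in StyleGAN.
W = rng.standard_normal((10_000, 512))

# PCA via SVD on centered samples: rows of Vt are the principal directions.
mu = W.mean(axis=0)
_, _, Vt = np.linalg.svd(W - mu, full_matrices=False)
directions = Vt[:10]                  # the top components become edit controls

# Hypothetical edit: move a latent along component 0 with strength sigma,
# then feed w_edited back into the generator.
w = rng.standard_normal(512)
sigma = 2.0
w_edited = w + sigma * directions[0]
```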