Corpus ID: 219559283

Low Distortion Block-Resampling with Spatially Stochastic Networks

@article{Hong2020LowDB,
  title={Low Distortion Block-Resampling with Spatially Stochastic Networks},
  author={Sarah Jane Hong and Mart{\'i}n Arjovsky and Ian Thompson and Darryl Barnhart},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.05394}
}
We formalize and attack the problem of generating new images from old ones that are as diverse as possible, only allowing them to change without restrictions in certain parts of the image while remaining globally consistent. This encompasses the typical situation found in generative modelling, where we are happy with parts of the generated data, but would like to resample others ("I like this generated castle overall, but this tower looks unrealistic, I would like a new one"). In order to…
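The abstract names the core idea without mechanical detail. As a rough intuition only (not the paper's actual architecture, training procedure, or resampling algorithm), one can picture a generator driven by a spatial grid of noise, where "resampling a block" means redrawing the noise inside one spatial region and regenerating, so the rest of the image is held (mostly) fixed. The toy PyTorch sketch below illustrates that mechanism; all names here (ToySpatialGenerator, resample_block) are hypothetical.

```python
# Illustrative sketch only: a generator that consumes a *spatial* noise
# grid, so redrawing noise inside a block changes only that region while
# shared convolutional weights keep the output locally consistent.
# Not the paper's method; names and architecture are made up.
import torch
import torch.nn as nn


class ToySpatialGenerator(nn.Module):
    """Maps a spatial noise grid (B, C, H, W) to an image (B, 3, H, W)."""

    def __init__(self, noise_channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def resample_block(g, z, top, left, height, width):
    """Redraw the noise inside one spatial block and regenerate.

    Latent values outside the block are kept, so the image changes
    (mostly) inside the chosen region.
    """
    z_new = z.clone()
    z_new[:, :, top:top + height, left:left + width] = torch.randn_like(
        z[:, :, top:top + height, left:left + width])
    return g(z_new), z_new


# Usage: generate an image, then resample a 16x16 block near the centre.
g = ToySpatialGenerator()
z = torch.randn(1, 16, 64, 64)  # one spatial noise grid
original = g(z)
edited, z_edited = resample_block(g, z, top=24, left=24, height=16, width=16)
```

In this toy version locality comes for free from the small receptive field of the convolutions; the paper's contribution is precisely to achieve such low-distortion, block-local resampling in a full generative model where that property does not hold by construction.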

Citations

SMILE: Semantically-guided Multi-attribute Image and Layout Editing
This paper successfully exploits a multimodal representation that handles all attributes, whether guided by random noise or by exemplar images, while using only the underlying domain information of the target domain.
StyleFusion: A Generative Model for Disentangling Spatial Segments
This paper presents StyleFusion, a new mapping architecture for StyleGAN that takes a number of latent codes as input and fuses them into a single style code, providing fine-grained control over each region of the generated image.
Unpacking the Expressed Consequences of AI Research in Broader Impact Statements
A qualitative thematic analysis of a sample of broader impact statements written for the NeurIPS 2020 conference identifies themes in how consequences are expressed, the areas of impact discussed, and researchers' recommendations for mitigating negative consequences in the future.
