Semi-Parametric Image Synthesis

@article{Qi2018SemiParametricIS,
  title={Semi-Parametric Image Synthesis},
  author={Xiaojuan Qi and Qifeng Chen and Jiaya Jia and Vladlen Koltun},
  journal={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={8808-8816}
}
We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network. The synthesis is performed by a deep network… 
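To make the nonparametric component concrete, here is a minimal sketch in Python/NumPy of how a memory bank keyed by semantic class could be queried with a novel layout. The function names and the IoU-based shape matching are illustrative assumptions (the paper's retrieval also accounts for segment context), not the authors' code.

import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def retrieve_segments(layout, memory_bank):
    """layout: (H, W) integer array of class ids.
    memory_bank: class id -> list of (mask, rgb_segment) pairs
    harvested from the training images (an assumed data layout).
    Returns one best-matching photographic segment per class present,
    to be handed to the synthesis network as source material."""
    references = []
    for cls in np.unique(layout):
        query_mask = layout == cls
        candidates = memory_bank.get(int(cls), [])
        if not candidates:
            continue  # no training segment of this class
        best = max(candidates, key=lambda c: mask_iou(query_mask, c[0]))
        references.append((int(cls), best[1]))
    return references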

Citations

Semi-parametric Object Synthesis
We present a new semi-parametric approach to synthesize novel views of an object from a single monocular image. First, we exploit man-made object symmetry and piece-wise planarity to integrate rich…
Image Synthesis via Semantic Composition
TLDR
A novel approach to synthesizing realistic images from semantic layouts that hypothesizes that objects with similar appearance share similar representations, and proposes a dynamic weighted network constructed by spatially conditional computation.
Semi-parametric Image Inpainting
TLDR
A novel method for generating masks with irregular holes is proposed, together with a public dataset of such masks; the method yields more realistic results than previous approaches, as confirmed by a user study.
Semantic Image Synthesis With Spatially-Adaptive Normalization
TLDR
Spatially-adaptive normalization is proposed: a simple but effective layer for synthesizing photorealistic images given an input semantic layout, which allows users to easily control the style and content of image synthesis results as well as create multi-modal results.
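The layer itself is compact enough to sketch. The PyTorch snippet below is a minimal sketch of the idea, assuming instance normalization as the parameter-free normalizer and illustrative hidden width and kernel sizes (the published layer typically uses batch normalization):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Normalize activations, then modulate them with a per-pixel
    scale and bias predicted from the semantic layout."""
    def __init__(self, channels, num_classes, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)  # parameter-free
        self.shared = nn.Sequential(
            nn.Conv2d(num_classes, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, segmap):
        # segmap: one-hot layout (N, num_classes, H, W), resized to match x
        seg = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)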
Shapes and Context: In-The-Wild Image Synthesis & Manipulation
TLDR
A data-driven model for interactively synthesizing in-the-wild images from semantic label input masks is introduced; it significantly outperforms learning-based approaches on standard image synthesis metrics and can synthesize arbitrarily high-resolution images.
USIS: Unsupervised Semantic Image Synthesis
TLDR
This work proposes a new Unsupervised paradigm for Semantic Image Synthesis (USIS): a SPADE generator learns to output images with visually separable semantic classes via a self-supervised segmentation loss, paired with whole-image wavelet-based discrimination.
RetrieveGAN: Image Synthesis via Differentiable Patch Retrieval
TLDR
This work synthesizes images from scene descriptions with retrieved patches as references; a differentiable retrieval module makes the entire pipeline end-to-end trainable, enabling the learning of better feature embeddings for retrieval.
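A minimal sketch of how the selection step can be made differentiable, assuming dot-product similarity scores and a straight-through Gumbel-softmax selection (shapes and names are illustrative, not the paper's code):

import torch
import torch.nn.functional as F

def differentiable_retrieve(query, patch_embs, patches, tau=1.0):
    """query: (D,) embedding of the scene element to fill.
    patch_embs: (K, D) embeddings of K candidate patches.
    patches: (K, C, H, W) the candidate patches themselves.
    Returns one (C, H, W) selected patch; the straight-through
    Gumbel-softmax keeps the hard selection differentiable."""
    logits = patch_embs @ query                              # (K,) scores
    weights = F.gumbel_softmax(logits, tau=tau, hard=True)   # one-hot w/ grads
    return torch.einsum("k,kchw->chw", weights, patches)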
Semantic View Synthesis
TLDR
This work tackles the new problem of semantic view synthesis: generating free-viewpoint renderings of a synthesized scene, using a semantic label map as input to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
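For reference, the MPI representation mentioned above renders a view by alpha-compositing a stack of fronto-parallel RGBA planes; a minimal sketch (plane count, ordering, and shapes are assumptions):

import torch

def composite_mpi(rgba):
    """rgba: (D, 4, H, W) planes ordered back to front.
    Returns a (3, H, W) image via repeated 'over' compositing."""
    out = torch.zeros(3, rgba.shape[2], rgba.shape[3])
    for plane in rgba:                      # back to front
        rgb, alpha = plane[:3], plane[3:4]  # alpha broadcasts over channels
        out = rgb * alpha + out * (1 - alpha)
    return out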
PasteGAN: A Semi-Parametric Method to Generate Image from Scene Graph
TLDR
This work proposes a semi-parametric method, PasteGAN, for generating an image from a scene graph and image crops, where the spatial arrangements of the objects and their pair-wise relationships are defined by the scene graph and the object appearances are determined by the given crops.
Learning Structure-Appearance Joint Embedding for Indoor Scene Image Synthesis
TLDR
This paper proposes a novel model based on a structure-appearance joint embedding learned from both images and wireframes that significantly outperforms existing state-of-the-art methods in both visual quality and structural integrity of generated images.
…

References

Showing 1-10 of 40 references
Photographic Image Synthesis with Cascaded Refinement Networks
Qifeng Chen and Vladlen Koltun. 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
TLDR
It is shown that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective.
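To illustrate what such a direct regression objective can look like, the sketch below regresses the synthesized image toward the ground truth in pixel space and in the feature space of a fixed pretrained VGG-19 (a perceptual loss in the spirit of the CRN's feature-matching objective); the layer indices and uniform weighting are assumptions:

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

_vgg = vgg19(weights="IMAGENET1K_V1").features.eval()  # fixed feature extractor
for p in _vgg.parameters():
    p.requires_grad_(False)

def regression_loss(fake, real, layers=(2, 7, 12)):
    """L1 regression on pixels plus L1 matching of VGG features.
    fake, real: (N, 3, H, W) images, ImageNet-normalized."""
    loss = F.l1_loss(fake, real)
    x, y = fake, real
    for i, layer in enumerate(_vgg):
        x, y = layer(x), layer(y)
        if i in layers:
            loss = loss + F.l1_loss(x, y)
        if i >= max(layers):
            break
    return loss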
CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs
TLDR
This work introduces a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories and identifies corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm.
StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks
TLDR
This paper proposes Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions, and introduces a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold.
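The Conditioning Augmentation idea is small enough to sketch: the conditioning vector is sampled from a Gaussian whose mean and log-variance are predicted from the text embedding, via the reparameterization trick (a KL penalty toward the standard normal, not shown, regularizes it); dimensions and names below are illustrative:

import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    def __init__(self, embed_dim, cond_dim):
        super().__init__()
        self.fc = nn.Linear(embed_dim, cond_dim * 2)

    def forward(self, text_emb):
        mu, logvar = self.fc(text_emb).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        c = mu + eps * torch.exp(0.5 * logvar)  # reparameterization trick
        return c, mu, logvar  # mu/logvar feed the KL regularizer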
Generative Adversarial Text to Image Synthesis
TLDR
A novel deep architecture and GAN formulation is developed to effectively bridge advances in text and image modeling, translating visual concepts from characters to pixels.
Learning a Discriminative Model for the Perception of Realism in Composite Images
TLDR
A convolutional neural network is trained to distinguish natural photographs from automatically generated composite images; it outperforms previous works that rely on hand-crafted heuristics at classifying realistic vs. unrealistic photos.
High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis
TLDR
This work proposes a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network.
Scene Collaging: Analysis and Synthesis of Natural Images with Semantic Layers
TLDR
This paper model a scene as a collage of warped, layered objects sampled from labeled, reference images, and exploits this representation for several applications: image editing, random scene synthesis, and image-to-anaglyph.
Deep Image Harmonization
TLDR
This work proposes an end-to-end deep convolutional neural network for image harmonization, which can capture both the context and semantic information of the composite images during harmonization and introduces an efficient way to collect large-scale and high-quality training data that can facilitate the training process.
Globally and locally consistent image completion
We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary…
Scene completion using millions of photographs
TLDR
A new image completion algorithm powered by a huge database of photographs gathered from the Web, requiring no annotations or labelling by the user, that can generate a diverse set of results for each input image and allow users to select among them.
…