Dual Pyramid Generative Adversarial Networks for Semantic Image Synthesis

@article{Li2022DualPG,
  title={Dual Pyramid Generative Adversarial Networks for Semantic Image Synthesis},
  author={Shi-Jie Li and Ming-Ming Cheng and Juergen Gall},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.04085}
}
The goal of semantic image synthesis is to generate photo-realistic images from semantic label maps. It is highly relevant for tasks like content generation and image editing. Current state-of-the-art approaches, however, still struggle to generate realistic objects in images at various scales. In particular, small objects tend to fade away and large objects are often generated as collages of patches. In order to address this issue, we propose a Dual Pyramid Generative Adversarial Network (DP… 

Location-aware Adaptive Denormalization: A Deep Learning Approach For Wildfire Danger Forecasting

A two-branch architecture with a Location-aware Adaptive Denormalization layer (LOADE), which modulates the dynamic features conditioned on their geographical location, and an absolute temporal encoding are proposed for time-related forecasting problems.
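As a rough illustration of the location-aware adaptive denormalization idea (assuming a SPADE-style scale-and-shift modulation predicted from a geographical conditioning input; the class name, shapes, and parameters below are hypothetical), a minimal sketch:

```python
import torch
import torch.nn as nn

class LocationAwareAdaptiveDenorm(nn.Module):
    """Illustrative sketch: normalize the dynamic features, then re-modulate
    them with a per-pixel scale and shift predicted from a location encoding
    (e.g. rasterized geographic features). Names and shapes are assumptions."""

    def __init__(self, num_features: int, location_channels: int, hidden: int = 64):
        super().__init__()
        # Parameter-free normalization of the dynamic feature maps.
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        # Small conv net mapping the location encoding to per-pixel gamma/beta.
        self.shared = nn.Sequential(
            nn.Conv2d(location_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, location: torch.Tensor) -> torch.Tensor:
        # x:        (B, num_features, H, W) dynamic features
        # location: (B, location_channels, H, W) geographical conditioning
        h = self.shared(location)
        return self.norm(x) * (1 + self.to_gamma(h)) + self.to_beta(h)
```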

References

Showing 1-10 of 33 references

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.

Learning to Predict Layout-to-image Conditional Convolutions for Semantic Image Synthesis

This work argues that the convolutional kernels in the generator should be aware of the distinct semantic labels at different locations when generating images, and proposes a feature pyramid semantics-embedding discriminator that is more effective at enhancing fine details and semantic alignment between the generated images and the input semantic layouts.
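The idea of label-aware convolutions can be illustrated by predicting a kernel for every spatial location from the semantic layout. The sketch below is a simplified depthwise variant (names, shapes, and the softmax normalization are assumptions, not the paper's operator):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayoutPredictedDepthwiseConv(nn.Module):
    """Simplified illustration: a small network predicts a k x k depthwise
    kernel for every spatial location from the semantic layout, so the
    filtering applied to the image features varies with the local labels."""

    def __init__(self, channels: int, label_channels: int, k: int = 3):
        super().__init__()
        self.channels, self.k = channels, k
        self.predict = nn.Conv2d(label_channels, channels * k * k, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, layout: torch.Tensor) -> torch.Tensor:
        B, C, H, W = feat.shape
        k = self.k
        # Per-location kernels predicted from the layout: (B, C, k*k, H*W).
        kernels = self.predict(layout).view(B, C, k * k, H * W)
        kernels = torch.softmax(kernels, dim=2)  # keep outputs well-scaled
        # k x k neighbourhoods of the features: (B, C, k*k, H*W).
        patches = F.unfold(feat, kernel_size=k, padding=k // 2).view(B, C, k * k, H * W)
        out = (kernels * patches).sum(dim=2)     # location-dependent depthwise conv
        return out.view(B, C, H, W)
```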

You Only Need Adversarial Supervision for Semantic Image Synthesis

This work proposes a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results, and re-designs the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training.
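A minimal sketch of what such a segmentation-style adversarial loss could look like, assuming the discriminator predicts a per-pixel class over the N semantic labels plus one extra "fake" class (function names and signatures are illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def segmentation_d_loss(d_logits_real, d_logits_fake, label_map, fake_class):
    """Discriminator as a per-pixel classifier: real pixels should be labelled
    with their semantic class, generated pixels with an extra 'fake' class.
    d_logits_*: (B, N+1, H, W); label_map: (B, H, W) long tensor of class ids."""
    loss_real = F.cross_entropy(d_logits_real, label_map)
    loss_fake = F.cross_entropy(d_logits_fake, torch.full_like(label_map, fake_class))
    return loss_real + loss_fake

def segmentation_g_loss(d_logits_fake, label_map):
    """The generator is rewarded when its output pixels are classified as the
    correct semantic classes rather than as fake."""
    return F.cross_entropy(d_logits_fake, label_map)
```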

OASIS: Only Adversarial Supervision for Semantic Image Synthesis

A novel, simplified GAN model is proposed that achieves a strong improvement in image synthesis quality over prior state-of-the-art models across the commonly used ADE20K, Cityscapes, and COCO-Stuff datasets using only adversarial supervision.

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
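The truncation trick itself is simple to illustrate: sample the latent vector from a truncated normal distribution so that the generator's input has lower variance. The helper below is an assumed sketch, not any particular implementation:

```python
import torch

def truncated_noise(batch_size: int, dim: int, truncation: float = 0.5) -> torch.Tensor:
    """Draw latents from a normal distribution and resample any entry whose
    magnitude exceeds `truncation`. Smaller thresholds trade sample variety
    for fidelity; the default value here is arbitrary."""
    z = torch.randn(batch_size, dim)
    while True:
        mask = z.abs() > truncation
        if not mask.any():
            return z
        z[mask] = torch.randn(int(mask.sum()))
```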

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
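As a rough sketch of such a conditional adversarial objective (the discriminator sees the conditioning input paired with a real or generated image, and the generator adds an L1 term toward the target), with illustrative helper names and loss weights:

```python
import torch
import torch.nn.functional as F

def cgan_losses(D, G, condition, real_image, l1_weight=100.0):
    """Sketch of a conditional GAN objective for image-to-image translation.
    Assumed interfaces: D(condition, image) -> real/fake logits; G(condition) -> image."""
    fake_image = G(condition)

    # Discriminator: real pairs -> 1, fake pairs -> 0.
    d_real = D(condition, real_image)
    d_fake = D(condition, fake_image.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    # Generator: fool the discriminator and stay close to the ground truth.
    d_fake_g = D(condition, fake_image)
    g_loss = (F.binary_cross_entropy_with_logits(d_fake_g, torch.ones_like(d_fake_g))
              + l1_weight * F.l1_loss(fake_image, real_image))
    return d_loss, g_loss
```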

Dual Attention GANs for Semantic Image Synthesis

A novel Dual Attention GAN (DAGAN) is proposed to synthesize photo-realistic and semantically-consistent images with fine details from the input layouts without imposing extra training overhead or modifying the network architectures of existing methods.
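The summary does not spell out the attention design, so the sketch below only shows a generic combination of position (spatial) and channel attention over a feature tensor; it is not the DAGAN architecture:

```python
import torch
import torch.nn as nn

class PositionChannelAttention(nn.Module):
    """Generic dual-attention sketch: a spatial attention map over locations
    and a channel attention map over feature channels, added residually."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                      # (B, C/r, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        # Position attention: every location attends to every other location.
        pos = torch.softmax(q @ k, dim=-1)              # (B, HW, HW)
        pos_out = (v @ pos.transpose(1, 2)).view(B, C, H, W)
        # Channel attention: channels attend to each other.
        flat = x.flatten(2)                             # (B, C, HW)
        chan = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, C, C)
        chan_out = (chan @ flat).view(B, C, H, W)
        return x + pos_out + chan_out
```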

Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation

This work considers learning scene generation in a local context and correspondingly designs a local class-specific generative network with semantic maps as guidance, which separately constructs and learns sub-generators concentrating on the generation of different classes and is able to provide more scene details.
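A much-simplified sketch of the local class-specific idea: one small sub-generator per class is applied inside that class's mask and combined with a global branch (module names and the exact composition are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassSpecificGenerator(nn.Module):
    """Illustrative only: each sub-generator refines features within its own
    class mask, and the masked outputs are added to a global branch."""

    def __init__(self, num_classes: int, channels: int):
        super().__init__()
        self.num_classes = num_classes
        self.sub_generators = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(num_classes)
        )
        self.global_branch = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor, label_map: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); label_map: (B, H, W) long tensor of class ids.
        out = self.global_branch(feat)
        masks = F.one_hot(label_map, self.num_classes).permute(0, 3, 1, 2).float()
        for c, sub in enumerate(self.sub_generators):
            mask = masks[:, c:c + 1]              # (B, 1, H, W)
            out = out + sub(feat * mask) * mask   # class-local contribution
        return out
```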

Photographic Image Synthesis with Cascaded Refinement Networks

  • Qifeng Chen, V. Koltun · 2017 IEEE International Conference on Computer Vision (ICCV), 2017
It is shown that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective.
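The direct regression objective can be illustrated as a pixel loss plus a perceptual loss against the ground-truth image, with no adversarial term; the feature extractor and layer weights below are placeholders, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def direct_regression_loss(fake, real, feature_extractor, layer_weights=(1.0, 1.0, 1.0)):
    """Sketch: match the synthesized image to the ground truth in pixel space
    and in the feature space of a fixed, pretrained network. It is assumed
    that feature_extractor(img) returns a list of feature maps."""
    loss = F.l1_loss(fake, real)
    fake_feats = feature_extractor(fake)
    with torch.no_grad():
        real_feats = feature_extractor(real)
    for w, f_fake, f_real in zip(layer_weights, fake_feats, real_feats):
        loss = loss + w * F.l1_loss(f_fake, f_real)
    return loss
```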

A Style-Based Generator Architecture for Generative Adversarial Networks

An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
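As a minimal illustration of style-based modulation, the sketch below maps a latent code to a per-channel scale and bias that modulate normalized feature maps (AdaIN-style); the full architecture also has a mapping network, per-layer noise inputs, and progressive synthesis blocks, which are omitted here:

```python
import torch
import torch.nn as nn

class StyleModulation(nn.Module):
    """Core idea only: instance-normalize the features, then scale and shift
    them with parameters derived from the (intermediate) latent code."""

    def __init__(self, channels: int, latent_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_style = nn.Linear(latent_dim, channels * 2)

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features; w: (B, latent_dim) latent code.
        scale, bias = self.to_style(w).chunk(2, dim=1)
        return self.norm(x) * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
```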