DRAN: Detailed Region-Adaptive Normalization for Conditional Image Synthesis
@inproceedings{Lyu2021DRANDR,
  title  = {DRAN: Detailed Region-Adaptive Normalization for Conditional Image Synthesis},
  author = {Yueming Lyu and P. Chen and Jingna Sun and Xu Wang and Jing Dong and Tieniu Tan},
  year   = {2021}
}
In recent years, conditional image synthesis has attracted growing attention due to its controllability in the image generation process. Although recent works have achieved realistic results, most of them fail to handle fine-grained styles with subtle details. To address this problem, a novel normalization module, named DRAN, is proposed. It learns fine-grained style representations while maintaining robustness to general styles. Specifically, we first introduce a multi-level structure…
References
Showing 1-10 of 42 references
SEAN: Image Synthesis With Semantic Region-Adaptive Normalization
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
We propose semantic region-adaptive normalization (SEAN), a simple but effective building block for Generative Adversarial Networks conditioned on segmentation masks that describe the semantic…
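To make the idea concrete, below is a minimal NumPy sketch of region-adaptive modulation in the spirit of SEAN. It is not the authors' implementation; the tensor shapes, the `region_adaptive_norm` helper, and the assumption that `gamma`/`beta` are supplied directly (rather than predicted from per-region style codes) are illustrative simplifications.

```python
# Simplified sketch: normalize activations, then scale and shift them with
# per-region parameters broadcast over the pixels of each semantic region.
import numpy as np

def region_adaptive_norm(x, mask, gamma, beta, eps=1e-5):
    """x: (C, H, W) activations; mask: (K, H, W) one-hot segmentation;
    gamma, beta: (K, C) per-region modulation parameters."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    # Broadcast each region's (C,) parameters over that region's pixels.
    gamma_map = np.einsum('kc,khw->chw', gamma, mask)
    beta_map = np.einsum('kc,khw->chw', beta, mask)
    return gamma_map * x_norm + beta_map
```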
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A new method is presented for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs); it significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
Region-aware Adaptive Instance Normalization for Image Harmonization
- Computer Science · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This paper proposes a simple yet effective Region-aware Adaptive Instance Normalization (RAIN) module, which explicitly formulates the visual style from the background and adaptively applies it to the foreground.
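The mechanism can be sketched as follows. This is a simplified illustration under assumed tensor shapes, not the RAIN module itself: style statistics are computed only over the masked background region and then used to re-normalize the foreground, leaving background pixels untouched.

```python
# Region-aware re-normalization sketch: restyle the foreground with
# background statistics computed through the mask.
import numpy as np

def region_aware_adain(feat, fg_mask, eps=1e-5):
    """feat: (C, H, W) feature map; fg_mask: (H, W) with 1 on the foreground."""
    fg = fg_mask[None].astype(feat.dtype)   # (1, H, W)
    bg = 1.0 - fg

    def masked_stats(m):
        area = m.sum() + eps
        mean = (feat * m).sum(axis=(1, 2), keepdims=True) / area
        var = ((feat - mean) ** 2 * m).sum(axis=(1, 2), keepdims=True) / area
        return mean, np.sqrt(var + eps)

    fg_mean, fg_std = masked_stats(fg)
    bg_mean, bg_std = masked_stats(bg)
    # Normalize the foreground with its own statistics, restyle it with the
    # background statistics, and keep the background unchanged.
    fg_restyled = bg_std * (feat - fg_mean) / fg_std + bg_mean
    return fg * fg_restyled + bg * feat
```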
Diverse Semantic Image Synthesis via Probability Distribution Modeling
- Computer Science · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This paper proposes a novel diverse semantic image synthesis framework from the perspective of semantic class distributions, which naturally supports diverse generation at the semantic or even instance level by modeling class-level conditional modulation parameters as continuous probability distributions instead of discrete values.
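A hedged sketch of the core idea follows: instead of a single deterministic modulation vector per semantic class, each class keeps a mean and a log-variance, and modulation parameters are drawn from that Gaussian, so resampling yields diverse outputs. The shapes and the reparameterization step shown here are assumptions for illustration, not the paper's code.

```python
# Draw class-level modulation parameters from per-class Gaussians.
import numpy as np

def sample_class_modulation(mu, logvar, rng=np.random.default_rng()):
    """mu, logvar: (K, C) per-class distribution parameters.
    Returns one (K, C) sample of modulation parameters per call."""
    std = np.exp(0.5 * logvar)
    return mu + std * rng.standard_normal(mu.shape)

mu_gamma = np.zeros((10, 64))
logvar_gamma = np.full((10, 64), -2.0)
gamma_a = sample_class_modulation(mu_gamma, logvar_gamma)  # one plausible style
gamma_b = sample_class_modulation(mu_gamma, logvar_gamma)  # a different one
```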
Controllable Person Image Synthesis With Attribute-Decomposed GAN
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
The Attribute-Decomposed GAN is introduced, a novel generative model for controllable person image synthesis that can produce realistic person images with desired human attributes provided by various source inputs; it demonstrates superiority over the state of the art in pose transfer and effectiveness in the new task of component attribute transfer.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
- Computer Science · ICLR
- 2019
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input.
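The "truncation trick" mentioned above can be illustrated with the hedged sketch below: latent values whose magnitude exceeds a threshold are resampled, which lowers the variance of the generator's input and trades diversity for fidelity. The threshold and latent dimensionality here are arbitrary choices for the sketch.

```python
# Resample out-of-range latent entries to reduce input variance.
import numpy as np

def truncated_latents(batch, dim, threshold=0.5, rng=np.random.default_rng()):
    z = rng.standard_normal((batch, dim))
    # Resample any entry that falls outside [-threshold, threshold].
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > threshold
    return z

z = truncated_latents(batch=8, dim=128, threshold=0.5)
# Smaller thresholds give higher-fidelity but less varied samples.
```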
Semantically Multi-Modal Image Synthesis
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
A novel Group Decreasing Network (GroupDNet) is proposed that leverages group convolutions in the generator and progressively decreases the number of groups in the decoder; it offers far more controllability in translating semantic labels to natural images and produces plausible, high-quality results for datasets with many classes.
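The "group decreasing" idea can be sketched schematically: decoder convolutions start with many groups, so each semantic class keeps a largely separate pathway, and the group count drops layer by layer until features are fully mixed at the output. The layer sizes and group counts below are made up for illustration and are not GroupDNet's actual architecture.

```python
# Schematic decoder with progressively decreasing group counts.
import torch.nn as nn

decoder = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1, groups=8),  # class-wise pathways
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 128, kernel_size=3, padding=1, groups=4),  # partially mixed
    nn.ReLU(inplace=True),
    nn.Conv2d(128, 64, kernel_size=3, padding=1, groups=2),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, kernel_size=3, padding=1, groups=1),     # fully mixed RGB output
)
```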
Spatially-invariant Style-codes Controlled Makeup Transfer
- Computer Science · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This paper proposes a style-based controllable GAN model consisting of three components responsible for target style-code encoding, face identity feature extraction, and makeup fusion, respectively; it demonstrates great flexibility in makeup transfer by supporting makeup removal, shade-controllable makeup transfer, and part-specific makeup transfer, even under large spatial misalignment.
MaskGAN: Towards Diverse and Interactive Facial Image Manipulation
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This work proposes a novel framework termed MaskGAN, enabling diverse and interactive face manipulation, and finds that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation.
Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This paper presents a simple yet effective approach that for the first time enables arbitrary style transfer in real-time, comparable to the fastest existing approach, without the restriction to a pre-defined set of styles.
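The adaptive instance normalization (AdaIN) operation behind this approach is well known: the content feature is normalized per channel and then rescaled with the channel-wise statistics of the style feature. The NumPy sketch below illustrates it; the tensor shapes and epsilon value are illustrative assumptions rather than details taken from the reference.

```python
# AdaIN sketch: align content feature statistics with style feature statistics.
import numpy as np

def adain(content, style, eps=1e-5):
    """content, style: feature maps of shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize the content feature, then apply the style statistics.
    normalized = (content - c_mean) / (c_std + eps)
    return s_std * normalized + s_mean

content = np.random.randn(64, 32, 32)
style = np.random.randn(64, 32, 32)
stylized = adain(content, style)  # same shape as content
```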