Alias-Free Generative Adversarial Networks
@inproceedings{Karras2021AliasFreeGA,
  title     = {Alias-Free Generative Adversarial Networks},
  author    = {Tero Karras and Miika Aittala and Samuli Laine and Erik H{\"a}rk{\"o}nen and Janne Hellsten and Jaakko Lehtinen and Timo Aila},
  booktitle = {Neural Information Processing Systems},
  year      = {2021}
}
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally…
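The central claim, that careless resampling introduces aliasing, can be illustrated outside any GAN. Below is a minimal, hedged sketch (not the paper's implementation) contrasting naive nearest-neighbour 2x upsampling with zero-insertion followed by a windowed-sinc low-pass filter; the filter length and window are illustrative assumptions.

```python
import numpy as np

def lowpass_kernel(cutoff, taps=33):
    """Windowed-sinc low-pass filter; `cutoff` is a fraction of the sampling rate."""
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(taps)
    return h / h.sum()

def upsample_2x_filtered(x):
    """Zero-insertion upsampling followed by a low-pass at the old Nyquist limit."""
    up = np.zeros(2 * len(x))
    up[::2] = 2 * x                       # gain 2 compensates the inserted zeros
    return np.convolve(up, lowpass_kernel(0.25), mode="same")

def upsample_2x_naive(x):
    """Nearest-neighbour repeat: cheap, but leaves a strong spectral image (aliasing)."""
    return np.repeat(x, 2)

t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 13 * t)            # band-limited test signal
spec_naive = np.abs(np.fft.rfft(upsample_2x_naive(x)))
spec_filt = np.abs(np.fft.rfft(upsample_2x_filtered(x)))
# Energy above the original Nyquist bin (64): large for the naive path, tiny when filtered.
print(round(spec_naive[65:].max(), 2), round(spec_filt[65:].max(), 2))
```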
487 Citations
Generative Adversarial Networks
- Computer Science, ArXiv
- 2022
This chapter gives an introduction to GANs by discussing their principal mechanism and presenting some of their inherent problems during training and evaluation, including mode collapse, vanishing gradients, and generation of low-quality images.
Anisotropic multiresolution analyses for deep fake detection
- Computer Science, ArXiv
- 2022
It is argued that, since GANs primarily utilize isotropic convolutions to generate their output, they leave clear traces in the coefficient distributions of sub-bands extracted by anisotropic transformations, and a fully separable transform capable of improving on the state of the art is employed.
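As a rough illustration of the anisotropic, fully separable idea (not the paper's transform or detector), the sketch below applies a 1D Haar split to different depths along the two image axes and summarizes detail-band energies; the depths and statistics are assumptions.

```python
import numpy as np

def haar_split(x, axis):
    """One 1D Haar level along `axis`: returns (approximation, detail)."""
    x = np.swapaxes(x, axis, -1)
    a = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)
    d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)
    return np.swapaxes(a, axis, -1), np.swapaxes(d, axis, -1)

def anisotropic_subband_stats(img, row_levels=2, col_levels=1):
    """Decompose rows and columns to *different* depths and collect detail energies."""
    stats, a = [], img.astype(np.float64)
    for _ in range(row_levels):           # vertical detail bands
        a, d = haar_split(a, axis=0)
        stats.append(np.log1p(np.abs(d)).mean())
    for _ in range(col_levels):           # horizontal detail bands
        a, d = haar_split(a, axis=1)
        stats.append(np.log1p(np.abs(d)).mean())
    return np.array(stats)                # feature vector for a downstream classifier

img = np.random.rand(64, 64)              # stand-in for a real or GAN-generated image
print(anisotropic_subband_stats(img))
```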
Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks
- Computer Science, ArXiv
- 2021
A way is proposed to use the features the discriminator has learned from the training dataset both to alter an image and to generate one from scratch after training is complete.
OASIS: Only Adversarial Supervision for Semantic Image Synthesis
- Computer Science, International Journal of Computer Vision
- 2022
A novel, simplified GAN model is proposed that achieves a strong improvement in image synthesis quality over prior state-of-the-art models across the commonly used ADE20K, Cityscapes, and COCO-Stuff datasets using only adversarial supervision.
Adversarially Slicing Generative Networks: Discriminator Slices Feature for One-Dimensional Optimal Transport
- Computer Science
- 2023
This paper derives sufficient conditions for the discriminator to serve as the distance between the distributions by connecting the GAN formulation with the concept of sliced optimal transport, and proposes a novel GAN training scheme, called the adversarially slicing generative network (ASGN), with only simple modifications.
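A hedged, self-contained example of the 1D optimal-transport quantity this connects GAN training to, the sliced Wasserstein distance between point clouds; this is not the ASGN training scheme itself.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=256, seed=None):
    """Monte-Carlo sliced 2-Wasserstein distance between point clouds x, y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_projections, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    px, py = x @ dirs.T, y @ dirs.T                        # (n, n_projections) projections
    # In 1D, optimal transport simply matches sorted samples to sorted samples.
    return np.sqrt(np.mean((np.sort(px, axis=0) - np.sort(py, axis=0)) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(1024, 8))
b = rng.normal(0.5, 1.0, size=(1024, 8))
print(sliced_wasserstein(a, a[::-1]))   # ~0: identical samples, different order
print(sliced_wasserstein(a, b))         # > 0: shifted distribution
```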
Unsupervised Image Transformation Learning via Generative Adversarial Networks
- Computer Science, ArXiv
- 2021
This work proposes a novel learning framework built on generative adversarial networks (GANs), where the discriminator and the generator share a transformation space and manages to adequately extract the variation factor between a customizable image pair by projecting both images onto the transformation space.
High-fidelity GAN Inversion with Padding Space
- Computer Science, ECCV
- 2022
This work proposes to involve the padding space of the generator to complement the latent space with spatial information, and replaces the constant padding used in convolution layers with some instance-aware coefficients to improve the inversion quality both qualitatively and quantitatively.
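A minimal sketch of the general notion of instance-aware padding, assuming a hypothetical layer that predicts per-channel padding coefficients from a latent code; the class and the coefficient predictor are illustrative, not the paper's layer design.

```python
import torch
import torch.nn as nn

class InstanceAwarePadConv(nn.Module):
    def __init__(self, in_ch, out_ch, latent_dim, kernel_size=3):
        super().__init__()
        self.pad = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=0)
        self.to_pad_value = nn.Linear(latent_dim, in_ch)   # per-instance, per-channel coefficient

    def forward(self, x, w):
        b, c, h, wd = x.shape
        p = self.pad
        pad_val = self.to_pad_value(w).view(b, c, 1, 1)    # instance-aware padding value
        canvas = pad_val.expand(b, c, h + 2 * p, wd + 2 * p).clone()
        canvas[:, :, p:p + h, p:p + wd] = x                # keep the feature-map interior
        return self.conv(canvas)

layer = InstanceAwarePadConv(in_ch=64, out_ch=64, latent_dim=512)
x, w = torch.randn(2, 64, 16, 16), torch.randn(2, 512)
print(layer(x, w).shape)                                   # torch.Size([2, 64, 16, 16])
```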
EGAIN: Extended GAn INversion
- Computer Science, 2022 10th European Workshop on Visual Information Processing (EUVIP)
- 2022
EGAIN, an architecture that explicitly addresses some of the shortcomings of previous GAN inversion models, is presented, demonstrating superior reconstruction quality over state-of-the-art models and illustrating the validity of the EGAIN architecture.
A Survey on Leveraging Pre-trained Generative Adversarial Networks for Image Editing and Restoration
- Computer Science, ArXiv
- 2022
Recent progress on leveraging pre-trained large-scale GAN models is reviewed from three aspects, i.e. …
Interpreting Generative Adversarial Networks for Interactive Image Generation
- Computer Science, xxAI@ICML
- 2020
This chapter gives a summary of recent works on interpreting deep generative models and examines how the human-understandable concepts that emerge in the learned representation can be identified and used for interactive image generation and editing.
References
Showing 1-10 of 73 references
Group Equivariant Generative Adversarial Networks
- Computer Science, ICLR
- 2021
This work improves gradient feedback between generator and discriminator using an inductive symmetry prior via group-equivariant convolutional networks, allowing for better optimization steps and increased expressive power with limited samples.
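A toy, hedged sketch of a rotation-equivariant ("p4") lifting convolution, one standard way to build group-equivariant layers; it is not the architecture used in the paper. The final check verifies the equivariance property numerically.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4LiftingConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.1)
        self.pad = kernel_size // 2

    def forward(self, x):
        outs = []
        for k in range(4):  # apply the shared kernel at 0, 90, 180, 270 degrees
            w = torch.rot90(self.weight, k, dims=(2, 3))
            outs.append(F.conv2d(x, w, padding=self.pad))
        return torch.stack(outs, dim=2)   # (batch, out_ch, 4 orientations, H, W)

conv = P4LiftingConv(3, 8)
x = torch.randn(1, 3, 32, 32)
y = conv(x)
y_rot = conv(torch.rot90(x, 1, dims=(2, 3)))
# Rotating the input equals rotating the output spatially and cyclically shifting orientations.
print(torch.allclose(torch.rot90(y, 1, dims=(3, 4)).roll(1, dims=2), y_rot, atol=1e-5))
```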
Image Generators with Conditionally-Independent Pixel Synthesis
- Mathematics, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This work presents a new architecture for image generators, where the color value at each pixel is computed independently given the value of a random latent vector and the coordinate of that pixel, and investigates several interesting properties unique to the new architecture.
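A hedged toy version of conditionally-independent pixel synthesis: each pixel's colour is an MLP function of the shared latent vector and that pixel's coordinate. The Fourier positional encoding and layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PixelSynthesizer(nn.Module):
    def __init__(self, latent_dim=64, n_freqs=32, hidden=128):
        super().__init__()
        self.register_buffer("freqs", torch.randn(2, n_freqs) * 10.0)  # random Fourier basis
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                                       # RGB for one pixel
        )

    def forward(self, z, size):
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, size), torch.linspace(-1, 1, size), indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)           # (size*size, 2)
        enc = torch.cat([torch.sin(coords @ self.freqs),
                         torch.cos(coords @ self.freqs)], dim=-1)
        zrep = z.expand(enc.shape[0], -1)                               # same latent for every pixel
        rgb = self.mlp(torch.cat([zrep, enc], dim=-1))                  # pixels computed independently
        return rgb.reshape(size, size, 3).permute(2, 0, 1)              # (3, H, W)

gen = PixelSynthesizer()
print(gen(torch.randn(1, 64), size=32).shape)   # torch.Size([3, 32, 32])
```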
A Style-Based Generator Architecture for Generative Adversarial Networks
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
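A hedged sketch of the style-based mechanism: a mapping network produces an intermediate latent w, which then modulates per-channel feature statistics (AdaIN-style scale and bias); dimensions and depths here are illustrative only.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Sequential):
    """MLP that maps the input latent z to the intermediate style vector w."""
    def __init__(self, z_dim=128, w_dim=128, depth=4):
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(z_dim, w_dim), nn.LeakyReLU(0.2)]
            z_dim = w_dim
        super().__init__(*layers)

class StyleModulation(nn.Module):
    """Adaptive instance normalization driven by the style vector w."""
    def __init__(self, w_dim, channels):
        super().__init__()
        self.to_scale = nn.Linear(w_dim, channels)
        self.to_bias = nn.Linear(w_dim, channels)

    def forward(self, feat, w):
        mean = feat.mean(dim=(2, 3), keepdim=True)
        std = feat.std(dim=(2, 3), keepdim=True) + 1e-8
        normed = (feat - mean) / std                           # remove per-channel statistics
        scale = self.to_scale(w)[:, :, None, None]
        bias = self.to_bias(w)[:, :, None, None]
        return (1 + scale) * normed + bias                     # re-impose style-controlled statistics

mapping, adain = MappingNetwork(), StyleModulation(w_dim=128, channels=64)
w = mapping(torch.randn(2, 128))
print(adain(torch.randn(2, 64, 16, 16), w).shape)              # torch.Size([2, 64, 16, 16])
```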
Large Scale GAN Training for High Fidelity Natural Image Synthesis
- Computer Science, ICLR
- 2019
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
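The truncation trick itself is easy to sketch: at sampling time, latent entries outside a threshold are resampled, trading variety for fidelity. The threshold and resampling loop below are illustrative assumptions.

```python
import torch

def truncated_normal(shape, threshold=0.5, max_tries=100):
    """Sample z ~ N(0, I) and resample any entry with |z| > threshold."""
    z = torch.randn(shape)
    for _ in range(max_tries):
        mask = z.abs() > threshold
        if not mask.any():
            break
        z[mask] = torch.randn(int(mask.sum()))   # redraw only the out-of-range entries
    return z

z = truncated_normal((4, 128), threshold=0.5)    # lower threshold -> higher fidelity, less variety
print(float(z.abs().max()))                      # stays at or below 0.5
```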
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
Self-Attention Generative Adversarial Networks
- Computer Science, ICML
- 2019
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
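A hedged sketch of the self-attention block SAGAN adds to both generator and discriminator, letting every spatial position attend to every other one; the 1/8 channel reduction and zero-initialised residual gate follow common practice rather than being copied from the paper's code.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))     # residual gate, learned during training

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)      # (b, hw, c//8) queries
        k = self.k(x).flatten(2)                      # (b, c//8, hw) keys
        v = self.v(x).flatten(2)                      # (b, c, hw)   values
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw): each position attends to all others
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out

blk = SelfAttention2d(64)
print(blk(torch.randn(2, 64, 16, 16)).shape)          # torch.Size([2, 64, 16, 16])
```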
SWAGAN: A Style-based Wavelet-driven Generative Model
- Computer Science, ACM Trans. Graph.
- 2021
A novel general-purpose Style and WAvelet based GAN (SWAGAN) is presented that implements progressive generation in the frequency domain, retaining the qualities that allow StyleGAN to serve as a basis for a multitude of editing tasks while inducing improved downstream visual quality.
GANSpace: Discovering Interpretable GAN Controls
- Computer Science, NeurIPS
- 2020
This paper describes a simple technique to analyze Generative Adversarial Networks and create interpretable controls for image synthesis, and shows that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner.
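The GANSpace recipe reduces to PCA on sampled latent codes; the sketch below uses random vectors as stand-ins for codes produced by a real generator's mapping network, so the "generator" and dimensions are assumptions.

```python
import numpy as np

def principal_directions(w_samples, n_components=10):
    """PCA via SVD on centred latent codes; rows of the result are edit directions."""
    mean = w_samples.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(w_samples - mean, full_matrices=False)
    return mean, vt[:n_components]

rng = np.random.default_rng(0)
w_samples = rng.normal(size=(10_000, 512))          # stand-in for mapped latents w = f(z)
mean, dirs = principal_directions(w_samples)

# An "edit" moves a latent code along a principal direction by a chosen strength,
# then the edited code would be decoded back to an image by the generator.
w = rng.normal(size=(512,))
w_edited = w + 3.0 * dirs[0]
print(dirs.shape, np.linalg.norm(w_edited - w))      # (10, 512) 3.0
```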
Training Generative Adversarial Networks with Limited Data
- Computer Science, NeurIPS
- 2020
It is demonstrated, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images; this is expected to open up new application domains for GANs.
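A hedged sketch of the adaptive augmentation idea behind this result: images shown to the discriminator are augmented with probability p, and p is adjusted by a simple overfitting heuristic on real images; the augmentation, constants, and class names are simplified placeholders, not the paper's pipeline.

```python
import torch

class AugmentPipe:
    def __init__(self, p=0.0, target=0.6, step=0.01):
        self.p, self.target, self.step = p, target, step

    def __call__(self, imgs):
        # Placeholder augmentation: horizontal flip applied with probability p per image.
        flip = torch.rand(imgs.shape[0], 1, 1, 1) < self.p
        return torch.where(flip, imgs.flip(-1), imgs)

    def update(self, disc_out_on_reals):
        # Overfitting heuristic: mean sign of D's outputs on reals drifts toward +1 when D memorizes.
        overfit = disc_out_on_reals.sign().mean().item()
        self.p = float(min(max(self.p + self.step * (1 if overfit > self.target else -1), 0.0), 1.0))

pipe = AugmentPipe(p=0.1)
reals = torch.randn(8, 3, 32, 32)
augmented = pipe(reals)
pipe.update(disc_out_on_reals=torch.randn(8))         # would be D(augmented reals) during training
print(augmented.shape, pipe.p)
```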
Progressive Growing of GANs for Improved Quality, Stability, and Variation
- Computer Science, ICLR
- 2018
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
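A hedged sketch of how a newly grown, higher-resolution block is faded in: during the transition the output blends the upsampled old low-resolution image with the new block's output, with alpha ramping from 0 to 1. The block contents are placeholders for real generator layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GrowingStage(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.old_to_rgb = nn.Conv2d(channels, 3, 1)          # existing low-resolution output head
        self.new_block = nn.Conv2d(channels, channels, 3, padding=1)
        self.new_to_rgb = nn.Conv2d(channels, 3, 1)          # freshly added high-resolution head

    def forward(self, feat, alpha):
        low = F.interpolate(self.old_to_rgb(feat), scale_factor=2, mode="nearest")
        up = F.interpolate(feat, scale_factor=2, mode="nearest")
        high = self.new_to_rgb(self.new_block(up))
        return (1 - alpha) * low + alpha * high              # alpha ramps 0 -> 1 over the transition

stage = GrowingStage()
feat = torch.randn(1, 64, 16, 16)
print(stage(feat, alpha=0.3).shape)                          # torch.Size([1, 3, 32, 32])
```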