Corpus ID: 235606261

Alias-Free Generative Adversarial Networks

@article{Karras2021AliasFreeGA,
  title={Alias-Free Generative Adversarial Networks},
  author={Tero Karras and Miika Aittala and Samuli Laine and Erik H{\"a}rk{\"o}nen and Janne Hellsten and Jaakko Lehtinen and Timo Aila},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.12423}
}
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally…
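The aliasing the abstract points to can be demonstrated in isolation. The numpy sketch below (not the paper's implementation) subsamples a pure tone lying above the post-subsampling Nyquist frequency, once naively and once after band-limiting; the tone frequency, sizes, and FFT-based filter are illustrative choices.

```python
# Minimal sketch of the aliasing problem: subsampling a signal without
# first low-pass filtering folds high frequencies into low ones.
import numpy as np

def subsample(x, factor=2):
    """Naive subsampling: keeps every `factor`-th sample (aliases)."""
    return x[::factor]

def lowpass_subsample(x, factor=2):
    """Band-limit below the new Nyquist rate via FFT, then subsample."""
    X = np.fft.rfft(x)
    cutoff = len(x) // (2 * factor)      # new Nyquist bin
    X[cutoff:] = 0.0
    return np.fft.irfft(X, n=len(x))[::factor]

n = 256
t = np.arange(n) / n
# A pure tone above the post-subsampling Nyquist frequency (96 > 64 cycles)
x = np.sin(2 * np.pi * 96 * t)

naive = subsample(x)          # the 96-cycle tone aliases down to 32 cycles
safe = lowpass_subsample(x)   # the tone is removed before subsampling

print(np.abs(naive).max())  # ~1: aliased energy survives
print(np.abs(safe).max())   # ~0: band-limited signal is silent
```

The same logic applies per layer inside a generator: every upsampling or nonlinearity can create frequencies the feature grid cannot represent unless they are filtered away first.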
Citations

Interpreting Generative Adversarial Networks for Interactive Image Generation
  • Bolei Zhou
  • Computer Science
  • ArXiv
  • 2021
TLDR
This chapter will give a summary of recent works on interpreting deep generative models and see how the human-understandable concepts that emerge in the learned representation can be identified and used for interactive image generation and editing.
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
TLDR
Leveraging the semantic power of large scale Contrastive-Language-Image-Pretraining (CLIP) models, this work presents a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image from those domains.
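The core idea behind this kind of CLIP-guided adaptation is a *directional* loss: move the image embedding of the generated sample along the same direction that the text embedding moves between the source and target domain descriptions. A hedged sketch, with the real CLIP encoders replaced by plain hypothetical vectors:

```python
# Illustrative directional loss: align the image-space edit direction with
# a text-space direction. The 512-d embeddings below are random stand-ins
# for CLIP encoder outputs, not actual CLIP features.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def directional_loss(e_src_img, e_gen_img, e_src_txt, e_tgt_txt):
    """1 - cosine similarity between the image and text edit directions."""
    d_img = unit(e_gen_img - e_src_img)
    d_txt = unit(e_tgt_txt - e_src_txt)
    return 1.0 - float(d_img @ d_txt)

rng = np.random.default_rng(0)
e_src_img = rng.normal(size=512)
e_src_txt = rng.normal(size=512)
e_tgt_txt = e_src_txt + rng.normal(size=512)

# A step that moves the image embedding along the text direction drives the
# loss toward 0; an unrelated move leaves it near 1.
aligned = directional_loss(e_src_img, e_src_img + (e_tgt_txt - e_src_txt),
                           e_src_txt, e_tgt_txt)
random_move = directional_loss(e_src_img, e_src_img + rng.normal(size=512),
                               e_src_txt, e_tgt_txt)
print(aligned, random_move)
```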
DyStyle: Dynamic Neural Network for Multi-Attribute-Conditioned Style Editing
  • Bingchuan Li, Shaofei Cai, +4 authors Zili Yi
  • Computer Science
  • ArXiv
  • 2021
TLDR
A Dynamic Style Manipulation Network (DyStyle) whose structure and parameters vary by input samples, to perform nonlinear and adaptive manipulation of latent codes for flexible and precise attribute control. Expand
Eyes Tell All: Irregular Pupil Shapes Reveal GAN-generated Faces
  • Hui Guo, Shu Hu, Xin Wang, Ming-Ching Chang, Siwei Lyu
  • Computer Science
  • ArXiv
  • 2021
TLDR
This work shows that GAN-generated faces can be exposed via irregular pupil shapes, and describes an automatic method to extract the pupils from the two eyes and analyze their shapes to expose GAN-generated faces.
FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold
Fig. 1. We introduce a new approach that generates an image with StyleGAN defined by a precise 3D camera. This enables faces synthesized with StyleGAN to be used in 3D free-viewpoint rendering, while…
GAN Inversion: A Survey
TLDR
This paper provides an overview of GAN inversion with a focus on its recent algorithms and applications, and further elaborates on some trends and challenges for future directions.
LatentKeypointGAN: Controlling GANs via Latent Keypoints
TLDR
LatentKeypointGAN is introduced, a two-stage GAN that is trained end-to-end on the classical GAN objective yet internally conditioned on a set of sparse keypoints with associated appearance embeddings that respectively control the position and style of the generated objects and their parts.
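Spatial conditioning on sparse keypoints is commonly realized by rendering each keypoint as a Gaussian heatmap the generator can consume. The construction below is a generic sketch of that idea, not the paper's exact layers; the keypoint coordinates and `sigma` are illustrative.

```python
# Render K sparse keypoints into K Gaussian heatmaps, one channel per
# keypoint, usable as spatial conditioning for a generator.
import numpy as np

def keypoint_heatmaps(keypoints, size, sigma=1.5):
    """keypoints: (K, 2) array of (x, y) pixel coords -> (K, size, size)."""
    ys, xs = np.mgrid[0:size, 0:size]
    maps = [np.exp(-((xs - kx) ** 2 + (ys - ky) ** 2) / (2 * sigma ** 2))
            for kx, ky in keypoints]
    return np.stack(maps)

kps = np.array([[4.0, 4.0], [12.0, 9.0]])
maps = keypoint_heatmaps(kps, size=16)
print(maps.shape)        # (2, 16, 16), each channel peaking at its keypoint
```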

References

SHOWING 1-10 OF 77 REFERENCES
Image Generators with Conditionally-Independent Pixel Synthesis
TLDR
This work presents a new architecture for image generators, where the color value at each pixel is computed independently given the value of a random latent vector and the coordinate of that pixel, and investigates several interesting properties unique to the new architecture.
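The key property is that every pixel is a pure function of its coordinate and a shared latent, with no convolutions coupling neighbors. A tiny random MLP makes this concrete; the layer sizes and the untrained weights are purely illustrative.

```python
# Conditionally-independent pixel synthesis sketch: color = MLP(x, y, z).
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(10, 32)) * 0.5   # input: (x, y) coords + 8-dim latent
W2 = rng.normal(size=(32, 3)) * 0.5    # output: RGB

def pixel_color(x, y, z):
    """Color of one pixel from its coordinate and the shared latent only."""
    h = np.tanh(np.concatenate(([x, y], z)) @ W1)
    return np.tanh(h @ W2)

def synthesize(height, width, z):
    """Evaluate the same MLP at every coordinate; no convolutions."""
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([xs / width, ys / height], axis=-1).reshape(-1, 2)
    inp = np.concatenate([coords, np.tile(z, (len(coords), 1))], axis=1)
    img = np.tanh(np.tanh(inp @ W1) @ W2)
    return img.reshape(height, width, 3)

z = rng.normal(size=8)
img = synthesize(16, 16, z)
print(img.shape)  # (16, 16, 3)
```

Because pixels are independent, the batched synthesis at any resolution agrees exactly with evaluating `pixel_color` one coordinate at a time.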
A Style-Based Generator Architecture for Generative Adversarial Networks
TLDR
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
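The style-transfer borrowing referred to here is AdaIN-style modulation: a latent-derived "style" sets the per-channel scale and bias of normalized features, so the latent steers the statistics of every layer instead of entering only at the input. A minimal sketch, with illustrative shapes:

```python
# AdaIN-like modulation: normalize each channel over space, then restyle it
# with a per-channel scale and bias derived from the latent.
import numpy as np

def adain(features, scale, bias, eps=1e-5):
    """features: (channels, H, W); scale, bias: (channels,)."""
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return scale[:, None, None] * normalized + bias[:, None, None]

rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=5.0, size=(4, 8, 8))   # arbitrary statistics
scale = np.array([1.0, 2.0, 0.5, 1.5])
bias = np.array([0.0, -1.0, 1.0, 0.5])

y = adain(x, scale, bias)
print(y.mean(axis=(1, 2)))  # per-channel means match `bias`
print(y.std(axis=(1, 2)))   # per-channel stds match `scale`
```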
On the "steerability" of generative adversarial networks
TLDR
It is shown that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold, and it is hypothesized that the degree of distributional shift is related to the breadth of the training data distribution.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
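The truncation trick itself is a one-line interpolation toward the mean latent: `psi=1` leaves samples untouched, `psi=0` collapses everything to the mean, and intermediate values trade variety for fidelity. A minimal sketch:

```python
# Truncation trick: shrink sampled latents toward the mean latent.
import numpy as np

def truncate(z, z_mean, psi):
    """psi in [0, 1]: 1 keeps z unchanged, 0 collapses to z_mean."""
    return z_mean + psi * (z - z_mean)

rng = np.random.default_rng(3)
z_mean = np.zeros(16)
z = rng.normal(size=(1000, 16))

full = truncate(z, z_mean, psi=1.0)      # identical to the raw samples
trimmed = truncate(z, z_mean, psi=0.5)   # half the spread around the mean

print(full.std(), trimmed.std())
```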
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
TLDR
A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
Self-Attention Generative Adversarial Networks
TLDR
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
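The self-attention block SAGAN adds lets every spatial position attend to every other, so distant image regions can coordinate in a way local convolutions cannot. A minimal sketch over a flattened feature map; the random projection matrices are placeholders for learned ones:

```python
# Self-attention over spatial positions of a flattened feature map.
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (positions, channels) -> (attended features, attention map)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)    # each row sums to 1
    return attn @ v, attn

rng = np.random.default_rng(4)
c = 8
x = rng.normal(size=(16, c))   # a 4x4 feature map flattened to 16 positions
Wq, Wk, Wv = (rng.normal(size=(c, c)) for _ in range(3))

out, attn = self_attention(x, Wq, Wk, Wv)
print(out.shape, attn.shape)   # (16, 8) (16, 16)
```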
SWAGAN: A Style-based Wavelet-driven Generative Model
TLDR
A novel general-purpose Style and WAvelet based GAN (SWAGAN) that implements progressive generation in the frequency domain that retains the qualities that allow StyleGAN to serve as a basis for a multitude of editing tasks and induces improved downstream visual quality.
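The frequency-domain view rests on the wavelet transform being exactly invertible: a single-level Haar step splits a signal into a low-pass half and a detail half, so a generator can operate on wavelet coefficients instead of pixels without losing information. A 1-D sketch for brevity:

```python
# Single-level Haar wavelet transform and its exact inverse.
import numpy as np

def haar_forward(x):
    """Split x into low-pass (averages) and high-pass (details) halves."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_inverse(lo, hi):
    """Reconstruct the original signal from both halves."""
    out = np.empty(2 * len(lo))
    out[0::2] = (lo + hi) / np.sqrt(2)
    out[1::2] = (lo - hi) / np.sqrt(2)
    return out

x = np.array([4.0, 2.0, 1.0, 3.0, 0.0, 0.0, 5.0, 5.0])
lo, hi = haar_forward(x)
print(haar_inverse(lo, hi))  # recovers x exactly (up to float error)
```

Note how `hi` is zero wherever neighboring samples agree: smooth regions concentrate in the low-pass band, which is what makes generating coarse-to-fine in this domain natural.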
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
TLDR
This work presents an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level, and provides open source interpretation tools to help researchers and practitioners better understand their GAN models.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
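When a new resolution layer is added, its output is faded in: the network blends an upsampled coarse image with the new layer's output, with the blend weight ramping from 0 to 1 over training so the new layer never shocks the network. A sketch with a stand-in for the new block:

```python
# Fade-in blending used when growing a generator to a new resolution.
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling of a (H, W) image."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def faded_output(coarse, new_layer, alpha):
    """alpha=0: only the old path; alpha=1: only the new layer's path."""
    up = upsample2x(coarse)
    return (1 - alpha) * up + alpha * new_layer(up)

rng = np.random.default_rng(5)
coarse = rng.normal(size=(4, 4))
new_layer = lambda x: np.tanh(x)   # hypothetical stand-in for the new block

start = faded_output(coarse, new_layer, alpha=0.0)   # == upsampled coarse
end = faded_output(coarse, new_layer, alpha=1.0)     # == new layer only
print(start.shape, end.shape)  # (8, 8) (8, 8)
```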
Analyzing and Improving the Image Quality of StyleGAN
TLDR
This work redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
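The normalization redesign referred to here replaces feature normalization with weight modulation/demodulation: the style scales the convolution weights per input channel, and the weights are then rescaled so each output feature keeps roughly unit variance. A hedged sketch with dense (1x1-like) weights standing in for a full convolution:

```python
# Weight modulation/demodulation sketch: style scales the inputs via the
# weights; demodulation renormalizes each output channel.
import numpy as np

def modulate_demodulate(W, style, eps=1e-8):
    """W: (in_ch, out_ch); style: (in_ch,) per-input-channel scales."""
    Wm = W * style[:, None]                             # modulate inputs
    demod = 1.0 / np.sqrt((Wm ** 2).sum(axis=0) + eps)  # per-output norm
    return Wm * demod[None, :]

rng = np.random.default_rng(6)
W = rng.normal(size=(16, 8))
style = rng.uniform(0.5, 2.0, size=16)

Wd = modulate_demodulate(W, style)
print(np.linalg.norm(Wd, axis=0))  # every output column has unit norm
```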