Corpus ID: 235606261

Alias-Free Generative Adversarial Networks

@inproceedings{Karras2021AliasFreeGA,
  title={Alias-Free Generative Adversarial Networks},
  author={Tero Karras and Miika Aittala and Samuli Laine and Erik H{\"a}rk{\"o}nen and Janne Hellsten and Jaakko Lehtinen and Timo Aila},
  booktitle={Neural Information Processing Systems},
  year={2021}
}
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally… 
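
To make the aliasing argument concrete, here is a minimal 1-D sketch of the kind of filtered resampling the paper advocates: upsampling is paired with an explicit low-pass filter so the discrete signal keeps representing the same continuous one. The Kaiser-window filter design and tap count below are illustrative assumptions, not StyleGAN3's exact configuration.

# A minimal sketch: naive zero-insertion upsampling creates spectral
# images (aliases); filtering them out approximates the underlying
# continuous signal. Filter parameters are assumptions for illustration.
import numpy as np
from scipy.signal import firwin, upfirdn

def alias_free_upsample(x, factor=2, taps=33):
    # Kaiser-windowed sinc low-pass with cutoff at the original
    # signal's Nyquist frequency (normalized units).
    lpf = firwin(taps, cutoff=1.0 / factor, window=("kaiser", 8.0))
    # upfirdn inserts factor-1 zeros between samples, then filters;
    # scaling by `factor` restores the signal amplitude.
    return factor * upfirdn(lpf, x, up=factor, down=1)

t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
y = alias_free_upsample(x)  # ~2x samples, spectral images suppressed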

Anisotropic multiresolution analyses for deep fake detection

It is argued that, since GANs primarily utilize isotropic convolutions to generate their output, they leave clear traces in the coefficient distributions of sub-bands extracted by anisotropic transformations; a fully separable transform is employed and shown to improve the state of the art.
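
As a rough illustration of the cue this detector exploits, the sketch below applies a fully separable (anisotropic) wavelet decomposition by transforming rows and columns independently and summarizes each sub-band's coefficient distribution. The wavelet, depth, and statistics are assumptions for illustration, not the paper's exact pipeline.

# Anisotropic sub-band features: decompose rows, then columns, giving
# sub-bands with different scales along each axis; summarize each one.
import numpy as np
import pywt

def anisotropic_subband_stats(img, wavelet="db4", levels=2):
    row_bands = pywt.wavedec(img, wavelet, level=levels, axis=0)
    stats = []
    for rb in row_bands:
        for cb in pywt.wavedec(rb, wavelet, level=levels, axis=1):
            c = cb.ravel()
            mu, sd = c.mean(), c.std()
            kurt = ((c - mu) ** 4).mean() / (sd ** 4 + 1e-12)
            stats.extend([mu, sd, kurt])
    # Feature vector for a downstream real-vs-fake classifier.
    return np.array(stats)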

Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks

A way is proposed to reuse the features the discriminator has learned from the training dataset to both alter an image and generate one from scratch after training is complete.

OASIS: Only Adversarial Supervision for Semantic Image Synthesis

A novel, simplified GAN model is proposed that achieves a strong improvement in image synthesis quality over prior state-of-the-art models across the commonly used ADE20K, Cityscapes, and COCO-Stuff datasets using only adversarial supervision.

Unsupervised Image Transformation Learning via Generative Adversarial Networks

This work proposes a novel learning framework built on generative adversarial networks (GANs) in which the discriminator and the generator share a transformation space; by projecting both images of a customizable pair onto this space, the framework extracts the variation factor between them.

High-fidelity GAN Inversion with Padding Space

This work proposes to involve the padding space of the generator to complement the latent space with spatial information, replacing the constant padding used in convolution layers with instance-aware coefficients to improve inversion quality both qualitatively and quantitatively.
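
A hedged sketch of the padding-space idea, under assumed shapes: the constant (zero) border a convolution would normally see is replaced by instance-specific coefficients that can be optimized alongside the latent during inversion.

import torch
import torch.nn.functional as F

def conv_with_instance_padding(x, weight, border):
    # x: (N, C_in, H, W); weight: (C_out, C_in, 3, 3)
    # border: (N, C_in, H+2, W+2) per-instance padding values; only its
    # one-pixel rim is used, the interior comes from the features.
    mask = torch.zeros_like(border)
    mask[:, :, 1:-1, 1:-1] = 1.0
    padded = mask * F.pad(x, (1, 1, 1, 1)) + (1.0 - mask) * border
    return F.conv2d(padded, weight)  # 'valid' conv -> (N, C_out, H, W)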

Unifying conditional and unconditional semantic image synthesis with OCO-GAN

This work proposes OCO-GAN, for Optionally COnditioned GAN, which addresses both conditional and unconditional generative image modeling in a unified manner, with a shared image synthesis network that can be conditioned either on semantic maps or directly on latents.

Distilling Representations from GAN Generator via Squeeze and Span

This paper squeezes generator features into representations that are invariant to semantic-preserving transformations through a network before distilling them into a student network, and then spans the distilled representations from the synthetic domain to the real domain.

EGAIN: Extended GAn INversion

An architecture is presented that explicitly addresses some of the shortcomings of previous GAN inversion models; it demonstrates superior reconstruction quality over state-of-the-art models, illustrating the validity of the EGAIN design.

Interpreting Generative Adversarial Networks for Interactive Image Generation

This chapter summarizes recent work on interpreting deep generative models and shows how the human-understandable concepts that emerge in the learned representation can be identified and used for interactive image generation and editing.

LatentKeypointGAN: Controlling GANs via Latent Keypoints

LatentKeypointGAN is introduced, a two-stage GAN that is trained end-to-end on the classical GAN objective yet is internally conditioned on a set of sparse keypoints with associated appearance embeddings, which respectively control the position and style of the generated objects and their parts.
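
A hedged sketch of this style of conditioning: sparse keypoints are rendered as Gaussian heatmaps and combined with per-keypoint appearance embeddings into a spatial tensor a generator can consume. The resolution, sigma, and function name are illustrative assumptions, not the paper's implementation.

import torch

def keypoints_to_feature_map(kp, emb, size=64, sigma=0.05):
    # kp: (K, 2) keypoint positions in [-1, 1]; emb: (K, D) appearance codes.
    ys = torch.linspace(-1, 1, size)
    xs = torch.linspace(-1, 1, size)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")  # (S, S) coordinate grids
    d2 = (gx[None] - kp[:, 0, None, None]) ** 2 \
       + (gy[None] - kp[:, 1, None, None]) ** 2
    heat = torch.exp(-d2 / (2 * sigma ** 2))        # (K, S, S) Gaussian bumps
    # Splat each embedding through its heatmap: (D, S, S) conditioning tensor.
    return torch.einsum("khw,kd->dhw", heat, emb)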
...

References


Group Equivariant Generative Adversarial Networks

This work improves gradient feedback between generator and discriminator using an inductive symmetry prior via group-equivariant convolutional networks, allowing for better optimization steps and increased expressive power with limited samples.
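
To illustrate the inductive prior involved, here is a minimal p4 (90-degree rotation) lifting convolution: one kernel is applied in four rotated copies, so rotating the input permutes the rotation channels rather than producing arbitrary new features. Real group-equivariant libraries also handle the subsequent group convolutions; this single layer is only a sketch.

import torch
import torch.nn.functional as F

def p4_lifting_conv(x, weight):
    # x: (N, C_in, H, W); weight: (C_out, C_in, k, k) with odd k.
    # Apply the kernel under each of the four 90-degree rotations.
    outs = [F.conv2d(x, torch.rot90(weight, r, dims=(2, 3)),
                     padding=weight.shape[-1] // 2)
            for r in range(4)]
    return torch.stack(outs, dim=2)  # (N, C_out, 4, H, W)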

Image Generators with Conditionally-Independent Pixel Synthesis

This work presents a new architecture for image generators, where the color value at each pixel is computed independently given the value of a random latent vector and the coordinate of that pixel, and investigates several interesting properties unique to the new architecture.
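
The architecture's core idea fits in a few lines: each pixel's color is an MLP of its Fourier-encoded coordinate plus a shared latent vector, with no convolutions. The layer widths and random-feature encoding below are assumptions.

import torch
import torch.nn as nn

class PixelMLP(nn.Module):
    def __init__(self, latent_dim=64, freqs=16, hidden=128):
        super().__init__()
        # Random Fourier features for the 2-D pixel coordinate.
        self.register_buffer("B", torch.randn(2, freqs) * 10.0)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2 * freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Tanh(),
        )

    def forward(self, coords, z):
        # coords: (P, 2) in [-1, 1]; z: (latent_dim,) shared by all pixels.
        enc = torch.cat([torch.sin(coords @ self.B),
                         torch.cos(coords @ self.B)], dim=-1)
        # Every pixel is synthesized independently from (coordinate, latent).
        return self.net(torch.cat([enc, z.expand(coords.shape[0], -1)], dim=-1))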

A Style-Based Generator Architecture for Generative Adversarial Networks

An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
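
A minimal sketch of the style mechanism, with illustrative dimensions: the latent is mapped to per-channel scales and biases that modulate instance-normalized synthesis features, in the spirit of the AdaIN operator the paper borrows from the style transfer literature.

import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, w_dim, channels):
        super().__init__()
        # Learned affine map from the style code to per-channel scale/bias.
        self.affine = nn.Linear(w_dim, 2 * channels)

    def forward(self, x, w):
        # x: (N, C, H, W) synthesis features; w: (N, w_dim) style codes.
        scale, bias = self.affine(w).chunk(2, dim=1)
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-8
        x = (x - mu) / sigma  # instance-normalize, then restyle
        return x * (1 + scale[..., None, None]) + bias[..., None, None]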

On the "steerability" of generative adversarial networks

It is shown that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold, and it is hypothesized that the degree of distributional shift is related to the breadth of the training data distribution.
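
The steerability probe can be sketched as a small optimization: learn a latent direction whose effect on the generator matches a chosen image-space edit (a horizontal shift here). G is a placeholder generator; the loss follows the general recipe rather than the paper's code.

import torch

def steerability_step(G, z, w, optimizer, alpha=1.0, shift_px=8):
    # Target: the generated image shifted horizontally by alpha * shift_px.
    with torch.no_grad():
        target = torch.roll(G(z), shifts=int(alpha * shift_px), dims=-1)
    steered = G(z + alpha * w)            # walk the latent along direction w
    loss = ((steered - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                      # updates w, the trainable direction
    return loss.item()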

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
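
The truncation trick itself is simple enough to show directly: sample the latent from a normal distribution but resample any coordinate whose magnitude exceeds a threshold, trading sample variety for fidelity as the threshold shrinks.

import numpy as np

def truncated_z(batch, dim, threshold=0.5):
    rng = np.random.default_rng()
    z = rng.standard_normal((batch, dim))
    mask = np.abs(z) > threshold
    while mask.any():                     # resample out-of-range coordinates
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > threshold
    return z                              # lower threshold -> less variety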

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.

Self-Attention Generative Adversarial Networks

The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
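
A compact version of the attention block SAGAN inserts into both generator and discriminator: 1x1 convolutions produce queries, keys, and values, and a learned gate (initialized to zero) blends the attention output back into the convolutional features.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as identity

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (N, HW, C//8)
        k = self.k(x).flatten(2)                   # (N, C//8, HW)
        v = self.v(x).flatten(2)                   # (N, C, HW)
        attn = F.softmax(q @ k, dim=-1)            # (N, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.gamma * out                # gated residual blend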

SWAGAN: A Style-based Wavelet-driven Generative Model

A novel general-purpose Style- and WAvelet-based GAN (SWAGAN) is presented that implements progressive generation in the frequency domain; it retains the qualities that allow StyleGAN to serve as a basis for a multitude of editing tasks while inducing improved downstream visual quality.
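
The frequency-domain generation step can be illustrated with the reconstruction half alone: the network predicts wavelet coefficients and the image is recovered by an inverse DWT. The random "predicted" coefficients and the Haar wavelet below are stand-ins, not the paper's configuration.

import numpy as np
import pywt

def coeffs_to_image(approx, detail_h, detail_v, detail_d, wavelet="haar"):
    # Inverse 2-D DWT turns one approximation band and three detail
    # bands back into a pixel-space image at twice the resolution.
    return pywt.idwt2((approx, (detail_h, detail_v, detail_d)), wavelet)

bands = [np.random.randn(64, 64).astype(np.float32) for _ in range(4)]
img = coeffs_to_image(*bands)  # (128, 128) image from 4 wavelet sub-bands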

GANSpace: Discovering Interpretable GAN Controls

This paper describes a simple technique to analyze Generative Adversarial Networks and create interpretable controls for image synthesis, and shows that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner.
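
The technique reduces to a few lines of linear algebra: sample many latents, push them through the model's latent mapping, and take principal components of the result as candidate edit directions. The `mapping` callable is a placeholder for the model-specific mapping network.

import numpy as np

def principal_directions(mapping, n_samples=10_000, latent_dim=512, k=10):
    z = np.random.randn(n_samples, latent_dim)
    w = mapping(z)                         # (n_samples, w_dim) mapped latents
    w_centered = w - w.mean(axis=0)
    # Right singular vectors = principal components of the w samples.
    _, _, vt = np.linalg.svd(w_centered, full_matrices=False)
    return vt[:k]                          # k directions to add to/subtract from w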

GAN Dissection: Visualizing and Understanding Generative Adversarial Networks

This work presents an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level, and provides open source interpretation tools to help researchers and practitioners better understand their GAN models.
...