Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described: starting from a low resolution, new layers that model increasingly fine details are added as training progresses, enabling images of unprecedented quality.
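The growing schedule described above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the function names and the 600k-image fade-in length are assumptions.

```python
# Illustrative sketch of a progressive-growing schedule; the names and
# the 600k-image fade-in length are assumptions, not the paper's code.

def resolution_schedule(start=4, final=1024):
    """Yield the image resolutions trained at, doubling low to high."""
    res = start
    while res <= final:
        yield res
        res *= 2

def fade_in_alpha(images_seen, fade_images=600_000):
    """Blend weight for a newly added layer: ramps 0 -> 1, after which
    the new, higher-resolution layer is fully active."""
    return min(1.0, images_seen / fade_images)
```

Each time the schedule moves to the next resolution, the new layers are faded in smoothly via the alpha blend rather than switched on abruptly.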
Analyzing and Improving the Image Quality of StyleGAN
TLDR
This work redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
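The normalization redesign replaces per-feature-map normalization with an operation on the convolution weights themselves. A minimal numpy sketch of that one operation, with simplified shapes and assumed names:

```python
import numpy as np

def modulate_demodulate(weights, style, eps=1e-8):
    """Scale conv weights by a per-input-channel style, then normalize
    each output channel to unit expected variance (demodulation).
    Shapes: weights (out_ch, in_ch, kh, kw), style (in_ch,)."""
    w = weights * style[None, :, None, None]             # modulation
    sigma = np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)  # per out channel
    return w / sigma[:, None, None, None]                # demodulation
```

Because the statistics are baked into the weights rather than computed from activations, the characteristic "droplet" artifacts of the original normalization disappear.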
Noise2Noise: Learning Image Restoration without Clean Data
TLDR
It is shown that under certain common circumstances, it is possible to learn to restore signals without ever observing clean ones, at performance close or equal to training using clean exemplars.
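The core observation can be demonstrated with a toy example (not the paper's code): under zero-mean noise, the L2-optimal estimate fitted to noisy targets coincides with the one fitted to clean targets.

```python
import numpy as np

# Toy illustration of the Noise2Noise principle: the minimizer of mean
# squared error over a set of targets is their mean, so "training"
# against noisy targets still recovers the clean value when the noise
# has zero mean.

rng = np.random.default_rng(0)
clean_value = 0.7
noisy_targets = clean_value + rng.normal(0.0, 0.3, size=10_000)

estimate = float(noisy_targets.mean())  # close to clean_value
```

A real denoiser is trained the same way in spirit: pairs of independently corrupted versions of the same image replace the (clean input, clean target) pairs.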
Training Generative Adversarial Networks with Limited Data
TLDR
It is demonstrated, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images; this is expected to open up new application domains for GANs.
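The mechanism behind this is an adaptive augmentation strength. A hedged sketch of the feedback loop, where the names, target value, and step size are illustrative assumptions rather than the paper's exact hyperparameters:

```python
# Hedged sketch of adaptive discriminator augmentation: the
# augmentation probability p rises when the discriminator overfits
# (its outputs on real images are mostly positive) and falls otherwise.

def update_p(p, real_logit_signs, target=0.6, step=0.01):
    """Adjust augmentation probability p from recent sign(D(real))."""
    overfit = sum(real_logit_signs) / len(real_logit_signs)
    p += step if overfit > target else -step
    return min(max(p, 0.0), 1.0)
```

Keeping the overfitting statistic near a fixed target lets the same training recipe work across dataset sizes without manual tuning.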
GANSpace: Discovering Interpretable GAN Controls
TLDR
This paper describes a simple technique to analyze Generative Adversarial Networks and create interpretable controls for image synthesis, and shows that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner.
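The technique reduces to principal component analysis over sampled latent vectors. A minimal sketch of that idea, illustrative rather than the authors' code (real usage applies PCA in the GAN's intermediate latent or feature space):

```python
import numpy as np

rng = np.random.default_rng(1)
latents = rng.normal(size=(2000, 64))      # stand-in for sampled latents

# PCA via SVD of the centered latent samples.
centered = latents - latents.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
directions = vt[:10]                       # top-10 principal directions

def edit(z, direction_idx, strength):
    """Move a latent along one principal direction; in the paper each
    such direction tends to control an interpretable image attribute."""
    return z + strength * directions[direction_idx]
```

Feeding the edited latent back through the generator then produces the corresponding image change, such as pose or lighting.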
Few-Shot Unsupervised Image-to-Image Translation
TLDR
The model achieves its few-shot generation capability by coupling an adversarial training scheme with a novel network design; its effectiveness is verified through extensive experiments and comparisons to several baseline methods on benchmark datasets.
Improved Precision and Recall Metric for Assessing Generative Models
TLDR
This work presents an evaluation metric that can separately and reliably measure the quality and coverage of the samples produced by a generative model; the metric is also used to estimate the perceptual quality of individual samples and extended to study latent-space interpolations.
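The metric rests on a k-nearest-neighbour estimate of each distribution's manifold. A rough sketch of that estimate, not the authors' implementation (they operate on deep feature embeddings, not raw 2-D points):

```python
import numpy as np

def knn_radii(points, k=3):
    """Distance from each point to its k-th nearest neighbour."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]        # index 0 is the self-distance

def coverage(a, b, radii_b):
    """Fraction of points in `a` inside some ball around a point of `b`."""
    d = np.linalg.norm(a[:, None] - b[None, :], axis=-1)
    return float((d <= radii_b).any(axis=1).mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 2))
fake = rng.normal(size=(200, 2))

precision = coverage(fake, real, knn_radii(real))  # sample quality
recall = coverage(real, fake, knn_radii(fake))     # mode coverage
```

Precision asks whether generated samples land on the real manifold; recall asks whether the real manifold is covered by generated samples, which is exactly the quality/coverage split the summary describes.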
High-Quality Self-Supervised Deep Image Denoising
TLDR
This work builds on a recent technique that removes the need for reference data by employing networks with a "blind spot" in the receptive field, and significantly improves two key aspects: image quality and training efficiency.
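The "blind spot" can be illustrated with a single masked kernel (a toy, not the paper's network): because the centre weight is zero, the prediction for a pixel never sees that pixel's own noisy value, so the network cannot learn the identity mapping.

```python
import numpy as np

kernel = np.ones((3, 3))
kernel[1, 1] = 0.0          # the blind spot
kernel /= kernel.sum()      # average over the 8 neighbours

def predict_pixel(patch3x3):
    """Predict the centre pixel from its neighbourhood, excluding itself."""
    return float((patch3x3 * kernel).sum())
```

Even if the centre pixel is wildly corrupted, the prediction depends only on its neighbours, which is what makes self-supervised training on noisy images alone possible.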
Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
TLDR
A differentiable rendering framework that allows gradients to be analytically computed for all pixels in an image, viewing foreground rasterization as a weighted interpolation of local properties and background rasterization as a distance-based aggregation of global geometry.
Audio-driven facial animation by joint end-to-end learning of pose and emotion
TLDR
This work presents a machine learning technique for driving 3D facial animation by audio input in real time and with low latency, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone.