Noise Robust Generative Adversarial Networks

@inproceedings{Kaneko2020NoiseRG,
  title={Noise Robust Generative Adversarial Networks},
  author={Takuhiro Kaneko and Tatsuya Harada},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={8401-8411}
}
  • Takuhiro Kaneko, Tatsuya Harada
  • Published 26 November 2019
  • Computer Science
Generative adversarial networks (GANs) are neural networks that learn data distributions through adversarial training. In intensive studies, recent GANs have shown promising results in reproducing training images. However, when training images are noisy, they also reproduce that noise with the same fidelity. As an alternative, we propose a novel family of GANs called noise robust GANs (NR-GANs), which can learn a clean image generator even when training images are noisy. In particular, NR-GANs can solve this problem without…
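The core idea can be sketched roughly as follows (a minimal sketch assuming the additive-noise formulation; all function names are illustrative, not the paper's API): the sample shown to the discriminator is the sum of a clean image and a noise map, each produced by its own generator, so the image generator is not forced to absorb the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def clean_image_generator(z):
    # Stand-in for a learned image generator: maps a latent to a "clean image".
    return np.tanh(z)

def noise_generator(z):
    # Stand-in for a learned noise generator: maps a latent to a noise map.
    return 0.1 * z

def nr_gan_sample(z_img, z_noise):
    # NR-GAN-style decomposition: the discriminator sees image + noise,
    # leaving the image generator free to stay clean.
    return clean_image_generator(z_img) + noise_generator(z_noise)

z_img = rng.standard_normal((4, 4))
z_noise = rng.standard_normal((4, 4))
noisy = nr_gan_sample(z_img, z_noise)
```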

Citations

Blur, Noise, and Compression Robust Generative Adversarial Networks

  • Takuhiro Kaneko, Tatsuya Harada
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
This work proposes a blur, noise, and compression robust GAN (BNCR-GAN) that can learn a clean image generator directly from degraded images without knowledge of the degradation parameters, introducing masking architectures that adjust degradation-strength values in a data-driven manner using bypasses before and after degradation.

DGL-GAN: Discriminator Guided Learning for GAN Compression

A novel yet simple Discriminator Guided Learning approach for compressing vanilla GANs, dubbed DGL-GAN, is proposed; it is motivated by the empirical observation that learning from the teacher discriminator can improve the performance of student GANs, and it achieves state-of-the-art results.

Regularizing Generative Adversarial Networks under Limited Data

This work proposes a regularization approach for training robust GAN models on limited data and theoretically shows a connection between the regularized loss and an f-divergence called LeCam-Divergence, which is more robust under limited training data.

Influence Estimation for Generative Adversarial Networks

An influence estimation method that uses the Jacobian of the gradient of the generator's loss with respect to the discriminator's parameters is proposed, together with a novel evaluation scheme in which the harmfulness of each training instance is assessed by how a GAN evaluation metric is expected to change upon removal of that instance.

Adaptive noise imitation for image denoising

A new Adaptive noise imitation (ADANI) algorithm is developed that can synthesize noisy data from naturally noisy images and is competitive to other networks trained with external paired data.

Unsupervised Learning of Depth and Depth-of-Field Effect from Natural Images with Aperture Rendering Generative Adversarial Networks

  • Takuhiro Kaneko
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
DoF mixture learning is developed, which enables the generator to learn the real image distribution while generating diverse DoF images, and a center-focus prior is devised to guide the learning direction and address the ambiguities introduced by the unsupervised setting.

Robust Vector Quantized-Variational Autoencoder

A robust generative model based on VQ-VAE, which is able to generate examples from inliers even if a large portion of the training data points are corrupted, is proposed and experimentally demonstrated.
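The quantization step at the heart of any VQ-VAE variant can be sketched as follows (an illustrative toy, not the paper's robust training procedure): each latent vector is snapped to its nearest codebook entry.

```python
import numpy as np

def vector_quantize(z, codebook):
    # VQ-VAE quantization step: snap each latent vector to its nearest
    # codebook entry under squared Euclidean distance.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
q, idx = vector_quantize(z, codebook)
```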

AR-NeRF: Unsupervised Learning of Depth and Defocus Effects from Natural Images with Aperture Rendering Neural Radiance Fields

  • Takuhiro Kaneko
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
An aperture rendering NeRF (AR-NeRF) is proposed, which can utilize viewpoint and defocus cues in a unified manner by representing both factors in a common ray-tracing framework; applied to various natural image datasets, the results demonstrate the utility of AR-NeRF for unsupervised learning of depth and defocus effects.

Noise2Grad: Extract Image Noise to Denoise

The proposed method forms the necessary supervision by extracting the noise from a noisy image and using it to synthesize new training data, with a noise removal module aiding noise extraction by reducing interference from the image background.

References

SHOWING 1-10 OF 90 REFERENCES

Label-Noise Robust Generative Adversarial Networks

This work proposes a novel family of GANs called label-noise robust GANs (rGANs), which, by incorporating a noise transition model, can learn a clean label-conditional generative distribution even when training labels are noisy.
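The noise transition model mentioned above can be sketched as a matrix acting on label distributions (a toy sketch with an assumed 3-class transition matrix; the values are illustrative): the generator's clean conditional label distribution is pushed through the transition matrix before being matched against noisy observed labels.

```python
import numpy as np

# Illustrative noise transition matrix T for 3 classes:
# T[i, j] = probability that clean label j is observed as noisy label i.
T = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

def corrupt_label_distribution(p_clean):
    # rGAN-style idea: route the clean label distribution through the
    # transition model so training can match noisy labels directly.
    return T @ p_clean

p_clean = np.array([1.0, 0.0, 0.0])  # one-hot clean label
p_noisy = corrupt_label_distribution(p_clean)
```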

Robustness of Conditional GANs to Noisy Labels

The main idea is to corrupt the label of each generated sample before feeding it to the adversarial discriminator, forcing the generator to produce samples with clean labels; the proposed approach is robust when used with a carefully chosen discriminator architecture.
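The corruption step described above can be sketched by sampling a flipped label from a per-class distribution (the flip probabilities here are assumed for illustration): because the discriminator only ever sees corrupted labels on both real and generated data, the generator's best strategy is to emit clean labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-class flip distributions: row j gives the distribution
# of the corrupted label given clean label j (rows sum to 1).
C = np.array([
    [0.9, 0.05, 0.05],
    [0.05, 0.9, 0.05],
    [0.05, 0.05, 0.9],
])

def corrupt(label):
    # Corrupt the generated sample's label before it reaches the
    # discriminator, matching the corruption applied to real labels.
    return rng.choice(len(C), p=C[label])

labels = np.array([corrupt(0) for _ in range(1000)])
frac_clean = np.mean(labels == 0)
```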

Image Blind Denoising with Generative Adversarial Network Based Noise Modeling

A novel two-step framework is proposed, in which a Generative Adversarial Network is trained to estimate the noise distribution over the input noisy images and to generate noise samples to train a deep Convolutional Neural Network for denoising.

On Self Modulation for Generative Adversarial Networks

This work proposes and studies an architectural modification, self-modulation, which improves GAN performance across different datasets, architectures, losses, regularizers, and hyperparameter settings.
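Self-modulation can be sketched as the generator's own latent producing a per-channel scale and shift for a hidden layer (a minimal sketch with assumed shapes and linear modulation networks; real implementations typically apply this within batch-norm layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def self_modulate(h, z, W_gamma, W_beta):
    # Self-modulation: the latent z drives per-channel scale and shift
    # applied to the hidden activations h.
    gamma = 1.0 + z @ W_gamma   # scale, centered at 1
    beta = z @ W_beta           # shift
    return gamma * h + beta

z = rng.standard_normal(8)                     # generator latent
h = rng.standard_normal(16)                    # hidden activations
W_gamma = 0.01 * rng.standard_normal((8, 16))  # toy modulation weights
W_beta = 0.01 * rng.standard_normal((8, 16))
h_mod = self_modulate(h, z, W_gamma, W_beta)
```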

Least Squares Generative Adversarial Networks

This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the LSGAN objective is equivalent to minimizing the Pearson χ² divergence.
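The least squares losses can be written out directly (a sketch using the common label choice of 0 for fake and 1 for real; the paper analyzes general labels a, b, c):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # Least-squares discriminator loss with fake label a and real label b.
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    # Generator pushes fake scores toward c, the value the discriminator
    # assigns to real data.
    return 0.5 * np.mean((d_fake - c) ** 2)

d_real = np.array([0.9, 1.1])  # toy discriminator outputs on real data
d_fake = np.array([0.2, 0.0])  # toy discriminator outputs on fakes
```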

Improved Training of Wasserstein GANs

This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
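The gradient penalty can be sketched as follows (a toy sketch using a linear critic so the input gradient is analytic; real implementations compute it via automatic differentiation): sample points on lines between real and fake data, then penalize the critic's input-gradient norm away from 1.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.standard_normal(4)  # toy linear critic D(x) = w . x, so grad_x D = w

def gradient_penalty(x_real, x_fake, lam=10.0):
    # WGAN-GP: interpolate between real and fake samples, then penalize
    # the critic's input-gradient norm away from 1.  For this linear toy
    # critic the gradient is w everywhere, so the penalty is constant.
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1 - eps) * x_fake   # interpolated points
    grad = np.tile(w, (x_hat.shape[0], 1))      # analytic gradient of D
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

x_real = rng.standard_normal((8, 4))
x_fake = rng.standard_normal((8, 4))
gp = gradient_penalty(x_real, x_fake)
```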

Diversity-Sensitive Conditional Generative Adversarial Networks

It is shown that simple addition of the proposed regularization to existing models leads to surprisingly diverse generations, substantially outperforming the previous approaches for multi-modal conditional generation specifically designed in each individual task.
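The regularization can be sketched as rewarding output change per unit of latent change (a sketch with an assumed toy generator and a cap `tau` on the ratio; the generator maximizes this term alongside its adversarial loss):

```python
import numpy as np

def diversity_regularizer(g, z1, z2, tau=1.0):
    # Diversity-sensitive term: reward outputs that move when the latent
    # moves, with the ratio capped at tau for stability.
    num = np.linalg.norm(g(z1) - g(z2))
    den = np.linalg.norm(z1 - z2)
    return min(num / den, tau)

g = lambda z: np.tanh(2.0 * z)   # illustrative generator
z1 = np.array([0.1, -0.2])
z2 = np.array([0.3, 0.1])
r = diversity_regularizer(g, z1, z2)
```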

Self-Attention Generative Adversarial Networks

The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
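The self-attention block can be sketched over a flattened feature map (a minimal single-head sketch with assumed shapes; SAGAN additionally uses 1×1 convolutions and a learned residual gate): every spatial position attends to every other position, then the result is added back residually.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x, Wq, Wk, Wv):
    # Non-local self-attention over spatial positions: each row of x is
    # one position; attention weights are a softmax over all positions.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    logits = q @ k.T
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return x + attn @ v   # residual connection, as in SAGAN

n, c = 16, 8                      # 16 spatial positions, 8 channels
x = rng.standard_normal((n, c))   # flattened feature map
Wq = Wk = Wv = 0.1 * np.eye(c)    # toy projection weights
y = self_attention(x, Wq, Wk, Wv)
```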

Self-Supervised GANs via Auxiliary Rotation Loss

This work allows the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game, and takes a step towards bridging the gap between conditional and unconditional GANs.
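The auxiliary task can be sketched directly (a sketch of the self-supervised signal only; the paper trains the discriminator to predict these rotation targets alongside the GAN game): each image is rotated by 0, 90, 180, and 270 degrees, and the rotation index serves as a free label.

```python
import numpy as np

def rotation_batch(img):
    # Auxiliary self-supervised task: produce the four rotations of an
    # image together with their rotation-class targets.
    return [np.rot90(img, k) for k in range(4)], [0, 1, 2, 3]

img = np.arange(9).reshape(3, 3)
views, targets = rotation_batch(img)
```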

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
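The truncation trick can be sketched as resampling latent entries whose magnitude exceeds a threshold (an illustrative rejection-sampling sketch; the threshold value is an assumption, and lower thresholds trade variety for fidelity):

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(size, threshold=0.5):
    # Truncation trick: resample latent entries whose magnitude exceeds
    # the threshold, shrinking the effective variance of the input.
    z = rng.standard_normal(size)
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > threshold
    return z

z = truncated_normal(128, threshold=0.5)
```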
...