# Label-Noise Robust Generative Adversarial Networks

@article{Kaneko2019LabelNoiseRG,
  title={Label-Noise Robust Generative Adversarial Networks},
  author={Takuhiro Kaneko and Y. Ushiku and Tatsuya Harada},
  journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={2462-2471}
}
• Published 27 November 2018
• Computer Science
Generative adversarial networks (GANs) are a framework that learns a generative distribution through adversarial training. Recently, their class-conditional extensions (e.g., conditional GAN (cGAN) and auxiliary classifier GAN (AC-GAN)) have attracted much attention owing to their ability to learn disentangled representations and to improve training stability. However, their training requires the availability of large-scale, accurately class-labeled data, which are often laborious or…
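The noisy-label setting the abstract refers to is commonly simulated with a class-conditional noise transition matrix `T`, where `T[i][j]` is the probability that a clean label `i` is recorded as `j`. A minimal sketch of symmetric label noise (a hypothetical illustration, not the paper's code; function names are mine):

```python
import random

def make_symmetric_transition(num_classes, noise_rate):
    """Transition matrix T where T[i][j] = p(noisy label j | clean label i)."""
    off = noise_rate / (num_classes - 1)
    return [[1.0 - noise_rate if i == j else off for j in range(num_classes)]
            for i in range(num_classes)]

def corrupt_labels(labels, transition, rng):
    """Sample a noisy label for each clean label according to T."""
    noisy = []
    for y in labels:
        r, cum = rng.random(), 0.0
        for j, p in enumerate(transition[y]):
            cum += p
            if r < cum:
                noisy.append(j)
                break
        else:  # guard against float rounding in the cumulative sum
            noisy.append(len(transition[y]) - 1)
    return noisy

rng = random.Random(0)
T = make_symmetric_transition(num_classes=10, noise_rate=0.3)
clean = [rng.randrange(10) for _ in range(10000)]
noisy = corrupt_labels(clean, T, rng)
flip_rate = sum(c != n for c, n in zip(clean, noisy)) / len(clean)
```

The empirical flip rate converges to the configured `noise_rate`; a label-noise robust GAN must learn the clean conditional distribution despite training only on `noisy`.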
## 44 Citations

### Noise Robust Generative Adversarial Networks

• Computer Science
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2020
Noise robust GANs (NR-GANs) are proposed, which can learn a clean image generator even when training images are noisy; the applicability of NR-GANs to image denoising is also shown, demonstrating the effectiveness of the networks for noise-robust image generation.

• Computer Science
ArXiv
• 2020
This work designs a robust optimization framework in which the generator and discriminator compete with each other in a *worst-case* setting within a small Wasserstein ball, and proves that robustness in a small neighborhood of the training sets can lead to better generalization.

### Learning Fast Converging, Effective Conditional Generative Adversarial Networks with a Mirrored Auxiliary Classifier

• Z. Wang
• Computer Science
2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
• 2021
This paper proposes a novel conditional GAN architecture with a mirrored auxiliary classifier (MAC-GAN) in its discriminator for the purpose of label conditioning, which improves the quality of image synthesis compared with state-of-the-art approaches.

### Regularizing Generative Adversarial Networks under Limited Data

• Computer Science
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
This work proposes a regularization approach for training robust GAN models on limited data and theoretically shows a connection between the regularized loss and an f-divergence called LeCam-Divergence, which is more robust under limited training data.

### GANs for learning from very high class conditional noisy labels

• Computer Science
ArXiv
• 2020
Using the Friedman F test and Nemenyi post hoc test, it is shown that on high-dimensional binary-class synthetic, MNIST, and Fashion-MNIST datasets, the GAN schemes outperform existing methods and demonstrate consistent performance across noise rates.

### RoCGAN: Robust Conditional GAN

• Computer Science
International Journal of Computer Vision
• 2020
This work introduces a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue of noise, and augment the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold, even in the presence of intense noise.

### Blur, Noise, and Compression Robust Generative Adversarial Networks

• Computer Science
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
This work proposes blur, noise, and compression robust GAN (BNCR-GAN) that can learn a clean image generator directly from degraded images without knowledge of degradation parameters, and introduces masking architectures adjusting degradation strength values in a data-driven manner using bypasses before and after degradation.

### Adversarial Partial Multi-Label Learning with Label Disambiguation

• Computer Science
AAAI
• 2021
A novel adversarial learning model, PML-GAN, under a generalized encoder-decoder framework for partial multi-label learning is proposed, which enhances the correspondence of input features with the output labels in a bi-directional mapping.

### ExpertNet: Adversarial Learning and Recovery Against Noisy Labels

• Computer Science
ArXiv
• 2020
This paper proposes a novel framework, ExpertNet, composed of Amateur and Expert modules that iteratively learn from each other; it achieves robust classification across a wide range of noise ratios with as little as 20–50% of the training data, compared to state-of-the-art deep models that solely focus on distilling the impact of noisy labels.

## References

Showing 1–10 of 98 references

### Improved Training of Wasserstein GANs

• Computer Science
NIPS
• 2017
This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
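The penalty described above can be made concrete with a toy critic whose input gradient is known in closed form. A hedged pure-Python sketch (a linear critic stands in for a real network here, so no autograd is needed; the names are mine, not from the paper):

```python
import math

def gradient_penalty(w, lam=10.0):
    """WGAN-GP penalty lam * (||grad_x D(x_hat)|| - 1)^2 for a linear
    critic D(x) = w . x, whose gradient w.r.t. the input is w everywhere."""
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2

def interpolate(x_real, x_fake, eps):
    """x_hat sampled on the line between a real and a generated example,
    where the penalty is evaluated in WGAN-GP."""
    return [eps * r + (1 - eps) * f for r, f in zip(x_real, x_fake)]
```

A critic whose input gradient already has unit norm incurs zero penalty, which is the 1-Lipschitz behavior the penalty encourages.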

### Self-Attention Generative Adversarial Networks

• Computer Science
ICML
• 2019
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.

### Improved Techniques for Training GANs

• Computer Science
NIPS
• 2016
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

### Least Squares Generative Adversarial Networks

• Computer Science
2017 IEEE International Conference on Computer Vision (ICCV)
• 2017
This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the LSGAN objective amounts to minimizing the Pearson χ² divergence.
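The least-squares objective is simple enough to state directly. A minimal sketch (targets `a=0`, `b=1`, `c=1` follow the common LSGAN convention; the helper names are mine):

```python
def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Least-squares discriminator loss: push real scores to b, fake to a."""
    real_term = sum((d - b) ** 2 for d in d_real) / len(d_real)
    fake_term = sum((d - a) ** 2 for d in d_fake) / len(d_fake)
    return 0.5 * (real_term + fake_term)

def lsgan_g_loss(d_fake, c=1.0):
    """Generator loss: make fake scores look like c (the 'real' target)."""
    return 0.5 * sum((d - c) ** 2 for d in d_fake) / len(d_fake)
```

Unlike the sigmoid cross-entropy loss, these quadratic terms keep penalizing samples that are classified correctly but lie far from the decision boundary, which is the source of LSGAN's more stable gradients.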

### Learning from Simulated and Unsupervised Images through Adversarial Training

• Computer Science
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
• 2017
This work develops a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.

### Learning What and Where to Draw

• Computer Science
NIPS
• 2016
This work proposes a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location, and shows high-quality 128 x 128 image synthesis on the Caltech-UCSD Birds dataset.

### Conditional Generative Adversarial Nets

• Computer Science
ArXiv
• 2014
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the conditioning data, y, to both the generator and the discriminator; it is shown that this model can generate MNIST digits conditioned on class labels.
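The conditioning mechanism described here — feeding y alongside the input — is often implemented as plain concatenation of a one-hot label code onto the noise vector (and, analogously, onto the discriminator's input). A minimal sketch (a hypothetical illustration, not the paper's code):

```python
def one_hot(label, num_classes):
    """One-hot encoding of an integer class label."""
    return [1.0 if i == label else 0.0 for i in range(num_classes)]

def condition_input(z, label, num_classes):
    """cGAN-style conditioning: append the label's one-hot code to the
    noise vector z before it enters the generator."""
    return z + one_hot(label, num_classes)

x = condition_input([0.1, -0.2], label=3, num_classes=5)
# x == [0.1, -0.2, 0.0, 0.0, 0.0, 1.0, 0.0]
```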

### Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect

• Computer Science
ICLR
• 2018
This paper proposes a novel approach to enforcing the Lipschitz continuity in the training procedure of WGANs, which gives rise to not only better photo-realistic samples than the previous methods but also state-of-the-art semi-supervised learning results.

### mixup: Beyond Empirical Risk Minimization

• Computer Science
ICLR
• 2018
This work proposes mixup, a simple learning principle that trains a neural network on convex combinations of pairs of examples and their labels, which improves the generalization of state-of-the-art neural network architectures.
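The convex combination mixup trains on can be written in a few lines; a minimal sketch (in practice the mixing weight `lam` is drawn from a Beta(α, α) distribution, e.g. via `random.betavariate`; the fixed `lam` below is just for illustration):

```python
def mixup(x1, y1, x2, y2, lam):
    """Convex combination of two examples and their one-hot labels."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

x, y = mixup([0.0, 4.0], [1.0, 0.0], [2.0, 0.0], [0.0, 1.0], lam=0.5)
# x == [1.0, 2.0], y == [0.5, 0.5]
```

The soft label y means the network is trained to behave linearly between training examples, which is the regularization effect the paper credits for the improved generalization.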