Corpus ID: 53212997

Robustness of Conditional GANs to Noisy Labels

@inproceedings{Thekumparampil2018RobustnessOC,
  title={Robustness of Conditional GANs to Noisy Labels},
  author={K. K. Thekumparampil and Ashish Khetan and Zinan Lin and Sewoong Oh},
  booktitle={NeurIPS},
  year={2018}
}
We study the problem of learning conditional generators from noisy labeled samples, where the labels are corrupted by random noise. [...] This approach of passing generated labels through a matching noisy channel is justified by multiplicative approximation bounds between the loss of the RCGAN and the distance between the clean real distribution and the generator distribution. This shows that the proposed approach is robust when used with a carefully chosen discriminator architecture, known as [...]
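A minimal sketch of the key step described above: the labels the generator conditioned on are passed through the same noisy channel that corrupted the real labels, so the pair statistics match at the discriminator. The confusion matrix C and the helper corrupt_labels below are illustrative assumptions, not the authors' code.

import numpy as np

# Row-stochastic confusion matrix (an assumed toy example):
# C[i, j] = P(noisy label = j | clean label = i).
C = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def corrupt_labels(clean_labels, C, rng):
    """Pass each label through the noisy channel C, one draw per label."""
    return np.array([rng.choice(len(C), p=C[y]) for y in clean_labels])

rng = np.random.default_rng(0)
# Labels the generator conditioned on (clean by construction) ...
generated_labels = rng.integers(0, 2, size=8)
# ... are corrupted before the discriminator sees them, so that
# (generated sample, corrupted label) matches the statistics of
# (real sample, noisy label).
noisy_generated = corrupt_labels(generated_labels, C, rng)
print(generated_labels, noisy_generated)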
Citations

Robust conditional GANs under missing or uncertain labels
TLDR: A new training algorithm is designed that is robust to missing or ambiguous labels: the labels of generated examples are intentionally corrupted to match the statistics of the real data, and a discriminator processes the real and generated examples with these corrupted labels.
GANs for learning from very high class conditional noisy labels
TLDR: Using the Friedman F test and the Nemenyi post-hoc test, it is shown that on high-dimensional binary-class synthetic, MNIST, and Fashion-MNIST datasets, the GAN schemes outperform the existing methods and demonstrate consistent performance across noise rates.
Noise Robust Generative Adversarial Networks
  • Takuhiro Kaneko, T. Harada
  • Computer Science, Engineering
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR: Noise-robust GANs (NR-GANs) are proposed, which can learn a clean image generator even when the training images are noisy; their applicability to image denoising is also demonstrated, showing the effectiveness of the approach for noise-robust image generation.
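A heavily simplified sketch of the underlying decomposition idea, under the assumption of additive noise: one network generates a clean image and another generates a noise map, and only their sum is shown to the discriminator. The layer sizes and names are illustrative, not the authors' architecture.

import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64  # toy sizes; real NR-GANs use deep conv nets

image_gen = nn.Sequential(nn.Linear(latent_dim, img_dim), nn.Tanh())
noise_gen = nn.Sequential(nn.Linear(latent_dim, img_dim))

z_img = torch.randn(8, latent_dim)
z_noise = torch.randn(8, latent_dim)

clean = image_gen(z_img)      # clean image estimate
noise = noise_gen(z_noise)    # generated noise map
observed = clean + noise      # additive composition shown to the discriminator

# The discriminator only ever sees `observed`, which is matched against the
# (noisy) training images; `clean` is the denoised generator output.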
Generative Pseudo-label Refinement for Unsupervised Domain Adaptation
We investigate and characterize the inherent resilience of conditional Generative Adversarial Networks (cGANs) against noise in their conditioning labels, and exploit this fact in the context of [...]
Robust Generative Adversarial Network
TLDR: This work designs a robust optimization framework in which the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball, and proves that robustness in a small neighborhood of the training sets can lead to better generalization.
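One common way to approximate such a worst-case inner step is a PGD-style loop over bounded per-sample perturbations; the sketch below is an illustration under that assumption, not the paper's exact procedure (radius, step count, and learning rate are arbitrary).

import torch

def worst_case_perturb(x, loss_fn, radius=0.05, steps=3, lr=0.02):
    """Approximate the worst-case sample inside a small L2 ball around x
    by gradient ascent on the training loss (a PGD-style inner loop)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(x + delta)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad
            # Project back onto the L2 ball of the given radius.
            norm = delta.flatten(1).norm(dim=1, keepdim=True).clamp(min=1e-12)
            factor = (radius / norm).clamp(max=1.0)
            delta *= factor.view(-1, *([1] * (x.dim() - 1)))
        delta.grad.zero_()
    return (x + delta).detach()

critic = torch.nn.Linear(10, 1)        # stand-in loss source
x = torch.randn(4, 10)
x_worst = worst_case_perturb(x, lambda v: critic(v).mean())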
RoCGAN: Robust Conditional GAN
TLDR: This work introduces a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue of noise, augmenting the generator with an unsupervised pathway that promotes the generator's outputs to span the target manifold even in the presence of intense noise.
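A minimal sketch of how such an unsupervised pathway can be wired in, assuming (as in the RoCGAN description) that it is an autoencoder over the target domain sharing the generator's decoder; all layer sizes and module names here are illustrative.

import torch
import torch.nn as nn

dim = 32
enc_src = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())  # supervised encoder
enc_tgt = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())  # unsupervised encoder
decoder = nn.Sequential(nn.Linear(16, dim))             # decoder shared by both

source, target = torch.randn(8, dim), torch.randn(8, dim)
pred = decoder(enc_src(source))   # generator output (adversarial + content loss)
recon = decoder(enc_tgt(target))  # autoencoder pathway over the target domain
# The reconstruction loss regularizes the shared decoder toward the
# target manifold, even when the supervised inputs are noisy.
loss_reg = nn.functional.l1_loss(recon, target)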
Robust GANs against Dishonest Adversaries
[Figure: an adversary, e.g. a contaminated channel, a dishonest discriminator, or a privacy constraint, sits between generation and discrimination.]
Robustness of deep learning models is a property that has recently gained increasing attention. We explore a notion of robustness for generative adversarial models that is pertinent to their internal [...]
TLDR: A notion of robustness for generative adversarial models is formally defined, and it is shown that the GAN in its original form is not robust; variations of GANs are suggested that are indeed more robust to noisy attacks and have overall more stable training behavior.
Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise
TLDR: A novel multi-level generative model for partial label learning (MGPLL) is proposed, which tackles the problem by learning both a label-level adversarial generator and a feature-level adversarial generator under a bi-directional mapping framework between label vectors and data samples.
ExpertNet: Adversarial Learning and Recovery Against Noisy Labels
TLDR: This paper proposes a novel framework, ExpertNet, composed of an Amateur and an Expert that iteratively learn from each other; it achieves robust classification across a wide range of noise ratios with as little as 20-50% of the training data, compared to state-of-the-art deep models that solely focus on distilling the impact of noisy labels.

References

Showing 1-10 of 60 references
Robust GANs against Dishonest Adversaries
TLDR: A notion of robustness for generative adversarial models is formally defined, and it is shown that the GAN in its original form is not robust; variations of GANs are suggested that are indeed more robust to noisy attacks and have overall more stable training behavior.
On the Discrimination-Generalization Tradeoff in GANs
TLDR: This paper shows that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions, and develops generalization bounds between the learned distribution and the true distribution under different evaluation metrics.
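The statement above concerns the integral probability metric induced by the discriminator set; as a worked restatement (standard IPM notation, not necessarily the paper's exact symbols):

% The distance induced by a discriminator set F is
\[
  d_{\mathcal{F}}(\mu, \nu)
  = \sup_{f \in \mathcal{F}}
    \left| \mathbb{E}_{x \sim \mu} f(x) - \mathbb{E}_{x \sim \nu} f(x) \right| .
\]
% Discriminativeness means d_F(mu, nu) = 0 implies mu = nu. If span(F) is
% dense in the bounded continuous functions C_b(X), then d_F(mu, nu) = 0
% forces E_mu[f] = E_nu[f] for every f in C_b(X), and hence mu = nu.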
Some Theoretical Properties of GANs
TLDR: The deep connection between the adversarial principle underlying GANs and the Jensen-Shannon divergence is studied, together with some optimality characteristics of the problem.
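For context, the classical connection is the standard derivation from the original GAN analysis (restated here, not this paper's notation):

% For a fixed generator density q, the GAN objective
% V(D, G) = E_{x~p}[log D(x)] + E_{x~q}[log(1 - D(x))]
% is maximized by the discriminator
\[
  D^{*}(x) = \frac{p(x)}{p(x) + q(x)},
\]
% and substituting it back gives
\[
  \max_{D} V(D, G) = 2\,\mathrm{JSD}(p \,\|\, q) - \log 4,
\]
% so minimizing over G minimizes the Jensen-Shannon divergence to p.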
Approximability of Discriminators Implies Diversity in GANs
TLDR: It is shown in this paper that GANs can in principle learn distributions in Wasserstein distance with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators).
PacGAN: The Power of Two Samples in Generative Adversarial Networks
TLDR: It is shown that packing naturally penalizes generators with mode collapse, thereby favoring generator distributions with less mode collapse during training, and numerical experiments suggest that packing provides significant improvements in practice as well.
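A minimal sketch of packing as the PacGAN paper describes it: the discriminator judges m samples jointly rather than one at a time. The toy dimensions and network below are assumptions for illustration.

import torch
import torch.nn as nn

pack, feat = 2, 64  # pack m = 2 samples per discriminator input

# A packed discriminator simply takes the concatenation of m samples.
disc = nn.Sequential(nn.Linear(pack * feat, 128), nn.ReLU(),
                     nn.Linear(128, 1))

samples = torch.randn(16, feat)            # 16 real or generated samples
packed = samples.reshape(-1, pack * feat)  # -> 8 packed inputs
scores = disc(packed)                      # one score per pack
# A mode-collapsed generator produces near-duplicate samples, which are
# easy to spot once the discriminator sees m of them side by side.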
Improved Techniques for Training GANs
TLDR: This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
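One of the techniques from that paper, feature matching, is compact enough to sketch: the generator matches the mean intermediate-layer statistics of the discriminator on real versus generated data. The toy shapes and network below are assumptions.

import torch
import torch.nn as nn

feat_extractor = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # D's hidden layer

real, fake = torch.randn(32, 64), torch.randn(32, 64)
# Feature matching: the generator minimizes || E[f(x)] - E[f(G(z))] ||^2
# over the mean activations of an intermediate discriminator layer.
loss_fm = (feat_extractor(real).mean(0) - feat_extractor(fake).mean(0)).pow(2).sum()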
Learning from Simulated and Unsupervised Images through Adversarial Training
TLDR: This work develops a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.
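The annotation-preserving modification mentioned above is a self-regularization term; schematically, in the standard form from the paper (symbols restated here):

% The refiner R is trained with an adversarial term plus a self-regularization
% term that keeps the refined image close to the synthetic input x_i,
% preserving its annotations:
\[
  \mathcal{L}_R(\theta)
  = \sum_i -\log\bigl(1 - D\bigl(R_\theta(x_i)\bigr)\bigr)
  + \lambda \,\bigl\| R_\theta(x_i) - x_i \bigr\|_1 .
\]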
Learning with Noisy Labels
TLDR: The problem of binary classification in the presence of random classification noise is studied theoretically: the learner sees labels that have independently been flipped with some small probability, and methods used in practice, such as the biased SVM and weighted logistic regression, are shown to be provably noise-tolerant.
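The core device in that line of work is an unbiased estimator of the clean loss built from noisy labels; with labels y in {+1, -1} and class-conditional flip probabilities (notation restated here):

% Method of unbiased estimators: define, for flip rates rho_{+1}, rho_{-1},
\[
  \tilde{\ell}(t, y)
  = \frac{(1 - \rho_{-y})\,\ell(t, y) - \rho_{y}\,\ell(t, -y)}
         {1 - \rho_{+1} - \rho_{-1}},
  \qquad
  \mathbb{E}_{\tilde{y}}\bigl[\tilde{\ell}(t, \tilde{y})\bigr] = \ell(t, y),
\]
% so minimizing the surrogate loss on noisy labels minimizes the clean
% risk in expectation.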
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
TLDR: This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GANs), in which a model attempts to generate realistic samples and a discriminator attempts to tell these apart from data samples.
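For reference, a small sketch of the squared MMD statistic (the biased estimator, for brevity) with a Gaussian kernel, which is the quantity the paper optimizes over kernel parameters; the bandwidth value is an arbitrary assumption.

import torch

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased estimator of squared MMD with a Gaussian RBF kernel."""
    def kernel(a, b):
        # Pairwise squared distances -> Gaussian kernel matrix.
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * bandwidth**2))
    return kernel(x, x).mean() - 2 * kernel(x, y).mean() + kernel(y, y).mean()

x, y = torch.randn(100, 8), torch.randn(100, 8) + 0.5
print(mmd2_biased(x, y))  # larger when the two samples differ more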
Improved Training of Wasserstein GANs
TLDR: This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
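The penalty itself is compact enough to sketch; this follows the standard WGAN-GP formulation, with a stand-in toy critic.

import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP: penalize (||grad_x D(x_hat)||_2 - 1)^2 at random
    interpolates x_hat between real and fake samples."""
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads, = torch.autograd.grad(scores.sum(), x_hat, create_graph=True)
    return lam * (grads.norm(2, dim=1) - 1).pow(2).mean()

real, fake = torch.randn(32, 64), torch.randn(32, 64)
print(gradient_penalty(critic, real, fake))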