Corpus ID: 220363797

Learning from Failure: Training Debiased Classifier from Biased Classifier

@article{Nam2020LearningFF,
  title={Learning from Failure: Training Debiased Classifier from Biased Classifier},
  author={J. Nam and Hyuntak Cha and Sungsoo Ahn and Jaeho Lee and Jinwoo Shin},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.02561}
}
Neural networks often learn to make predictions that rely overly on spurious correlations existing in the dataset, which causes the model to be biased. While previous work tackles this issue with domain-specific knowledge or explicit supervision on the spuriously correlated attributes, we instead tackle a more challenging setting where such information is unavailable. To this end, we first observe that neural networks learn to rely on the spurious correlation only when it is "easier" to learn…
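The failure-based reweighting idea in the abstract can be sketched as follows: a biased model amplifies its reliance on "easy" spurious cues, and each sample is then weighted by how much harder the biased model finds it relative to the debiased model, following the relative-difficulty weight W(x) = CE_biased / (CE_biased + CE_debiased) from the paper. This is a minimal illustrative sketch, not the authors' implementation; the function name and the toy loss values are assumptions.

```python
import numpy as np

def relative_difficulty_weights(ce_biased, ce_debiased, eps=1e-8):
    """Upweight samples the biased model finds hard (likely bias-conflicting).

    W(x) = CE_b(x) / (CE_b(x) + CE_d(x)): near 1 when the biased model
    fails on x, near 0 when it fits x easily.
    """
    ce_biased = np.asarray(ce_biased, dtype=float)
    ce_debiased = np.asarray(ce_debiased, dtype=float)
    return ce_biased / (ce_biased + ce_debiased + eps)

# Toy values: a bias-aligned sample the biased model fits easily (low CE)
# gets a small weight; a bias-conflicting sample it fails on gets a weight
# near 1, steering the debiased model toward the hard examples.
w = relative_difficulty_weights([0.01, 2.3], [1.0, 1.0])
```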
An Investigation of Critical Issues in Bias Mitigation Techniques
It is found that bias-mitigation algorithms exploit hidden biases, are unable to scale to multiple forms of bias, and are highly sensitive to the choice of tuning set; the community is urged to adopt more rigorous assessment of future bias mitigation methods.
BiaSwap: Removing dataset bias with bias-tailored swapping augmentation
Deep neural networks often make decisions based on the spurious correlations inherent in the dataset, failing to generalize to an unbiased data distribution. Although previous approaches pre-define…
Environment Inference for Invariant Learning
EIIL is proposed, a general framework for domain-invariant learning that incorporates environment inference to directly infer partitions that are maximally informative for downstream invariant learning; connections are established between EIIL and algorithmic fairness.
Learning Debiased Representation via Disentangled Feature Augmentation
This paper proposes a novel feature-level data augmentation technique that synthesizes bias-conflicting samples containing the diverse intrinsic attributes of bias-aligned samples by swapping their latent features.
Latent Adversarial Debiasing: Mitigating Collider Bias in Deep Neural Networks
It is argued that the cause of failure is a combination of the deep structure of neural networks and the greedy, gradient-driven learning process used, one that prefers easy-to-compute signals when available.
EnD: Entangling and Disentangling deep representations for bias correction
EnD, a regularization strategy aimed at preventing deep models from learning unwanted biases, is proposed; it effectively improves generalization on unbiased test sets and can be applied in real-case scenarios, such as removing hidden biases in COVID-19 detection from radiographic images.
Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization
It is demonstrated that the simplicity bias can be mitigated and OOD generalization improved; the method, the first to evade the simplicity bias, highlights the need for a better understanding and control of inductive biases in deep learning.
Just Train Twice: Improving Group Robustness without Training Group Information
This paper proposes a simple two-stage approach, JTT, that minimizes the loss over a reweighted dataset, upweighting training examples that are misclassified at the end of a few steps of standard training, leading to improved worst-group performance.
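The two-stage recipe reads directly as code: train briefly with standard ERM, collect the error set, then retrain with those examples upweighted. A minimal sketch, assuming the first-stage predictions are already available; the helper name and the upweight value are illustrative (the paper treats the upweighting factor as a hyperparameter).

```python
import numpy as np

def jtt_weights(first_stage_preds, labels, upweight=20.0):
    """Second-stage sample weights for a JTT-style pipeline: examples the
    briefly-trained first-stage model misclassified form the error set and
    are upweighted by `upweight`; all other examples keep weight 1."""
    preds = np.asarray(first_stage_preds)
    labels = np.asarray(labels)
    error_set = preds != labels
    return np.where(error_set, float(upweight), 1.0)

# The third example is misclassified, so it dominates the second-stage loss.
w = jtt_weights([0, 1, 1, 0], [0, 1, 0, 0], upweight=20.0)
```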
Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization
  • Wei Zhu, Haitian Zheng, Haofu Liao, Weijian Li, Jiebo Luo
  • Computer Science
  • ArXiv
  • 2021
This work proposes to remove the bias information misused by the target task with a cross-sample adversarial debiasing (CSAD) method, and proposes joint content and local structural representation learning to boost mutual information estimation for better performance.
Unsupervised Learning of Debiased Representations with Pseudo-Attributes
This work proposes a simple but effective unsupervised debiasing technique that performs clustering on the feature embedding space and identifies pseudo-attributes by taking advantage of the clustering results, without explicit attribute supervision.

References

Showing 1-10 of 30 references
Learning Not to Learn: Training Deep Neural Networks With Biased Data
A novel regularization algorithm is proposed to train deep neural networks when the training data is severely biased, along with an iterative algorithm to unlearn the bias information.
Learning Robust Representations by Projecting Superficial Statistics Out
This work aims to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training, and incorporates the gray-level co-occurrence matrix (GLCM) to extract patterns that prior knowledge suggests are superficial.
Robust Inference via Generative Classifiers for Handling Noisy Labels
This work proposes a novel inference method, termed Robust Generative classifier (RoG), applicable to any discriminative neural classifier pre-trained on noisy datasets, and proves that RoG generalizes better than baselines under noisy labels.
REPAIR: Removing Representation Bias by Dataset Resampling
  • Y. Li, N. Vasconcelos
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
Experiments with synthetic and action recognition data show that dataset REPAIR can significantly reduce representation bias and lead to improved generalization of models trained on REPAIRed datasets.
Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization
The results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization; a stochastic optimization algorithm with convergence guarantees is introduced to efficiently train group DRO models.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
A theoretically grounded set of noise-robust loss functions, which can be seen as a generalization of MAE and CCE, is presented; these losses can be readily applied with any existing DNN architecture and algorithm while yielding good performance in a wide range of noisy-label scenarios.
A Closer Look at Memorization in Deep Networks
The analysis suggests that dataset-independent notions of effective capacity are unlikely to explain the generalization performance of deep networks trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization.
Learning Multiple Layers of Features from Tiny Images
It is shown how to train a multi-layer generative model that learns to extract meaningful features resembling those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.