Corpus ID: 246633986

Diversify and Disambiguate: Learning From Underspecified Data

@article{Lee2022DiversifyAD,
  title={Diversify and Disambiguate: Learning From Underspecified Data},
  author={Yoonho Lee and Huaxiu Yao and Chelsea Finn},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.03418}
}
Many datasets are underspecified: they admit several equally viable solutions. Underspecified datasets can be problematic for methods that learn a single hypothesis, because different functions that achieve low training loss can rely on different predictive features and thus make widely varying predictions on out-of-distribution data. We propose DivDis, a simple two-stage framework that first learns a diverse collection of hypotheses for a task by leveraging unlabeled…
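A minimal sketch of the two-stage idea described in the abstract, assuming a shared backbone with several classification heads; the pairwise product penalty stands in for the paper's disagreement objective, and the head count, loss weights, and head-selection rule are illustrative choices, not the paper's exact formulation:

import torch
import torch.nn.functional as F

class MultiHeadNet(torch.nn.Module):
    def __init__(self, backbone, feat_dim, n_classes, n_heads=4):
        super().__init__()
        self.backbone = backbone
        self.heads = torch.nn.ModuleList(
            [torch.nn.Linear(feat_dim, n_classes) for _ in range(n_heads)]
        )

    def forward(self, x):
        z = self.backbone(x)
        return [h(z) for h in self.heads]  # one logit tensor per head

def diversify_loss(logits_labeled, y, logits_unlabeled, repulsion=1.0):
    # Stage 1 (diversify): every head must fit the labeled data...
    ce = sum(F.cross_entropy(l, y) for l in logits_labeled)
    # ...while heads are pushed to disagree on unlabeled target data.
    probs = [F.softmax(l, dim=-1) for l in logits_unlabeled]
    div = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            # Penalize prediction overlap between each pair of heads
            # (an illustrative stand-in for a mutual-information term).
            div = div + (probs[i] * probs[j]).sum(dim=-1).mean()
    return ce + repulsion * div

def pick_head(model, x_few, y_few):
    # Stage 2 (disambiguate): score each head on a handful of labeled
    # target examples and keep the most accurate one.
    with torch.no_grad():
        accs = [(l.argmax(-1) == y_few).float().mean() for l in model(x_few)]
    return int(torch.stack(accs).argmax())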
OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses
TLDR: Proposes modifying the network architecture to impose inductive biases that make the network robust to dataset bias; OccamNets favor simpler solutions by design, and combining state-of-the-art debiasing methods with OccamNets improves results further.
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
TLDR: Demonstrates that simple last-layer retraining on large ImageNet-trained models can match or outperform state-of-the-art approaches on spurious-correlation benchmarks, at substantially lower complexity and computational cost.
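A minimal sketch of last-layer retraining, assuming features have already been extracted from a frozen pretrained backbone and that group labels are available on a held-out set; the group-balancing rule shown is one common, illustrative choice:

import numpy as np
from sklearn.linear_model import LogisticRegression

def balance_by_group(groups):
    # Subsample indices so every group is equally represented.
    groups = np.asarray(groups)
    n = min(np.sum(groups == g) for g in np.unique(groups))
    return np.concatenate(
        [np.random.choice(np.where(groups == g)[0], n, replace=False)
         for g in np.unique(groups)]
    )

def last_layer_retrain(features, labels, groups, C=1.0):
    # Refit only the linear classifier on top of the frozen features,
    # using the group-balanced subsample.
    idx = balance_by_group(groups)
    clf = LogisticRegression(C=C, max_iter=1000)
    clf.fit(features[idx], labels[idx])
    return clf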
Diverse Weight Averaging for Out-of-Distribution Generalization
TLDR: Proposes Diverse Weight Averaging (DiWA), which averages the weights obtained from several independent training runs rather than from a single run, and motivates the need for diversity with a new bias-variance-covariance-locality decomposition of the expected error.
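A minimal sketch of averaging weights across independent runs, assuming all checkpoints share the same architecture and fine-tuning initialization (which is what makes naive parameter averaging meaningful):

import torch

def average_weights(state_dicts):
    # Parameter-wise mean over checkpoints from independent runs.
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return avg

# Usage: load checkpoints from several runs, average, evaluate once.
# model.load_state_dict(average_weights([torch.load(p) for p in paths]))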

References

SHOWING 1-10 OF 47 REFERENCES
Annotation Artifacts in Natural Language Inference Data
TLDR: Shows that a simple text-categorization model can correctly classify the hypothesis alone in about 67% of SNLI and 53% of MultiNLI, and that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes.
Learning from Failure: Training Debiased Classifier from Biased Classifier
TLDR: Intentionally trains the first network to be biased by repeatedly amplifying its "prejudice", then debiases the training of the second network by focusing on samples that go against the biased network's prejudice.
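A minimal sketch of the two-network scheme described above: the first network is trained with generalized cross-entropy (GCE) so it latches onto easy, bias-aligned cues, and its per-sample loss is used to upweight bias-conflicting samples for the second network. The hyperparameter q and the weighting rule follow the common formulation but are illustrative here:

import torch
import torch.nn.functional as F

def gce_loss(logits, y, q=0.7):
    # Generalized cross-entropy: emphasizes samples the model already
    # finds easy, amplifying the network's "prejudice".
    p = F.softmax(logits, dim=-1).gather(1, y[:, None]).squeeze(1)
    return ((1.0 - p.clamp(min=1e-8) ** q) / q).mean()

def debias_step(biased_net, debiased_net, x, y):
    ce_b = F.cross_entropy(biased_net(x), y, reduction="none")
    ce_d = F.cross_entropy(debiased_net(x), y, reduction="none")
    # Samples the biased net gets wrong (high ce_b) receive large weights,
    # so the second network focuses on bias-conflicting examples.
    w = ce_b / (ce_b + ce_d + 1e-8)
    return (w.detach() * ce_d).mean()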
Underspecification Presents Challenges for Credibility in Modern Machine Learning
TLDR: Shows that underspecification appears in a wide variety of practical ML pipelines and argues that it must be explicitly accounted for in any modeling pipeline intended for real-world deployment.
Environment Inference for Invariant Learning
TLDR: Proposes EIIL, a general framework for domain-invariant learning that incorporates environment inference to directly infer partitions that are maximally informative for downstream invariant learning, and establishes connections between EIIL and algorithmic fairness.
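A minimal sketch of environment inference, assuming fixed (detached) logits from a trained reference model: soft per-example environment weights are optimized to maximize an IRMv1-style gradient penalty, and the hardened split is then handed to an invariant-learning method. The two-environment setup, penalty form, and optimizer settings are illustrative:

import torch
import torch.nn.functional as F

def infer_environments(logits, y, steps=500, lr=0.01):
    # logits: detached outputs of the trained reference model.
    q = torch.zeros(len(y), requires_grad=True)     # soft assignment logits
    scale = torch.tensor(1.0, requires_grad=True)   # IRMv1 dummy multiplier
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(steps):
        losses = F.cross_entropy(logits * scale, y, reduction="none")
        w = torch.sigmoid(q)                        # P(env 1) per example
        penalty = 0.0
        for env_w in (w, 1.0 - w):
            risk = (env_w * losses).sum() / env_w.sum()
            (grad,) = torch.autograd.grad(risk, scale, create_graph=True)
            penalty = penalty + grad.pow(2)
        opt.zero_grad()
        (-penalty).backward()                       # ascend the penalty
        opt.step()
    return (torch.sigmoid(q) > 0.5).long()          # hardened environments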
Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective
TLDR: Introduces WCST-ML, a training setup with several shortcut cues, showing that model bias leans toward simple cues such as color and ethnicity; the preference for certain cues is explained via their Kolmogorov (descriptional) complexity: solutions corresponding to Kolmogorov-simple cues are abundant in the parameter space and are thus preferred by DNNs.
WILDS: A Benchmark of in-the-Wild Distribution Shifts
TLDR: Presents WILDS, a benchmark of in-the-wild distribution shifts spanning diverse data modalities and applications, intended to encourage the development of general-purpose methods that are anchored to real-world distribution shifts and work well across different applications and problem settings.
Just Train Twice: Improving Group Robustness without Training Group Information
TLDR: Proposes a simple two-stage approach, JTT, that minimizes the loss over a reweighted dataset which upweights training examples misclassified at the end of a few steps of standard training, leading to improved worst-group performance.
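A minimal sketch of the two-stage upweighting scheme, assuming a standard supervised setup; train_fn is a hypothetical helper that runs ERM training and returns a fitted model, and the epoch counts and upweight factor are illustrative:

import numpy as np

def jtt(train_fn, X, y, id_epochs=1, final_epochs=50, lambda_up=20):
    # Stage 1: a short ERM run identifies hard, often bias-conflicting,
    # examples via its training-set errors.
    model = train_fn(X, y, epochs=id_epochs)
    errors = model.predict(X) != y
    # Stage 2: retrain from scratch on a dataset where each error
    # example is repeated lambda_up times.
    idx = np.concatenate([np.arange(len(y)),
                          np.repeat(np.where(errors)[0], lambda_up)])
    return train_fn(X[idx], y[idx], epochs=final_epochs)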
Shortcut Learning in Deep Neural Networks
TLDR: Develops a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning that improve robustness and transferability from the lab to real-world applications.
Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization
TLDR: Finds that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization, and introduces a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
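A minimal sketch of the stochastic group DRO update, assuming group labels are available in each minibatch: group weights q are updated multiplicatively toward high-loss groups, and the model minimizes the q-weighted loss. The step size eta is illustrative, and strong regularization (the paper's key finding) would be applied on top of this:

import torch
import torch.nn.functional as F

def group_dro_step(model, opt, x, y, g, q, eta=0.01):
    losses = F.cross_entropy(model(x), y, reduction="none")
    # Per-group average loss (empty groups contribute zero this step).
    group_losses = torch.stack([
        losses[g == k].mean() if (g == k).any() else losses.new_zeros(())
        for k in range(len(q))
    ])
    # Exponentiated-gradient ascent on the group weights q.
    q = q * torch.exp(eta * group_losses.detach())
    q = q / q.sum()
    # Descend on the worst-case (q-weighted) objective.
    loss = (q * group_losses).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return q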
Theory of Disagreement-Based Active Learning
TLDR: Describes recent advances in the understanding of the theoretical benefits of active learning, along with implications for the design of effective active learning algorithms.
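A minimal sketch of disagreement-based querying in the flavor this theory analyzes, assuming a finite committee of trained models standing in for the version space (the committee and predict interface are illustrative):

import numpy as np

def disagreement_region(committee, X_pool):
    # Query a label only where committee members disagree; points with
    # unanimous predictions are effectively labeled by consensus.
    preds = np.stack([m.predict(X_pool) for m in committee])
    disagree = (preds != preds[0]).any(axis=0)
    return np.where(disagree)[0]  # indices worth sending to the labeler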
...