# Loss Estimators Improve Model Generalization

```bibtex
@article{Narayanaswamy2021LossEI,
  title={Loss Estimators Improve Model Generalization},
  author={Vivek Sivaraman Narayanaswamy and Jayaraman J. Thiagarajan and Deepta Rajan and Andreas Spanias},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.03788}
}
```
• Published 5 March 2021
With increased interest in adopting AI methods for clinical diagnosis, a vital step towards safe deployment of such tools is to ensure that the models not only produce accurate predictions but also do not generalize to data regimes where the training data provide no meaningful evidence. Existing approaches for ensuring the distribution of model predictions to be similar to that of the true distribution rely on explicit uncertainty estimators that are inherently hard to calibrate. In this paper…
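The abstract is cut off before the method is described, but the central idea named in the title, a loss estimator, can be sketched as an auxiliary model trained to predict the primary model's per-sample loss, so that inputs with high predicted loss can be flagged rather than trusted. The sketch below is an illustrative assumption (a logistic-regression primary model, a ridge-regression loss estimator, and synthetic data), not the paper's actual architecture or training procedure:

```python
# Hedged sketch: an auxiliary "loss estimator" g predicts the per-sample
# loss of a primary model f; at test time, g's output can flag inputs
# where f's prediction should not be trusted. All model and data choices
# here are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data.
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=500) > 0).astype(float)

# Primary model f: logistic regression fit by gradient descent.
w = np.zeros(5)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Per-sample cross-entropy losses of f on the training data.
p = 1.0 / (1.0 + np.exp(-(X @ w)))
losses = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Loss estimator g: ridge regression from inputs to f's per-sample loss.
lam = 1e-2
A = X.T @ X + lam * np.eye(5)
v = np.linalg.solve(A, X.T @ losses)
predicted_loss = X @ v

# Samples with high predicted loss are candidates for abstention.
threshold = np.quantile(predicted_loss, 0.9)
flagged = predicted_loss > threshold
print(int(flagged.sum()))
```

In this sketch the estimator only sees the raw inputs; a stronger variant could condition on the primary model's internal features instead.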
