Corpus ID: 232134901

Loss Estimators Improve Model Generalization

@article{Narayanaswamy2021LossEI,
  title={Loss Estimators Improve Model Generalization},
  author={Vivek Sivaraman Narayanaswamy and Jayaraman J. Thiagarajan and Deepta Rajan and Andreas Spanias},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.03788}
}
With increased interest in adopting AI methods for clinical diagnosis, a vital step towards safe deployment of such tools is to ensure that the models not only produce accurate predictions but also avoid generalizing to data regimes where the training data provide no meaningful evidence. Existing approaches for ensuring that the distribution of model predictions matches the true distribution rely on explicit uncertainty estimators, which are inherently hard to calibrate. In this paper… 
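
The title points to pairing a predictive model with a learned loss estimator. A minimal sketch of that general pattern, assuming a PyTorch setup; all module names, shapes, and the MSE coupling are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: a primary classifier paired with an auxiliary network
# that learns to predict the classifier's per-sample loss from the same input.
classifier = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_estimator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

opt = torch.optim.Adam(
    list(classifier.parameters()) + list(loss_estimator.parameters()), lr=1e-3
)

def train_step(x, y):
    logits = classifier(x)
    per_sample_loss = F.cross_entropy(logits, y, reduction="none")  # observed loss
    predicted_loss = loss_estimator(x).squeeze(-1)                  # estimated loss
    # Task loss plus a regression term tying the estimator to the observed loss;
    # detach() keeps gradients from flowing through the regression target.
    total = per_sample_loss.mean() + F.mse_loss(predicted_loss, per_sample_loss.detach())
    opt.zero_grad()
    total.backward()
    opt.step()
    return total.item()
```

At test time, a large predicted loss flags inputs on which the model's prediction should not be trusted.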


References

Showing 1-10 of 17 references

Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors

A novel approach for building calibrated estimators is developed that uses separate models for prediction and interval estimation, and poses a bi-level optimization problem that allows the former to leverage estimates from the latter through an "uncertainty matching" strategy.

A Benchmark of Medical Out of Distribution Detection

Despite methods yielding good results on some categories of out-of-distribution samples, they fail to recognize images close to the training distribution, and a simple binary classifier on the feature representation achieves the best accuracy and AUPRC on average.

Accurate Uncertainties for Deep Learning Using Calibrated Regression

This work proposes a simple procedure for calibrating any regression algorithm, and finds that it consistently outputs well-calibrated credible intervals while improving performance on time series forecasting and model-based reinforcement learning tasks.
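
The recipe admits a compact sketch: evaluate each predicted CDF at its observed target on a held-out set, then fit a monotone map from nominal to empirical levels. A minimal version, assuming scikit-learn; the isotonic fit stands in for the paper's recalibration step:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Sketch of post-hoc recalibration for regression: given p_i = F_i(y_i), the
# predicted CDF evaluated at the observed target for each calibration point,
# learn a monotone map R so that R(p) tracks the empirical frequency of p_i <= p.
def fit_recalibrator(predicted_cdf_values):
    p = np.sort(np.asarray(predicted_cdf_values, dtype=float))
    empirical = np.arange(1, len(p) + 1) / len(p)  # empirical CDF of the p_i
    return IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip").fit(p, empirical)

# Usage: a nominal 90% credible level becomes the adjusted level R(0.9).
# recal = fit_recalibrator(p_values); adjusted = recal.predict([0.9])[0]
```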

On Calibration of Modern Neural Networks

It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and that on most datasets, temperature scaling, a single-parameter variant of Platt scaling, is surprisingly effective at calibrating predictions.
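
Temperature scaling is simple enough to show in full. A minimal sketch assuming PyTorch and held-out validation logits; variable names are illustrative:

```python
import torch

# Fit a single temperature T on validation logits by minimizing NLL (Guo et al.).
# Optimizing log T keeps T positive; LBFGS converges quickly for one parameter.
def fit_temperature(val_logits, val_labels):
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# Usage: T = fit_temperature(logits, labels); calibrated = (logits / T).softmax(-1)
```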

DEUP: Direct Epistemic Uncertainty Prediction

This work proposes a principled framework for directly estimating the excess risk by learning a secondary predictor for the generalization error and subtracting an estimate of aleatoric uncertainty (intrinsic unpredictability); the framework is particularly interesting in interactive learning environments.
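
In code the decomposition is a single subtraction. A conceptual sketch, with both estimators as assumed stand-ins rather than the paper's architecture:

```python
import torch
import torch.nn as nn

# Conceptual sketch of the DEUP decomposition: a secondary predictor regresses the
# main model's observed loss (total risk); subtracting an aleatoric-noise estimate
# leaves the excess (epistemic) part. Both modules are illustrative stand-ins.
error_predictor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
aleatoric_estimator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

def epistemic_uncertainty(x):
    total_risk = error_predictor(x).squeeze(-1)   # predicted generalization error
    noise = aleatoric_estimator(x).squeeze(-1)    # intrinsic unpredictability
    # Clamp the estimate, since excess risk is nonnegative in theory.
    return (total_risk - noise).clamp_min(0.0)
```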

Learning for Single-Shot Confidence Calibration in Deep Neural Networks Through Stochastic Inferences

A novel variance-weighted confidence-integrated loss function is designed, composed of two cross-entropy terms, one with respect to the ground truth and one with respect to the uniform distribution, balanced by the variance of stochastic prediction scores.
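
A rough sketch of that composition, assuming T stochastic forward passes (e.g., MC dropout); the variance normalization here is a simplification of the paper's exact weighting:

```python
import torch
import torch.nn.functional as F

# Sketch: blend cross-entropy to the label with cross-entropy to the uniform
# distribution, weighting each sample by the variance of its stochastic predictions.
def variance_weighted_ci_loss(stochastic_logits, labels):
    # stochastic_logits: (T, B, C) logits from T stochastic forward passes
    probs = stochastic_logits.softmax(dim=-1)                  # (T, B, C)
    log_mean_probs = probs.mean(dim=0).clamp_min(1e-12).log()  # (B, C)
    alpha = probs.var(dim=0).mean(dim=-1)                      # per-sample variance, (B,)
    alpha = alpha / (alpha.max() + 1e-12)                      # squash to [0, 1]
    ce_label = F.nll_loss(log_mean_probs, labels, reduction="none")
    ce_uniform = -log_mean_probs.mean(dim=-1)  # cross-entropy against a uniform target
    return ((1 - alpha) * ce_label + alpha * ce_uniform).mean()
```

High-variance (uncertain) samples are pushed toward the uniform distribution, so confident predictions are reserved for low-variance inputs.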

Artificial intelligence in radiology

This review establishes a general understanding of AI methods, particularly those pertaining to image-based tasks, and explores how these methods could impact multiple facets of radiology, with a focus on applications in oncology.

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

The proposed ODIN method is based on the observation that temperature scaling and small input perturbations can separate the softmax score distributions of in- and out-of-distribution images, allowing more effective detection; it consistently outperforms the baseline approach by a large margin.
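
The two ingredients fit in a short scoring function. A sketch assuming a PyTorch classifier, with the temperature and epsilon values as tunable hyperparameters:

```python
import torch

# Sketch of ODIN scoring: scale logits by a large temperature and nudge the input
# toward a higher max-softmax score before rescoring; in-distribution inputs
# typically respond more strongly, widening the score gap.
def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    x = x.detach().clone().requires_grad_(True)
    log_probs = (model(x) / temperature).log_softmax(dim=-1)
    # Gradient of -log max-softmax w.r.t. the input defines the perturbation.
    (-log_probs.max(dim=-1).values.sum()).backward()
    x_perturbed = (x - epsilon * x.grad.sign()).detach()
    with torch.no_grad():
        scores = (model(x_perturbed) / temperature).softmax(dim=-1).max(dim=-1).values
    return scores  # threshold this score to flag out-of-distribution inputs
```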

On Out-of-Distribution Detection Algorithms with Deep Neural Skin Cancer Classifiers

An adaptation of the Gram-matrix algorithm for out-of-distribution detection is proposed that generally performs better and faster than the original algorithm on the considered skin cancer classification task.
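
The underlying Gram-matrix score is straightforward to sketch. A simplified single-layer, first-order version; the original method spans many layers and matrix powers, and the adaptation above modifies it further:

```python
import torch

# Simplified sketch of Gram-matrix OOD scoring: build channel-correlation Gram
# matrices from a layer's feature maps, record entrywise min/max on training data,
# and score test inputs by how far their entries fall outside those bounds.
def gram(features):                      # features: (B, C, H, W)
    flat = features.flatten(2)           # (B, C, H*W)
    return flat @ flat.transpose(1, 2)   # (B, C, C) channel correlations

def deviation(test_gram, train_min, train_max):
    below = (train_min - test_gram).clamp_min(0) / train_min.abs().clamp_min(1e-6)
    above = (test_gram - train_max).clamp_min(0) / train_max.abs().clamp_min(1e-6)
    return (below + above).sum(dim=(1, 2))  # larger deviation = more likely OOD
```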

Artificial Intelligence in Dermatology: A Primer.