Corpus ID: 204401816

Addressing Failure Prediction by Learning Model Confidence

@inproceedings{Corbire2019AddressingFP,
  title={Addressing Failure Prediction by Learning Model Confidence},
  author={Charles Corbi{\`e}re and Nicolas Thome and Avner Bar-Hen and Matthieu Cord and Patrick P{\'e}rez},
  booktitle={NeurIPS},
  year={2019}
}
Assessing reliably the confidence of a deep neural net and predicting its failures is of primary importance for the practical deployment of these models. In this paper, we propose a new target criterion for model confidence, corresponding to the True Class Probability (TCP). We show how using the TCP is more suited than relying on the classic Maximum Class Probability (MCP). We provide in addition theoretical guarantees for TCP in the context of failure prediction. Since the true class is by…
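As a concrete illustration of the two criteria contrasted in the abstract, here is a minimal sketch, assuming a standard softmax classifier in PyTorch; the function name and tensor layout are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def mcp_and_tcp(logits: torch.Tensor, labels: torch.Tensor):
    """Maximum Class Probability (MCP) vs. True Class Probability (TCP).

    logits: (N, K) raw network outputs; labels: (N,) ground-truth class indices.
    MCP is the softmax probability of the predicted class; TCP is the softmax
    probability assigned to the true class, which requires knowing the label.
    """
    probs = F.softmax(logits, dim=1)
    mcp = probs.max(dim=1).values                          # confidence of the predicted class
    tcp = probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # confidence of the true class
    return mcp, tcp
```

On a misclassified example TCP is low by construction, while MCP can stay high (overconfidence); since the true class is unknown at test time, the paper proposes to learn to predict TCP with an auxiliary confidence model.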
Citations

Confidence Estimation via Auxiliary Models
TLDR: A novel target criterion for model confidence, the true class probability (TCP), is introduced, and TCP is shown to offer better properties for confidence estimation than the standard maximum class probability (MCP).
Failure Prediction by Confidence Estimation of Uncertainty-Aware Dirichlet Networks
  • Theodoros Tsiligkaridis
  • Computer Science
  • ICASSP 2021 (IEEE International Conference on Acoustics, Speech and Signal Processing), 2021
TLDR: It is shown that uncertainty-aware deep Dirichlet neural networks provide an improved separation between the confidence of correct and incorrect predictions in the true class probability (TCP) metric.
Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model
TLDR: A new framework is presented that produces more reliable confidence scores for detecting misclassification errors: it calibrates the classifier's inherent confidence indicators and estimates the uncertainty of the calibrated scores using Gaussian processes.
On the Dark Side of Calibration for Modern Neural Networks
TLDR: It is found that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement, and that this calibration-refinement trade-off holds for the majority of calibration methods.
On Deep Neural Network Calibration by Regularization and its Impact on Refinement
TLDR: This paper presents a theoretically and empirically supported exposition reviewing refinement of a calibrated model, and finds that many calibration approaches, such as label smoothing and mixup, lower the usefulness of a DNN by degrading its refinement.
One Versus all for deep Neural Network Incertitude (OVNNI) quantification
TLDR: This work proposes a new technique to quantify epistemic uncertainty by mixing the predictions of an ensemble of DNNs trained to classify one class versus all others (OVA) with the predictions of a standard DNN trained for all-versus-all (AVA) classification.
Identifying Incorrect Classifications with Balanced Uncertainty
  • Bolian Li, Zige Zheng, Changqing Zhang
  • Computer Science
  • arXiv, 2021
TLDR: Distributional imbalance is introduced to model the imbalance in uncertainty estimation as two kinds of distribution bias, and the Balanced True Class Probability framework is proposed, which learns an uncertainty estimator with a novel Distributional Focal Loss (DFL) objective.
Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles
TLDR: A principled and practically effective framework that simultaneously addresses unsupervised accuracy estimation and error detection: it iteratively learns an ensemble of models to identify misclassified data points and performs self-training to improve the ensemble with the identified points.
SLURP: Side Learning Uncertainty for Regression Problems
TLDR: This work proposes SLURP, a generic approach for regression uncertainty estimation via a side learner that exploits the output and the intermediate representations generated by the main task model, and which has a low computational cost compared with existing solutions.
TRADI: Tracking deep neural network weight distributions
TLDR: This work introduces a method for tracking the trajectory of the weights during optimization that requires no changes to the architecture or the training procedure, and achieves competitive results while preserving computational efficiency compared with other popular approaches.

References

Showing 1-10 of 50 references
On Calibration of Modern Neural Networks
TLDR: It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and that on most datasets temperature scaling, a single-parameter variant of Platt scaling, is surprisingly effective at calibrating predictions.
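A rough sketch of the temperature-scaling idea summarized above: a single scalar T is fitted on held-out logits by minimizing the negative log-likelihood, then reused at test time. This is an illustrative implementation, not the reference code.

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor, steps: int = 200) -> float:
    """Learn a single temperature T > 0 minimizing NLL on validation logits."""
    log_t = torch.zeros(1, requires_grad=True)          # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=steps)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# At test time, calibrated probabilities are softmax(test_logits / T).
```

Because T rescales all logits equally, it changes confidence values but never the predicted class, which is why it calibrates without hurting accuracy.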
Selective Classification for Deep Neural Networks
TLDR: A method to construct a selective classifier from a trained neural network: the user sets a desired risk level and the classifier rejects instances as needed to guarantee that risk with high probability.
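A hedged sketch of the selective-classification idea above: choose a confidence threshold on held-out data so that the empirical error over accepted examples stays below a target, maximizing coverage. This simplified version ignores the paper's high-probability guarantee machinery; names and the NumPy formulation are illustrative.

```python
import numpy as np

def pick_threshold(confidence: np.ndarray, correct: np.ndarray, target_risk: float = 0.02) -> float:
    """Lowest confidence threshold whose accepted subset has empirical error <= target_risk.

    confidence: (N,) confidence scores; correct: (N,) booleans (prediction == label).
    Accept test examples whose confidence is >= the returned threshold.
    """
    order = np.argsort(-confidence)              # most confident first
    errors = (~correct[order]).cumsum()          # errors among the k most confident points
    n_accepted = np.arange(1, len(order) + 1)
    risk = errors / n_accepted                   # empirical risk at each coverage level
    ok = np.where(risk <= target_risk)[0]
    if len(ok) == 0:
        return np.inf                            # no coverage level meets the target: reject all
    k = ok[-1]                                   # largest coverage meeting the risk target
    return float(confidence[order[k]])
```

The quality of the confidence score (MCP, TCP, etc.) directly determines how much coverage survives at a given risk, which is what connects selective classification to failure prediction.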
Learning Confidence for Out-of-Distribution Detection in Neural Networks
TLDR: This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and addresses the problem of calibrating out-of-distribution detectors.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
TLDR: This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.
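A minimal sketch of the deep-ensembles recipe summarized above, assuming a list of independently trained PyTorch classifiers; the `models` argument is hypothetical and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, x):
    """Average the softmax outputs of independently trained networks.

    The entropy of the averaged distribution can serve as an uncertainty score;
    disagreement between members reflects model (epistemic) uncertainty.
    """
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return probs, entropy
```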
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR: A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
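A rough sketch of the MC-dropout procedure this summary refers to: keep dropout active at test time and average several stochastic forward passes. It assumes a PyTorch model containing `nn.Dropout` layers and is illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Monte Carlo dropout: sample predictions with dropout enabled at test time."""
    model.eval()
    for m in model.modules():                    # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    samples = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean = samples.mean(dim=0)                   # predictive distribution
    var = samples.var(dim=0)                     # per-class variance as an uncertainty proxy
    return mean, var
```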
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
TLDR: A novel training method for classifiers that helps out-of-distribution detection algorithms work better; its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
Relaxed Softmax: Efficient Confidence Auto-Calibration for Safe Pedestrian Detection
TLDR: This paper investigates learning, in an end-to-end manner, object detectors that are accurate while providing an unbiased estimate of the reliability of their own predictions, by proposing a modification of the standard softmax layer in which a probabilistic confidence score is explicitly pre-multiplied into the incoming activations to modulate confidence.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
TLDR: The proposed ODIN method is based on the observation that temperature scaling and small input perturbations can separate the softmax score distributions of in- and out-of-distribution images, allowing more effective detection; it consistently outperforms the baseline approach by a large margin.
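A hedged sketch of the ODIN recipe summarized above: a temperature-scaled softmax combined with a small input perturbation that increases the top softmax score. The temperature and epsilon values below are illustrative, not the tuned values from the paper.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature: float = 1000.0, eps: float = 0.0014):
    """Per-example score; in-distribution inputs tend to receive higher scores."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Gradient step on the input that increases the max temperature-scaled softmax probability.
    loss = F.cross_entropy(logits / temperature, logits.argmax(dim=1))
    loss.backward()
    x_pert = x - eps * x.grad.sign()
    with torch.no_grad():
        scores = F.softmax(model(x_pert) / temperature, dim=1).max(dim=1).values
    return scores
```

Thresholding this score gives the in- vs. out-of-distribution decision; the threshold, temperature and eps are chosen on held-out data.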
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
TLDR: A Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty is presented, which makes the loss more robust to noisy data and gives new state-of-the-art results on segmentation and depth-regression benchmarks.
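For regression, the aleatoric part described above is commonly implemented with a heteroscedastic loss in which the network predicts a mean and a log-variance per sample; a sketch under that assumption, not the paper's exact formulation:

```python
import torch

def heteroscedastic_loss(mean: torch.Tensor, log_var: torch.Tensor, target: torch.Tensor):
    """Aleatoric-uncertainty regression loss: the squared error is attenuated by a
    predicted per-sample variance, with a log-variance penalty so the model cannot
    simply predict infinite uncertainty everywhere."""
    precision = torch.exp(-log_var)
    return (0.5 * precision * (target - mean) ** 2 + 0.5 * log_var).mean()

# Epistemic uncertainty is estimated separately, e.g. by MC dropout or an ensemble,
# as the variance of the predicted means across stochastic samples.
```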
Unsupervised Domain Adaptation via Calibrating Uncertainties
TLDR: This work proposes a general Rényi entropy regularization framework and employs variational Bayes learning for reliable uncertainty estimation; calibrating the sample variance of the network parameters serves as a plug-in regularizer for training.