Learn-By-Calibrating: Using Calibration As A Training Objective

@article{Thiagarajan2020LearnByCalibratingUC,
  title={Learn-By-Calibrating: Using Calibration As A Training Objective},
  author={Jayaraman J. Thiagarajan and Bindya Venkatesh and Deepta Rajan},
  journal={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2020},
  pages={3632-3636}
}
Calibration error is commonly adopted for evaluating the quality of uncertainty estimators in deep neural networks. In this paper, we argue that such a metric is highly beneficial for training predictive models, even when we do not explicitly measure the uncertainties. This is conceptually similar to heteroscedastic neural networks that produce variance estimates for each prediction, with the key difference that we do not place a Gaussian prior on the predictions. We propose a novel algorithm… 
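The excerpt ends before the algorithm itself is described, so the following is only a minimal sketch of one way to turn interval calibration into a differentiable training objective, not the paper's exact method; the pinball-loss formulation, the `alpha` coverage target, and all names here are illustrative assumptions.

```python
import torch

def pinball(y, q, tau):
    # Quantile (pinball) loss: minimized when q is the tau-quantile of y | x.
    e = y - q
    return torch.maximum(tau * e, (tau - 1.0) * e).mean()

def interval_calibration_loss(y, mu, delta, alpha=0.9):
    # Treat [mu - delta, mu + delta] as a central alpha-coverage interval and
    # fit its endpoints as conditional quantiles. `alpha` is an assumed target
    # coverage level; `delta` must be non-negative (e.g. via a softplus head).
    lo_tau, hi_tau = (1.0 - alpha) / 2.0, (1.0 + alpha) / 2.0
    return pinball(y, mu - delta, lo_tau) + pinball(y, mu + delta, hi_tau)
```

Minimizing the two pinball terms drives the empirical coverage of [mu - delta, mu + delta] toward alpha without assuming a Gaussian likelihood, which matches the abstract's motivation of avoiding an explicit Gaussian prior.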
Training Calibration-based Counterfactual Explainers for Deep Learning Models in Medical Image Analysis
TLDR
TraCE (Training Calibration-based Explainers), a counterfactual generation approach for deep models in medical image analysis, which utilizes pre-trained generative models and a novel uncertainty-based interval calibration strategy for synthesizing hypothesis-driven explanations, is presented.
Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models
TLDR
This paper argues that these two objectives of characterizing model reliability and enabling rigorous introspection of model behavior are not necessarily disparate and proposes to utilize prediction calibration to meet both objectives.
Designing accurate emulators for scientific processes using calibration-driven deep models
TLDR
This work proposes Learn-by-Calibrating, a novel deep learning approach based on interval calibration for designing emulators that can effectively recover the inherent noise structure without any explicit priors, and demonstrates the efficacy of this approach in providing high-quality emulators, when compared to widely-adopted loss function choices, even in small-data regimes.

References

Showing 1-10 of 15 references
Accurate Uncertainties for Deep Learning Using Calibrated Regression
TLDR
This work proposes a simple procedure for calibrating any regression algorithm, and finds that it consistently outputs well-calibrated credible intervals while improving performance on time series forecasting and model-based reinforcement learning tasks.
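The recalibration recipe in this paper is simple enough to sketch: on a held-out set, map each predicted CDF value F_i(y_i) to its empirical frequency with isotonic regression, then compose the fitted map with the model's CDF at test time. The sketch below assumes Gaussian predictive distributions for concreteness; the procedure itself applies to any model that outputs a CDF.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(mu, sigma, y):
    # mu, sigma: the model's Gaussian predictions on a held-out set; y: targets.
    p = norm.cdf(y, loc=mu, scale=sigma)              # predicted CDF values
    p_hat = np.array([(p <= pi).mean() for pi in p])  # empirical frequencies
    return IsotonicRegression(out_of_bounds="clip").fit(p, p_hat)

def calibrated_cdf(recal, mu, sigma, y_query):
    # Composition R(F(y)): the recalibrated probability that Y <= y_query.
    return recal.predict(norm.cdf(y_query, loc=mu, scale=sigma))
```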
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
TLDR
This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
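A deep ensemble for regression can be sketched in a few lines, assuming each member is a heteroscedastic network that outputs a mean and a log-variance as in the paper; member predictions are combined as a uniform Gaussian mixture.

```python
import torch

def ensemble_predict(models, x):
    # Each member returns (mean, log_variance) for a heteroscedastic Gaussian.
    mus, vars_ = [], []
    for m in models:
        mu, log_var = m(x)
        mus.append(mu)
        vars_.append(log_var.exp())
    mus, vars_ = torch.stack(mus), torch.stack(vars_)
    mean = mus.mean(0)
    # Uniform-mixture variance: mean aleatoric variance + member disagreement.
    var = vars_.mean(0) + mus.pow(2).mean(0) - mean.pow(2)
    return mean, var
```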
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
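The resulting test-time procedure is short enough to sketch: leave dropout stochastic and treat T forward passes as samples from the approximate posterior. The function name and T are illustrative.

```python
import torch

def mc_dropout_predict(model, x, T=50):
    # Leave dropout stochastic at test time. (In models that also contain
    # batch norm, switch only the dropout layers to train mode instead.)
    model.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(T)])
    return samples.mean(0), samples.var(0)  # predictive mean and variance
```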
Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing
In Hezaveh et al. 2017 we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing…
Concrete Dropout
TLDR
This work proposes a new dropout variant that gives improved performance and better-calibrated uncertainties; it uses a continuous relaxation of dropout's discrete masks to allow automatic tuning of the dropout probability in large models and, as a result, faster experimentation cycles.
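A hedged sketch of that relaxed mask, following the concrete-distribution form used in the paper; the initial probability and temperature values below are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class ConcreteDropout(nn.Module):
    # init_p and temperature are illustrative values, not the paper's settings.
    def __init__(self, init_p=0.1, temperature=0.1):
        super().__init__()
        self.p_logit = nn.Parameter(torch.logit(torch.tensor(init_p)))
        self.t = temperature

    def forward(self, x, eps=1e-7):
        p = torch.sigmoid(self.p_logit)
        u = torch.rand_like(x)
        # Relaxed Bernoulli "drop" variable: differentiable with respect to p.
        drop = torch.sigmoid(
            (torch.log(p + eps) - torch.log(1.0 - p + eps)
             + torch.log(u + eps) - torch.log(1.0 - u + eps)) / self.t)
        return x * (1.0 - drop) / (1.0 - p)  # inverted-dropout rescaling
```

The full method additionally places a dropout regularization term on p (a weight-norm term plus the dropout entropy), omitted here for brevity.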
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
TLDR
A Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty is presented, which makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
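For regression, the paper's learned loss attenuation takes a simple form when the network predicts s = log(sigma^2) alongside the mean; a minimal sketch:

```python
import torch

def heteroscedastic_loss(y, mu, s):
    # s = log(sigma^2): residuals are down-weighted where predicted aleatoric
    # uncertainty is high, and the 0.5 * s term prevents s from growing freely.
    return (0.5 * torch.exp(-s) * (y - mu) ** 2 + 0.5 * s).mean()
```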
Practical Confidence and Prediction Intervals
TLDR
This work proposes a new method for computing prediction intervals that handles extrapolation and interpolation better than existing methods in regimes with limited data, and yields prediction intervals whose actual confidence levels are closer to the desired confidence levels.
Leveraging uncertainty information from deep neural networks for disease detection
TLDR
Dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images are evaluated; they are shown to capture uncertainty better than straightforward alternatives, and uncertainty-informed decision referral is shown to improve diagnostic performance.
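Decision referral as evaluated in this kind of study can be sketched simply: rank cases by predictive uncertainty, refer the most uncertain fraction to an expert, and score the model on the remainder. The function name and referral fraction below are illustrative.

```python
import numpy as np

def referral_accuracy(y_true, y_pred, uncertainty, refer_frac=0.2):
    # Keep the most certain (1 - refer_frac) of cases; refer the rest.
    keep = np.argsort(uncertainty)[: int(len(y_true) * (1.0 - refer_frac))]
    return (y_true[keep] == y_pred[keep]).mean()
```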
Uncertainty in Deep Learning
TLDR
This work develops tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation, and develops the theory for such tools.
Greedy function approximation: A gradient boosting machine.
Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions…
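The stagewise procedure the abstract alludes to, specialized to squared-error loss (where the negative gradient is simply the residual), can be sketched as follows; names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_stages=100, lr=0.1, max_depth=3):
    f0 = y.mean()                      # F_0: the best constant model
    f, trees = np.full(len(y), f0), []
    for _ in range(n_stages):
        residual = y - f               # negative gradient of 0.5 * (y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        f = f + lr * tree.predict(X)   # shrunken stagewise update
        trees.append(tree)
    return f0, trees

def predict_gbm(f0, trees, X, lr=0.1):
    return f0 + lr * sum(t.predict(X) for t in trees)
```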