Two-stage Modeling for Prediction with Confidence

@article{Chen2022TwostageMF,
  title={Two-stage Modeling for Prediction with Confidence},
  author={Dangxing Chen},
  journal={2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)},
  year={2022},
  pages={1-5}
}
  • Dangxing Chen
  • Published 19 September 2022
  • Computer Science
  • 2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)
Neural networks have been very successful in a wide variety of applications. However, it has recently been observed that their performance generalizes poorly under distributional shift. Several efforts have been made to identify potential out-of-distribution inputs. Although the existing literature has made significant progress on image and textual data, finance has been overlooked. The aim of this paper is to investigate the… 

Towards Consistent Predictive Confidence through Fitted Ensembles

This paper introduces a separable concept learning framework to realistically measure the performance of classifiers in the presence of OOD examples, and presents a new strong baseline for more consistent predictive confidence in deep models, called fitted ensembles.

Sensitivity based Neural Networks Explanations

A way to assess the relative importance of a neural network's input features, based on the sensitivity of the model output with respect to its input, is presented and implemented in an open-source Python package that allows users to easily generate and visualize explanations for their neural networks.
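As a rough illustration of this idea (not the paper's actual package), the sketch below scores each feature of a PyTorch model by the mean absolute gradient of the output with respect to that feature; the model and data are placeholders.

```python
# Hypothetical sketch: score each input feature by the mean absolute gradient
# of the model output with respect to that feature.
import torch

def sensitivity_importance(model, inputs):
    """inputs: (n_samples, n_features) tensor; returns one score per feature."""
    inputs = inputs.detach().clone().requires_grad_(True)
    output = model(inputs).sum()               # sum so one backward pass covers all rows
    grads, = torch.autograd.grad(output, inputs)
    return grads.abs().mean(dim=0)             # average |d output / d feature_j|
```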

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

A large-scale benchmark of existing state-of-the-art methods on classification problems is presented, evaluating the effect of dataset shift on accuracy and calibration and finding that traditional post-hoc calibration does indeed fall short, as do several other previous methods.

Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection

This work derives an explicit expression for the pNML and its generalization error, denoted the regret, for a single-layer neural network (NN), and describes how to efficiently apply the derived pNML regret to any pretrained deep NN by employing the explicit pNML for the last layer, followed by the softmax function.

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

The proposed ODIN method is based on the observation that temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
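A minimal sketch of this style of detector, assuming a PyTorch classifier and treating the temperature T and perturbation size eps as tuned hyperparameters (the values below are placeholders):

```python
# Hypothetical sketch of ODIN-style OOD scoring; T and eps are assumed
# hyperparameters tuned on validation data.
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    x = x.detach().clone().requires_grad_(True)
    logits = model(x) / T
    # FGSM-like preprocessing: nudge the input to increase the max softmax score.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_pert = x - eps * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values  # low values suggest out-of-distribution inputs
```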

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
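A minimal sketch of the test-time side of this recipe, assuming `models` is a list of classifiers over the same classes trained independently from different random initializations:

```python
# Minimal sketch (assumed setup): M independently trained classifiers whose
# softmax outputs are averaged to form the predictive distribution.
import torch
import torch.nn.functional as F

def ensemble_predict(models, x):
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models])  # (M, N, C)
    mean_probs = probs.mean(dim=0)             # averaged predictive distribution
    confidence = mean_probs.max(dim=1).values  # usable as an uncertainty score
    return mean_probs, confidence
```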

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.

Why M Heads are Better than One: Training a Diverse Ensemble of Deep Networks

It is demonstrated that TreeNets can improve ensemble performance and that diverse ensembles can be trained end-to-end under a unified loss, achieving significantly higher "oracle" accuracies than classical ensembles.
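As an illustration of the "oracle" metric referenced here (my reading, not code from the paper): an example counts as correct if at least one ensemble member predicts its label correctly.

```python
# Hypothetical helper: "oracle" accuracy counts an example as correct if any
# of the M ensemble members classifies it correctly.
import torch

def oracle_accuracy(all_logits, labels):
    """all_logits: (M, N, C) logits from M members; labels: (N,) int tensor."""
    preds = all_logits.argmax(dim=-1)                        # (M, N)
    any_correct = (preds == labels.unsqueeze(0)).any(dim=0)  # (N,)
    return any_correct.float().mean().item()
```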

Recent Advances in Open Set Recognition: A Survey

This paper provides a comprehensive survey of existing open set recognition techniques, covering related definitions, model representations, datasets, evaluation criteria, and algorithm comparisons, and highlights the limitations of existing approaches while pointing out promising directions for subsequent research.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
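A minimal sketch of the core building block, assuming equal input and output channel counts (the paper's projection shortcut for dimension changes is omitted):

```python
# Minimal sketch of a residual block with an identity shortcut.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the layers learn a residual F(x); the block outputs F(x) + x
```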