Multidimensional Uncertainty-Aware Evidential Neural Networks
@article{Hu2020MultidimensionalUE,
  title   = {Multidimensional Uncertainty-Aware Evidential Neural Networks},
  author  = {Yibo Hu and Yuzhe Ou and Xujiang Zhao and Jin-Hee Cho and Feng Chen},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2012.13676}
}
Traditional deep neural networks (NNs) have contributed significantly to state-of-the-art classification performance across various application domains. However, NNs do not consider the inherent uncertainty in the data associated with class probabilities, and misclassification under uncertainty can introduce high risk into real-world decision making (e.g., misclassifying objects on the road can lead to serious accidents). Unlike Bayesian NNs, which indirectly…
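To make the abstract's notion of multidimensional uncertainty concrete, here is a minimal NumPy sketch of two such dimensions, vacuity (uncertainty from lack of evidence) and dissonance (uncertainty from conflicting evidence), computed from the class evidence of an evidential classifier. It assumes the subjective-logic formulation these models typically use; the function name and exact expressions are illustrative, not the paper's implementation.

```python
import numpy as np

def dirichlet_uncertainties(evidence):
    """Vacuity and dissonance of a Dirichlet opinion, in the
    subjective-logic style used by evidential neural networks.
    `evidence` is a length-K vector of non-negative class evidence."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0          # Dirichlet parameters
    S = alpha.sum()                 # Dirichlet strength
    belief = evidence / S           # per-class belief masses
    vacuity = K / S                 # high when little evidence exists

    # Dissonance: uncertainty from balanced, conflicting evidence.
    diss = 0.0
    for i in range(K):
        others = np.delete(belief, i)
        denom = others.sum()
        if denom > 0:
            bal = 1.0 - np.abs(others - belief[i]) / (others + belief[i] + 1e-12)
            diss += belief[i] * (others * bal).sum() / denom
    return vacuity, diss

# One dominant class: low vacuity, low dissonance.
print(dirichlet_uncertainties([20, 0.1, 0.1]))
# Conflicting evidence raises dissonance; missing evidence raises vacuity.
print(dirichlet_uncertainties([10, 10, 0.1]))
print(dirichlet_uncertainties([0.1, 0.1, 0.1]))
```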
5 Citations
Prior and Posterior Networks: A Survey on Evidential Deep Learning Methods For Uncertainty Estimation
- Computer Science
- 2021
This comprehensive and extensive survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning, which allow uncertainty estimation in a single model and forward pass by parameterizing distributions over distributions.
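The "distributions over distributions" idea can be illustrated in a few lines: a Dirichlet output does not commit to a single categorical prediction but parameterizes a distribution whose samples are themselves categorical distributions. A minimal sketch with an arbitrary concentration vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# An evidential head outputs one Dirichlet per input; each draw from that
# Dirichlet is itself a categorical distribution over the K classes.
alpha = np.array([2.0, 5.0, 1.0])        # hypothetical concentration vector
samples = rng.dirichlet(alpha, size=4)   # four plausible predictions
print(samples)                           # each row sums to 1
print(alpha / alpha.sum())               # Dirichlet mean = expected prediction
# A sharper alpha (more evidence) makes the samples agree more closely,
# i.e. lower epistemic uncertainty; a small, flat alpha spreads them out.
```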
Uncertainty-Aware Reliable Text Classification
- Computer Science
- KDD
- 2021
This paper proposes an inexpensive framework that adopts both auxiliary outliers and pseudo off-manifold samples to train the model with prior knowledge of a certain class, so that OOD samples receive high vacuity, and demonstrates that the model based on evidential uncertainty outperforms its counterparts at detecting OOD examples.
ConfliBERT: A Pre-trained Language Model for Political Conflict and Violence
- Computer Science
- NAACL
- 2022
Results consistently show that ConfliBERT outperforms BERT when analyzing political violence and conflict.
A Survey on Uncertainty Reasoning and Quantification for Decision Making: Belief Theory Meets Deep Learning
- Computer Science
- ArXiv
- 2022
This survey paper discusses several popular belief theories and their core ideas for dealing with uncertainty causes and types and for quantifying them, along with discussions of their applicability in ML/DL and of three main approaches that leverage belief theories in deep neural networks.
Seed: Sound Event Early Detection Via Evidential Uncertainty
- Computer Science
- ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2022
A novel Polyphonic Evidential Neural Network is proposed that models the evidential uncertainty of the class probability with a Beta distribution to improve event detection performance, together with a backtrack inference method that utilizes both the forward and backward audio features of an ongoing event.
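A rough sketch of the Beta-based uncertainty this summary describes, assuming each (possibly overlapping) event class is modelled with its own Beta distribution over its occurrence probability. The helper name and the vacuity expression W/(alpha+beta) follow the usual subjective-logic convention for binomial opinions and may differ from the paper's exact parameterization:

```python
import numpy as np

def beta_vacuity(pos_evidence, neg_evidence, W=2.0):
    """Per-class vacuity when each event class has its own Beta
    distribution, Beta(alpha, beta) with alpha = pos_evidence + 1 and
    beta = neg_evidence + 1. Hypothetical helper for illustration."""
    alpha = np.asarray(pos_evidence, dtype=float) + 1.0
    beta = np.asarray(neg_evidence, dtype=float) + 1.0
    return W / (alpha + beta)   # high when little evidence has accumulated

# Early in an audio clip little evidence exists, so vacuity is high; it
# shrinks as frames accumulate evidence for or against each event class.
print(beta_vacuity([0.2, 8.0], [0.1, 0.5]))
```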
References
Uncertainty-Aware Deep Classifiers Using Generative Models
- Computer Science
- AAAI
- 2020
A novel neural network model is developed that can express both aleatoric and epistemic uncertainty, distinguishing the decision boundary from out-of-distribution regions of the feature space; for training, the model is incorporated into variational autoencoders and generative adversarial networks.
Towards neural networks that provably know when they don't know
- Computer Science
- ICLR
- 2020
This paper proposes a new approach to OOD detection that overcomes both problems and can be used with ReLU networks; it provides provably low-confidence predictions far away from the training data, as well as the first certificates for low-confidence predictions in a neighborhood of an out-distribution point.
Evidential Deep Learning to Quantify Classification Uncertainty
- Computer Science
- NeurIPS
- 2018
This work treats the predictions of a neural network as subjective opinions and learns, from data, the function that collects the evidence leading to these opinions via a deterministic neural network, achieving unprecedented success in detecting out-of-distribution queries and withstanding adversarial perturbations.
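A sketch of the closed-form expected mean-squared-error loss from that paper, where the network outputs non-negative evidence and the Dirichlet is Dir(alpha) with alpha = evidence + 1. The KL regularizer the paper adds is omitted here for brevity:

```python
import torch

def edl_mse_loss(evidence, y_onehot):
    """Expected MSE under Dir(alpha), alpha = evidence + 1 (Sensoy et
    al., 2018): squared-error term plus predictive-variance term,
    both available in closed form. KL regularizer omitted."""
    alpha = evidence + 1.0
    S = alpha.sum(dim=-1, keepdim=True)          # Dirichlet strength
    p_hat = alpha / S                            # expected probabilities
    err = (y_onehot - p_hat).pow(2)              # squared-error term
    var = p_hat * (1.0 - p_hat) / (S + 1.0)      # variance term
    return (err + var).sum(dim=-1).mean()

# Usage: the network outputs non-negative evidence, e.g. via softplus.
logits = torch.randn(4, 3)
evidence = torch.nn.functional.softplus(logits)
y = torch.nn.functional.one_hot(torch.tensor([0, 2, 1, 0]), 3).float()
print(edl_mse_loss(evidence, y))
```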
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
- Computer Science
- ICML
- 2016
A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without increasing computational cost or sacrificing test accuracy.
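The method itself is easy to sketch: keep dropout sampling at test time and average several stochastic forward passes to approximate the posterior predictive. A minimal PyTorch version; the toy model and the entropy-based uncertainty summary are illustrative choices, not prescribed by the paper:

```python
import torch
import torch.nn as nn

model = nn.Sequential(              # toy classifier with dropout
    nn.Linear(20, 64), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, T=50):
    """Average T stochastic forward passes with dropout kept active
    (Monte Carlo dropout) to approximate the posterior predictive."""
    model.train()                   # keep dropout layers sampling
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(T)
        ])
    mean = probs.mean(0)            # predictive distribution
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)  # uncertainty
    return mean, entropy

x = torch.randn(5, 20)
mean, unc = mc_dropout_predict(model, x)
print(mean.argmax(-1), unc)
```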
Gradient-based learning applied to document recognition
- Computer Science
- Proc. IEEE
- 1998
This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task; convolutional neural networks are shown to outperform all other techniques.
Reading Digits in Natural Images with Unsupervised Feature Learning
- Computer Science
- 2011
A new benchmark dataset for research use is introduced, containing over 600,000 labeled digits cropped from Street View images, and variants of two recently proposed unsupervised feature learning methods are employed and found to be convincingly superior on this benchmark.
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
- Computer Science
- ICLR
- 2018
A novel training method for classifiers is proposed so that out-of-distribution detection algorithms can work better on them, and its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
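The core training idea can be sketched as a joint loss: cross-entropy on in-distribution data plus a KL term that pushes predictions on out-of-distribution samples (e.g. GAN-generated boundary samples) toward the uniform distribution. A hedged sketch; the weight `beta`, the batch shapes, and the random stand-in inputs are assumptions:

```python
import math
import torch
import torch.nn.functional as F

def confidence_loss(logits_in, labels_in, logits_out, beta=1.0):
    """Cross-entropy on in-distribution data plus KL(Uniform || p) on
    out-of-distribution samples, pushing the classifier toward maximal
    uncertainty outside the data distribution. `beta` is an assumed
    trade-off weight, not a value from the paper."""
    ce = F.cross_entropy(logits_in, labels_in)
    log_p_out = F.log_softmax(logits_out, dim=-1)
    K = logits_out.size(-1)
    # KL(U || p) = -log K - (1/K) * sum_k log p_k
    kl_uniform = (-log_p_out.mean(dim=-1) - math.log(K)).mean()
    return ce + beta * kl_uniform

logits_in, y = torch.randn(8, 10), torch.randint(0, 10, (8,))
logits_out = torch.randn(8, 10)  # stand-ins for GAN-generated boundary samples
print(confidence_loss(logits_in, y, logits_out))
```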
Implicit Weight Uncertainty in Neural Networks
- Computer Science
- ArXiv
- 2017
This work introduces Bayes by Hypernet (BbH), a new method of variational approximation that interprets hypernetworks as implicit distributions and achieves competitive accuracies and predictive uncertainties on MNIST and a CIFAR5 task, while being the most robust against adversarial attacks.
Deep Residual Learning for Image Recognition
- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
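The central building block is small enough to show directly: a residual block computes F(x) + x, so the stacked layers only need to learn a residual and identity mappings come for free. A minimal PyTorch sketch of the basic, non-downsampling block:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: the layers learn F(x) and the output is
    F(x) + x, so very deep networks stay easy to optimize."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # skip connection adds the identity

x = torch.randn(1, 16, 8, 8)
print(ResidualBlock(16)(x).shape)   # torch.Size([1, 16, 8, 8])
```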
Weight Uncertainty in Neural Networks
- Computer Science
- ArXiv
- 2015
This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
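Bayes by Backprop replaces point weights with a factorized Gaussian posterior learned via the reparameterization trick. A minimal single-layer PyTorch sketch, using a standard-normal prior instead of the paper's scale-mixture prior; the KL weight 1e-3 is an arbitrary illustrative choice:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over weights.
    Each forward pass samples w = mu + sigma * eps (reparameterization
    trick); the KL to a standard-normal prior is stored for the ELBO."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # sampled weights
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
        self.kl = (sigma.pow(2) + self.mu.pow(2) - 1).mul(0.5).sum() \
                  - sigma.log().sum()
        return x @ w.t()

layer = BayesLinear(20, 3)
out = layer(torch.randn(4, 20))
loss = F.cross_entropy(out, torch.tensor([0, 1, 2, 0])) + 1e-3 * layer.kl
print(loss)
```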