Corpus ID: 227151564

Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs

@article{Wang2021UncertaintyEA,
  title={Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs},
  author={Cheng Wang and Carolin Lawrence and Mathias Niepert},
  journal={ArXiv},
  year={2021},
  volume={abs/2011.12010}
}
Uncertainty quantification is crucial for building reliable and trustable machine learning systems. We propose to estimate uncertainty in recurrent neural networks (RNNs) via stochastic discrete state transitions over recurrent timesteps. The uncertainty of the model can be quantified by running a prediction several times, each time sampling from the recurrent state transition distribution, leading to potentially different results if the model is uncertain. Alongside uncertainty quantification… 
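The idea described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration (my own naming, not the authors' code) of an RNN cell whose hidden state is snapped to one of a finite set of learnable states via a Gumbel-softmax sample, plus a prediction loop that repeats the stochastic forward pass and reads the spread of the outputs as uncertainty; the classification head is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticFiniteStateCell(nn.Module):
    """Hypothetical cell: a GRU update followed by a stochastic transition
    to one of `num_states` learnable discrete states (Gumbel-softmax)."""

    def __init__(self, input_size, hidden_size, num_states, tau=1.0):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.centroids = nn.Parameter(torch.randn(num_states, hidden_size))
        self.tau = tau

    def forward(self, x, h):
        h = self.cell(x, h)
        logits = h @ self.centroids.t()                 # affinity to each state
        sample = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        return sample @ self.centroids                  # jump to the sampled state

def predict_with_uncertainty(cell, head, x_seq, num_samples=10):
    """Repeat the stochastic forward pass; disagreement across runs
    quantifies the model's uncertainty."""
    preds = []
    for _ in range(num_samples):
        h = x_seq.new_zeros(x_seq.size(1), cell.cell.hidden_size)
        for x_t in x_seq:                               # x_seq: (time, batch, feat)
            h = cell(x_t, h)
        preds.append(F.softmax(head(h), dim=-1))
    preds = torch.stack(preds)                          # (samples, batch, classes)
    return preds.mean(0), preds.std(0)                  # mean prediction, spread
```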
Citations

Transformer Uncertainty Estimation with Hierarchical Stochastic Attention
TLDR
This work proposes a novel way to enable transformers to have the capability of uncertainty estimation and retain the original predictive performance by learning a hierarchical stochastic self-attention that attends to values and a set of learnable centroids.
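As a rough illustration of the centroid idea, a single attention head can snap its values to a small set of learnable centroids with a Gumbel-softmax sample, so that repeated forward passes disagree when the model is uncertain. This is a sketch under my own assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticCentroidAttention(nn.Module):
    """Hypothetical single-head sketch: values attend stochastically to
    learnable centroids before standard scaled dot-product attention."""

    def __init__(self, dim, num_centroids, tau=1.0):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_centroids, dim))
        self.tau = tau

    def forward(self, q, k, v):
        logits = v @ self.centroids.t()                 # (batch, seq, centroids)
        v = F.gumbel_softmax(logits, tau=self.tau, hard=True) @ self.centroids
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return attn @ v
```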
Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection?
TLDR
Predictive uncertainty indeed helps achieve reliable malware detection in the presence of dataset shift, and approximate Bayesian methods are promising for calibrating and generalizing malware detectors to deal with dataset shift, but neither copes with adversarial evasion attacks.
Handling Long-Tail Queries with Slice-Aware Conversational Systems
TLDR
This paper explores the recent concept of slice-based learning (SBL) (Chen et al., 2019) to improve a baseline conversational skill-routing system on tail yet critical query traffic, and shows that the slice-aware model improves performance on tail intents while maintaining overall performance.

References

SHOWING 1-10 OF 65 REFERENCES
Accurate Uncertainties for Deep Learning Using Calibrated Regression
TLDR
This work proposes a simple procedure for calibrating any regression algorithm, and finds that it consistently outputs well-calibrated credible intervals while improving performance on time series forecasting and model-based reinforcement learning tasks.
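The recalibration step can be sketched in a few lines. Assuming, for illustration, a model with Gaussian predictive distributions, one fits a monotone map on held-out data so that predicted CDF levels match empirical frequencies; this is a sketch of the idea, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(mu, sigma, y):
    """Fit a monotone map R so that R(F(y)) matches empirical frequencies
    on a held-out set; F is a Gaussian predictive CDF here for simplicity."""
    p = norm.cdf(y, loc=mu, scale=sigma)            # predicted CDF at observed y
    order = np.argsort(p)
    empirical = np.arange(1, len(p) + 1) / len(p)   # observed frequency per level
    return IsotonicRegression(out_of_bounds="clip").fit(p[order], empirical)
```

A calibrated quantile at level p is then the model quantile whose predicted CDF value the fitted map sends to p.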
Calibrated Model-Based Deep Reinforcement Learning
TLDR
A simple way to augment any model-based reinforcement learning agent with a calibrated model is described, and it is suggested that doing so consistently improves planning, sample complexity, and exploration, and can improve the performance of model-based reinforcement learning with minimal computational and implementation overhead.
Probabilistic Recurrent State-Space Models
TLDR
This work proposes a novel model formulation and a scalable training algorithm based on doubly stochastic variational inference and Gaussian processes that allows one to fully capture the latent state temporal correlations in state-space models.
Sampling-free Uncertainty Estimation in Gated Recurrent Units with Applications to Normative Modeling in Neuroimaging
TLDR
This work shows how classical ideas from the literature on exponential families and probabilistic networks provide an excellent starting point for deriving uncertainty estimates in Gated Recurrent Units (GRUs) without the need for costly sampling-based estimation.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
TLDR
This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
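The recipe is simple enough to sketch: train several networks independently (different random seeds) and combine their predictive distributions, with disagreement between members signaling uncertainty. A hypothetical helper, assuming the list of trained models already exists:

```python
import torch
import torch.nn.functional as F

def ensemble_predict(models, x):
    """Deep-ensemble prediction: average softmax outputs of independently
    trained networks; the entropy of the average reflects total uncertainty."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
    mean = probs.mean(0)                                   # predictive distribution
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    return mean, entropy
```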
Learning Latent Dynamics for Planning from Pixels
TLDR
The Deep Planning Network (PlaNet) is proposed, a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space using a latent dynamics model with both deterministic and stochastic transition components.
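The transition model combining deterministic and stochastic paths can be sketched as follows; this is an illustrative PyTorch cell under my own naming and sizing, not the PlaNet code:

```python
import torch
import torch.nn as nn

class RSSMCell(nn.Module):
    """Minimal sketch of a recurrent state-space cell in the PlaNet style:
    a deterministic GRU path plus a stochastic Gaussian latent sampled from it."""

    def __init__(self, stoch, deter, action):
        super().__init__()
        self.gru = nn.GRUCell(stoch + action, deter)
        self.prior = nn.Linear(deter, 2 * stoch)

    def forward(self, s, a, h):
        h = self.gru(torch.cat([s, a], -1), h)           # deterministic transition
        mu, log_sigma = self.prior(h).chunk(2, -1)       # prior over next latent
        s = mu + log_sigma.exp() * torch.randn_like(mu)  # stochastic component
        return s, h
```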
Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
TLDR
Using the decomposition of uncertainty into aleatoric and epistemic components for decision-making purposes, a novel risk-sensitive criterion for reinforcement learning is defined to identify policies that balance expected cost, model bias, and noise aversion.
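The decomposition itself follows the law of total variance; a minimal sketch over M sampled models, with tensor shapes assumed to be (M, batch):

```python
import torch

def decompose_uncertainty(means, variances):
    """Split predictive variance over M posterior samples:
    total = E[Var(y | theta)] (aleatoric) + Var(E[y | theta]) (epistemic)."""
    aleatoric = variances.mean(0)      # expected within-model noise
    epistemic = means.var(0)           # spread of the per-model means
    return aleatoric, epistemic        # their sum is the total variance
```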
Co-evolving recurrent neurons learn deep memory POMDPs
TLDR
This work introduces a new neuroevolution algorithm called Hierarchical Enforced SubPopulations that simultaneously evolves networks at two levels of granularity: full networks and network components (neurons).
Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces
TLDR
This work proposes a new deep approach to Kalman filtering that can be learned directly end-to-end via backpropagation without additional approximations, and uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, avoiding computationally heavy, potentially unstable, and hard-to-backpropagate matrix inversions.
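With a factorized (diagonal-covariance) latent state and, for simplicity, an identity observation model, the Kalman update indeed reduces to elementwise scalar operations; a minimal sketch under those assumptions:

```python
import torch

def factorized_kalman_update(mu, var, obs, obs_var):
    """Elementwise Kalman update: with diagonal covariances the gain is a
    per-dimension scalar, so no matrix inversion is needed."""
    gain = var / (var + obs_var)            # per-dimension Kalman gain
    mu_post = mu + gain * (obs - mu)        # posterior mean
    var_post = (1.0 - gain) * var           # posterior variance
    return mu_post, var_post
```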
Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems
TLDR
This work develops a model class and Bayesian inference algorithms that not only discover these dynamical units but also, by learning how transition probabilities depend on observations or continuous latent states, explain their switching behavior.