Corpus ID: 195346668

Decomposition of Uncertainty for Active Learning and Reliable Reinforcement Learning in Stochastic Systems

Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft
Bayesian neural networks (BNNs) with latent variables are probabilistic models which can automatically identify complex stochastic patterns in the data. We study in these models a decomposition of predictive uncertainty into its epistemic and aleatoric components. We show how such a decomposition arises naturally in a Bayesian active learning scenario and develop a new objective for reliable reinforcement learning (RL) with an epistemic and aleatoric risk element. Our experiments illustrate the… 
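
The decomposition the abstract refers to is often written in the following entropy form (a standard presentation, not necessarily the paper's exact notation), where $q(\mathcal{W})$ denotes the approximate posterior over network weights:

```latex
% Total predictive uncertainty splits into an aleatoric and an epistemic term:
H\!\left[y \mid x\right]
  \;=\; \underbrace{\mathbb{E}_{q(\mathcal{W})}\!\big[\,H[y \mid x, \mathcal{W}]\,\big]}_{\text{aleatoric}}
  \;+\; \underbrace{I\!\left(y ;\, \mathcal{W} \mid x\right)}_{\text{epistemic}}
```

The first term is the noise the model expects even with known weights; the mutual-information term vanishes as the posterior over weights concentrates, which is what makes it useful as an active-learning and risk signal.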

Scaling Active Inference

This work presents a working implementation of active inference that applies to high-dimensional tasks, with proof-of-principle results demonstrating efficient exploration and an order of magnitude increase in sample efficiency over strong model-free baselines.

Impact of Parameter Sparsity on Stochastic Gradient MCMC Methods for Bayesian Deep Learning

This paper uses stochastic gradient MCMC methods as the core Bayesian inference method and considers a variety of approaches for selecting sparse network structures, showing that certain classes of randomly selected substructures can perform as well as substructures derived from state-of-the-art iterative pruning methods while drastically reducing model training times.

Quantifying Aleatoric and Epistemic Uncertainty in Machine Learning: Are Conditional Entropy and Mutual Information Appropriate Measures?

This short note is a critical discussion of the quantification of aleatoric and epistemic uncertainty in terms of conditional entropy and mutual information, respectively, a quantification that has recently been proposed in the literature.

Sensitivity analysis for predictive uncertainty

This work derives a novel sensitivity analysis of input variables for predictive epistemic and aleatoric uncertainty using Bayesian neural networks with latent variables as a model class and increases the interpretability of complex black-box probabilistic models.

Predictive Uncertainty Estimation via Prior Networks

This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs) which explicitly models distributional uncertainty by parameterizing a prior distribution over predictive distributions and evaluates PNs on the tasks of identifying out-of-distribution samples and detecting misclassification on the MNIST dataset, where they are found to outperform previous methods.

URSABench: Comprehensive Benchmarking of Approximate Bayesian Inference Methods for Deep Neural Networks

Initial work is described on the development ofURSABench, an open-source suite of bench-marking tools for comprehensive assessment of approximate Bayesian inference methods with a focus on deep learning-based classification tasks.

BaCOUn: Bayesian Classifers with Out-of-Distribution Uncertainty

This work proposes a Bayesian framework for obtaining reliable uncertainty estimates from deep classifiers. A plug-in "generator" augments the training data with an additional class of points lying on the boundary of the training data, and Bayesian inference is then performed on top of features trained to distinguish these "out-of-distribution" points.

Generalized Bayesian Posterior Expectation Distillation for Deep Neural Networks

This paper investigates several aspects of this framework including the impact of uncertainty and the choice of student model architecture, and evaluates down-stream tasks leveraging entropy distillation including uncertainty ranking and out-of-distribution detection.

Uncertainty in Gradient Boosting via Ensembles

Experiments on a range of regression and classification datasets show that ensembles of gradient boosting models yield improved predictive performance, and measures of uncertainty successfully enable detection of out-of-domain inputs.
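
As a rough sketch of how such ensemble uncertainty estimates are commonly decomposed (via the law of total variance; the shapes and fake per-model predictions below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: each of M models predicts a Gaussian (mean, variance)
# for every one of N inputs. Here the predictions are random placeholders.
M, N = 5, 4
means = rng.normal(loc=0.0, scale=1.0, size=(M, N))   # per-model predictive means
variances = rng.uniform(0.1, 0.5, size=(M, N))        # per-model predictive variances

# Law-of-total-variance decomposition:
#   aleatoric = E_m[var_m(y|x)]     (average noise the models predict)
#   epistemic = Var_m[mean_m(y|x)]  (disagreement between the models)
aleatoric = variances.mean(axis=0)
epistemic = means.var(axis=0)
total = aleatoric + epistemic
```

Inputs with large `epistemic` relative to `aleatoric` are the natural candidates for out-of-domain detection, since model disagreement, not predicted noise, is what grows off the training distribution.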

Kalman meets Bellman: Improving Policy Evaluation through Value Tracking

An optimization method called Kalman Optimization for Value Approximation (KOVA) is proposed and analyzed. It minimizes a regularized objective function that accounts for both parameter and noisy-return uncertainties, and can be incorporated as a policy-evaluation component in policy optimization algorithms.

Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks

An algorithm for model-based reinforcement learning that combines Bayesian neural networks (BNNs) with random roll-outs and stochastic optimization for policy learning and achieves promising results in a real-world scenario for controlling a gas turbine.

Model based Bayesian Exploration

This paper explicitly represents uncertainty about the parameters of the model and builds probability distributions over Q-values, which are used to compute a myopic approximation to the value of information for each action and hence to select the action that best balances exploration and exploitation.

Efficient Uncertainty Propagation for Reinforcement Learning with Limited Data

This paper presents a method to incorporate the estimator's uncertainties and propagate them to the conclusions; although only approximate, it considerably increases the robustness of the derived policies compared to the standard approach.

Weight Uncertainty in Neural Networks

This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
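
A minimal sketch of the Bayes by Backprop idea on a one-weight regression problem, assuming a unit-variance Gaussian likelihood and a standard normal prior (the toy data, learning rate, and softplus parameterization here are illustrative choices, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = 2x + noise. We learn a *distribution* over the single
# weight w rather than a point estimate.
x = rng.normal(size=200)
y = 2.0 * x + 0.1 * rng.normal(size=200)

# Variational posterior q(w) = N(mu, sigma^2), prior p(w) = N(0, 1);
# sigma = softplus(rho) keeps the scale positive.
mu, rho = 0.0, -1.0
lr = 1e-3

for _ in range(500):
    sigma = np.log1p(np.exp(rho))
    eps = rng.normal()
    w = mu + sigma * eps                      # reparameterization trick

    # Gradient of the Gaussian negative log-likelihood (unit noise
    # variance) with respect to the sampled weight.
    dnll_dw = -((y - w * x) * x).sum()

    # Gradients of KL(q || p) between the two Gaussians.
    dkl_dmu = mu
    dkl_dsigma = sigma - 1.0 / sigma

    dsigma_drho = 1.0 / (1.0 + np.exp(-rho))  # softplus'(rho) = sigmoid(rho)
    mu -= lr * (dnll_dw + dkl_dmu)
    rho -= lr * (dnll_dw * eps + dkl_dsigma) * dsigma_drho
```

After training, `mu` approaches the true weight and `sigma` shrinks toward the posterior scale implied by the data, illustrating how the learnt weight uncertainty contracts as evidence accumulates.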

Improving PILCO with Bayesian Neural Network Dynamics Models

PILCO’s framework is extended to use Bayesian deep dynamics models with approximate variational inference, allowing PILCO to scale linearly with the number of trials and observation space dimensionality; it is also shown that moment matching is a crucial simplifying assumption made by the model.

Risk-Sensitive Reinforcement Learning

A risk-sensitive Q-learning algorithm is derived, as is necessary for modeling human behavior when transition probabilities are unknown. Applied to quantify human behavior in a sequential investment task, it provides a significantly better fit to the behavioral data and leads to an interpretation of the subjects' responses that is consistent with prospect theory.

Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning

A robust method for learning multimodal transitions with function approximation, a key prerequisite for model-based RL in stochastic domains, is presented.

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

This work presents a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP), which works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients.

Bayesian learning for data-efficient control

This thesis uses probabilistic Bayesian modelling to learn systems from scratch, similar to the PILCO algorithm, and takes a step towards data efficient learning of high-dimensional control using Bayesian neural networks (BNN).

What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?

A Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty is presented, which makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.