Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control

@inproceedings{Michelmore2020UncertaintyQW,
  title={Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control},
  author={Rhiannon Michelmore and Matthew Wicker and Luca Laurenti and Luca Cardelli and Yarin Gal and Marta Z. Kwiatkowska},
  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2020},
  pages={7344--7350}
}
Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun to be deployed in the real world. Prior to their widespread adoption, safety guarantees are needed on the controller behaviour that properly account for the uncertainty within the model as well as sensor noise. Bayesian neural networks, which assume a prior over the weights, have been shown capable of producing such uncertainty measures, but properties…
Limits of Probabilistic Safety Guarantees when Considering Human Uncertainty
TLDR
This paper shows that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for δ ≤ 10⁻⁸; the result is unreliable confidence bounds, which can have dangerous implications if deployed on safety-critical systems.
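The data requirement alluded to above can be illustrated with the classical "rule of three" style argument: to bound a failure probability below δ with high confidence from failure-free trials, one needs on the order of 1/δ trials. The sketch below uses the exact binomial bound (1 − δ)ⁿ ≤ 1 − confidence; the function name and the 95% default are illustrative assumptions, not taken from the cited paper.

```python
import math

def trials_needed(delta, confidence=0.95):
    """Number of consecutive failure-free trials needed so that the
    exact binomial upper confidence bound on the failure rate drops
    below `delta`: solve (1 - delta)**n <= 1 - confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - delta))

# Certifying a failure rate below 1e-8 at 95% confidence requires
# roughly 3e8 failure-free trials -- the "infeasible amounts of data".
print(trials_needed(1e-8))
```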
Infinite Time Horizon Safety of Bayesian Neural Networks
TLDR
This work trains a separate deterministic neural network that serves as an infinite time horizon safety certificate, and shows that the certificate network guarantees the safety of the system over a subset of the BNN weight posterior’s support.
Trajectory planning under environmental uncertainty with finite-sample safety guarantees
TLDR
This work tackles the problem of trajectory planning in an environment composed of a set of obstacles with uncertain, time-varying locations, and provides provable guarantees on satisfaction of the chance constraints corresponding to the nominal yet unknown moments.
Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications
TLDR
This work introduces a general formulation of probabilistic specifications for neural networks, and shows that an optimal choice of functional multipliers leads to exact verification (i.e., sound and complete verification), and for specific forms of multipliers, develops tractable practical verification algorithms.
Accurate and Reliable Forecasting using Stochastic Differential Equations
TLDR
SDE-HNN is a new heteroscedastic neural network equipped with stochastic differential equations (SDE) to characterize the interaction between the predictive mean and variance of HNNs for accurate and reliable regression, and significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
Driving Maneuvers Prediction Based Autonomous Driving Control by Deep Monte Carlo Tree Search
TLDR
A deep Monte Carlo Tree Search control method for vision-based autonomous driving that attains high control stability by avoiding sharp turns and driving deviations, and achieves significant improvements in training efficiency, steering-control stability, and driving-trajectory stability compared to existing methods.
A Bayesian Deep Neural Network for Safe Visual Servoing in Human–Robot Interaction
TLDR
This study describes a system that can avoid collision with human hands while the robot is executing an image-based visual servoing (IBVS) task; it uses Monte Carlo dropout to transform a deep neural network into a Bayesian DNN and learns the repulsive position for hand avoidance.
Uncertainty Evaluation of Object Detection Algorithms for Autonomous Vehicles
  • Liang Peng, Hong Wang, Jun Li
  • Computer Science
  • Automotive Innovation
  • 2021
TLDR
A framework based on the You Only Look Once (YOLO) algorithm and the mean Average Precision (mAP) metric evaluates the camera's object detection performance under SOTIF-related scenarios, demonstrating the feasibility and effectiveness of the proposed uncertainty acquisition approach for object detection algorithms.
Reliability Analysis of Artificial Intelligence Systems Using Recurrent Events Data from Autonomous Vehicles
TLDR
This paper uses recurrent disengagement events as a representation of the reliability of the AI system in AVs, proposes a statistical framework for modeling and analyzing recurrent events data from AV driving tests, and develops inference procedures for selecting the best models, quantifying uncertainty, and testing heterogeneity in the event process.
Gradient-Free Adversarial Attacks for Bayesian Neural Networks
TLDR
It is shown that for various approximate Bayesian inference methods, the use of gradient-free algorithms can greatly improve the rate of finding adversarial examples compared to state-of-the-art gradient-based methods.

References

Showing 1–10 of 44 references
Robustness Guarantees for Bayesian Inference with Gaussian Processes
TLDR
A robustness measure for Bayesian inference against input perturbations is defined, given by the probability that, for a test point and a compact set in the input space containing the test point, the prediction of the learning model will remain δ-close for all the points in the set, for δ > 0.
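In symbols, the measure described above could be written as follows; the notation here is assumed from the summary rather than taken from the paper's exact definition, with the probability taken over the Bayesian posterior:

```latex
% Probabilistic robustness of a Bayesian model f at a test point x*,
% over a compact set T in the input space containing x*:
P_{\mathrm{robust}}(T, \delta)
  = \mathbb{P}\left( \forall x \in T :\;
      \lVert f(x) - f(x^{*}) \rVert \le \delta \right),
  \qquad \delta > 0.
```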
Ensemble Bayesian Decision Making with Redundant Deep Perceptual Control Policies
TLDR
This work presents a novel ensemble of Bayesian Neural Networks for control of safety-critical systems and combines the knowledge of prediction uncertainty obtained from BNNs and ensemble control for a redundant control methodology applied to an agile autonomous driving task.
Variational End-to-End Navigation and Localization
TLDR
This paper defines a novel variational network capable of learning from raw camera data of the environment as well as higher level roadmaps to predict a full probability distribution over the possible control commands, and formulates how this model can be used to localize the robot according to correspondences between the map and the observed visual road topology.
Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning
TLDR
This work investigates three under-explored themes for AV research — safety, interpretability, and compliance — highlights the need for concrete evaluation metrics, proposes example problems, and suggests possible solutions.
Uncertainty-Aware Driver Trajectory Prediction at Urban Intersections
TLDR
A variational neural network approach is proposed that predicts future driver trajectory distributions for the vehicle from multiple sensors, reducing the prediction error of a physics-based model by 25% while successfully identifying uncertain cases with 82% accuracy.
Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection
TLDR
This work presents practical methods to capture uncertainties in a 3D vehicle detector for Lidar point clouds and shows that the epistemic uncertainty is related to the detection accuracy, whereas the aleatoric uncertainty is influenced by vehicle distance and occlusion.
Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car
TLDR
A method for determining which elements in the road image most influence PilotNet's steering decision is developed, and results show that PilotNet indeed learns to recognize relevant objects on the road.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
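The test-time procedure this interpretation licenses — Monte Carlo dropout — can be sketched in a few lines of NumPy: keep dropout active at prediction time and treat the spread of stochastic forward passes as an epistemic uncertainty estimate. The toy one-layer model, fixed random weights, and sample counts below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer network; in practice W would be learned weights.
W = rng.normal(size=(8, 1))

def mc_dropout_predict(x, n_samples=200, p_drop=0.5):
    """Keep dropout active at test time and average over stochastic
    forward passes; the spread of the samples estimates model
    (epistemic) uncertainty under the Bayesian interpretation."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(W.shape[0]) > p_drop   # random dropout mask
        h = x * mask / (1.0 - p_drop)            # inverted-dropout scaling
        preds.append((h @ W).item())
    preds = np.array(preds)
    return preds.mean(), preds.std()             # predictive mean, uncertainty

mean, std = mc_dropout_predict(np.ones(8))
```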
Uncertainty-Aware Reinforcement Learning for Collision Avoidance
TLDR
An uncertainty-aware model-based learning algorithm that estimates the probability of collision together with a statistical estimate of uncertainty is presented, and it is shown that the algorithm naturally chooses to proceed cautiously in unfamiliar environments, and increases the velocity of the robot in settings where it has high confidence.
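The cautious-velocity behaviour described above amounts to a simple rule: slow down when either the estimated collision probability or the uncertainty of that estimate is high. The specific pessimistic-estimate scaling below is an assumption for illustration, not the paper's algorithm.

```python
def cautious_velocity(v_max, p_collision, uncertainty, risk_tolerance=0.1):
    """Reduce commanded speed when the pessimistic collision estimate
    (probability plus its uncertainty) exceeds the risk tolerance."""
    pessimistic = min(1.0, p_collision + uncertainty)
    if pessimistic >= risk_tolerance:
        # Unfamiliar or risky: slow down in proportion to the estimate.
        return v_max * max(0.0, 1.0 - pessimistic)
    return v_max  # Confident and safe: full speed.

print(cautious_velocity(2.0, 0.01, 0.02))  # confident -> full speed
print(cautious_velocity(2.0, 0.01, 0.40))  # uncertain -> slower
```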
Concrete Dropout
TLDR
This work proposes a new dropout variant which gives improved performance and better calibrated uncertainties, and uses a continuous relaxation of dropout’s discrete masks to allow for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles.