Safe Visual Navigation via Deep Learning and Novelty Detection

@inproceedings{Richter2017SafeVN,
  title={Safe Visual Navigation via Deep Learning and Novelty Detection},
  author={Charles Richter and Nicholas Roy},
  booktitle={Robotics: Science and Systems},
  year={2017}
}
Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when… 
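The abstract is truncated above, but the general idea behind reconstruction-based novelty detection — flag inputs a model trained only on in-distribution data cannot reconstruct well — can be sketched. This is a minimal illustration, not the paper's implementation: it uses a linear (PCA) "autoencoder" as a stand-in, and all function names and the threshold are hypothetical.

```python
import numpy as np

def fit_linear_autoencoder(train, n_components=2):
    """Fit a linear 'autoencoder' (PCA) to training data.

    Returns the data mean and the top principal directions, which together
    define the low-dimensional subspace used for reconstruction.
    """
    mean = train.mean(axis=0)
    # SVD of the centered data yields the principal directions in vt.
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(x, mean, components):
    """Squared error between x and its projection onto the training subspace."""
    coded = (x - mean) @ components.T       # encode
    recon = coded @ components + mean       # decode
    return float(np.sum((x - recon) ** 2))

def is_novel(x, mean, components, threshold):
    """Inputs the model cannot reconstruct well are flagged as novel."""
    return reconstruction_error(x, mean, components) > threshold

# Training data lies (almost) on a 2-D plane inside 3-D space.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 3)) * np.array([1.0, 1.0, 0.01])
mean, comps = fit_linear_autoencoder(train, n_components=2)

in_dist = np.array([0.5, -0.3, 0.0])   # near the training manifold
novel = np.array([0.0, 0.0, 5.0])      # far off the training manifold
```

A deep autoencoder replaces the linear encode/decode with learned nonlinear maps, but the decision rule — threshold the reconstruction error — is the same.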
Self-Supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation
TLDR
A generalized computation graph is proposed that subsumes value-based model-free methods and model-based methods, and is instantiated as a navigation model that learns from raw images, is sample-efficient, and outperforms single-step and multi-step double Q-learning.
Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty
TLDR
This paper proposes a method for uncertainty-aware robotic perception that combines generative modeling and model uncertainty, and estimates an uncertainty measure about the model’s prediction, taking into account an explicit generative model of the observation distribution to handle out-of-distribution inputs.
Safe Reinforcement Learning With Model Uncertainty Estimates
TLDR
MC-Dropout and Bootstrapping are used to give computationally tractable and parallelizable uncertainty estimates and are embedded in a Safe Reinforcement Learning framework to form uncertainty-aware navigation around pedestrians, resulting in a collision avoidance policy that knows what it does not know and cautiously avoids pedestrians that exhibit unseen behavior.
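The MC-Dropout idea referenced above — keep dropout active at test time and treat the spread of many stochastic forward passes as an uncertainty estimate — can be sketched with a tiny numpy network. This is an illustrative toy, not the paper's code: the weights below are arbitrary stand-ins for a trained network, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny fixed one-hidden-layer network; in practice these weights
# would come from training the network with dropout enabled.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_rate=0.5, stochastic=True):
    """One forward pass, optionally with dropout on the hidden layer."""
    h = np.maximum(0.0, x @ W1)                  # ReLU hidden layer
    if stochastic:
        mask = rng.random(h.shape) > drop_rate   # randomly drop hidden units
        h = h * mask / (1.0 - drop_rate)         # inverted-dropout rescaling
    return h @ W2

def mc_dropout_predict(x, n_samples=100):
    """Average many stochastic passes; their spread estimates uncertainty."""
    samples = np.stack([forward(x, stochastic=True) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([0.3, -1.2, 0.7, 0.1])
mean, std = mc_dropout_predict(x)
```

Bootstrapping differs only in where the randomness comes from: instead of random masks at inference, each ensemble head is trained on a resampled dataset, and disagreement across heads plays the role of `std` here.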
Task-Aware Novelty Detection for Visual-based Deep Learning in Autonomous Systems
TLDR
This paper proposes a learning framework that leverages information learned by the prediction model in a task-aware manner to detect novel scenarios and finds that the method is able to systematically detect novel inputs and quantify the deviation from the target prediction through this task-aware approach.
Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation
TLDR
This work shows that a simulated robotic car and a real-world RC car can gather data and train fully autonomously without any human-provided labels beyond those needed to train the detectors, and then at test-time be able to accomplish a variety of different tasks.
Bayesian Optimization Meets Laplace Approximation for Robotic Introspection
TLDR
This paper introduces a scalable Laplace Approximation technique to make Deep Neural Networks (DNNs) more introspective, i.e. to enable them to provide accurate assessments of their failure probability for unseen test data.
Novelty Detection via Network Saliency in Visual-Based Deep Learning
TLDR
This paper proposes a multi-step framework for the detection of novel scenarios in vision-based autonomous systems by leveraging information learned by the trained prediction model and a new image similarity metric.
Learning to be Safe: Deep RL with a Safety Critic
TLDR
This work proposes to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors when learning new, modified tasks, and empirically studies this form of safety-constrained transfer learning in three challenging domains.
BADGR: An Autonomous Self-Supervised Learning-Based Navigation System
TLDR
The reinforcement learning approach, which the authors call BADGR, is an end-to-end learning-based mobile robot navigation system that can be trained with autonomously-labeled off-policy data gathered in real-world environments, without any simulation or human supervision.
Learning for Robot Decision Making under Distribution Shift: A Survey
TLDR
A taxonomy of existing literature to aid or improve decision making under distribution shift for robotic systems is presented and a survey of existing approaches in the area based on this taxonomy is presented.
...

References

Showing 1–10 of 32 references
Autonomous navigation in unknown environments using machine learning
TLDR
A learned model of collision probability is extended with a model of future measurement utility, efficiently enabling information-gathering behaviors that extend the robot's visibility far into unknown regions of the environment; lengthening the perceptual horizon in this way results in faster navigation even under conventional safety constraints.
(CAD)$^2$RL: Real Single-Image Flight without a Single Real Image
TLDR
This paper proposes a learning method, called CAD$^2$RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models, and shows that it can train a policy that generalizes to the real world without requiring the simulator to be particularly realistic or high-fidelity.
Introspective classification for robot perception
TLDR
It is proposed that a key ingredient for introspection is a framework's potential to increase its uncertainty with the distance between a test datum and its training data, and it is shown that better introspection leads to improved decision making in tasks such as autonomous driving or semantic map generation.
Bayesian Learning for Safe High-Speed Navigation in Unknown Environments
TLDR
By using a Bayesian non-parametric learning algorithm that encodes formal safety constraints as a prior over collision probabilities, a planner for high-speed navigation in unknown environments seamlessly reverts to safe behavior when it encounters a novel environment for which it has no relevant training data.
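The "revert to safe behavior with no relevant data" property described above is exactly what a conjugate prior over collision probability gives you. As a minimal sketch (not the paper's non-parametric model; the pseudo-counts and function name are illustrative), a Beta-Bernoulli posterior falls back to a cautious prior mean when no relevant experience exists:

```python
from fractions import Fraction

def collision_posterior_mean(prior_a, prior_b, collisions, safe_runs):
    """Posterior mean collision probability under a Beta(prior_a, prior_b) prior.

    prior_a / prior_b act as pseudo-counts of collisions / safe runs, so
    with no relevant data the estimate reverts to the cautious prior mean.
    """
    return Fraction(prior_a + collisions,
                    prior_a + prior_b + collisions + safe_runs)

# A cautious prior: one pseudo-collision vs one pseudo-safe run.
no_data = collision_posterior_mean(1, 1, 0, 0)      # prior mean 1/2: cautious
familiar = collision_posterior_mean(1, 1, 0, 98)    # 1/100 after 98 safe runs
```

A planner thresholding on this estimate would drive slowly in novel environments (high estimated collision probability) and speed up only where the data supports it.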
Learning long-range vision for autonomous off-road driving
TLDR
This work presents a self-supervised learning process for long-range vision that is able to accurately classify complex terrain at distances up to the horizon, thus allowing superior strategic planning.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
TLDR
This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
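The core ensemble idea — train several models independently and read their disagreement as predictive uncertainty — can be shown with linear regressors in a few lines. This is a stand-in sketch, not the deep-ensembles recipe (which trains randomly initialized networks rather than bootstrap-resampled linear fits); all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_member(x, y):
    """Fit one ensemble member: a least-squares line on a bootstrap resample."""
    idx = rng.integers(0, len(x), size=len(x))
    A = np.column_stack([x[idx], np.ones(len(x))])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef  # (slope, intercept)

def ensemble_predict(members, x_query):
    """Mean and std of member predictions; disagreement ~ uncertainty."""
    preds = np.array([m[0] * x_query + m[1] for m in members])
    return preds.mean(), preds.std()

# Noisy line y = 2x + 1, observed only on [0, 1].
x = rng.uniform(0.0, 1.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=50)
members = [fit_member(x, y) for _ in range(10)]

mean_in, std_in = ensemble_predict(members, 0.5)     # inside training range
mean_out, std_out = ensemble_predict(members, 20.0)  # far outside it
```

The members agree where data exists and diverge under extrapolation, which is the behavior a safe navigation system exploits: large `std` signals an input unlike the training data.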
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving
TLDR
This paper proposes to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving and argues that the direct perception representation provides the right level of abstraction.
Anytime online novelty and change detection for mobile robots
TLDR
An anytime novelty detection algorithm is presented that deals with the noisy, redundant, high-dimensional feature spaces common in robotics by utilizing prior class information within the training set, along with an online scene segmentation algorithm that improves accuracy across diverse environments.
End to End Learning for Self-Driving Cars
TLDR
A convolutional neural network is trained to map raw pixels from a single front-facing camera directly to steering commands and it is argued that this will eventually lead to better performance and smaller systems.
...