Safe Robot Navigation Via Multi-Modal Anomaly Detection

@article{Wellhausen2020SafeRN,
  title={Safe Robot Navigation Via Multi-Modal Anomaly Detection},
  author={Lorenz Wellhausen and Ren{\'e} Ranftl and Marco Hutter},
  journal={IEEE Robotics and Automation Letters},
  year={2020},
  volume={5},
  pages={1326--1333}
}
Navigation in natural outdoor environments requires a robust and reliable traversability classification method to handle the plethora of situations a robot can encounter. Binary classification algorithms perform well in their native domain but tend to provide overconfident predictions when presented with out-of-distribution samples, which can lead to catastrophic failure when navigating unknown environments. We propose to overcome this issue by using anomaly detection on multi-modal images for… 
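The abstract's core idea — flag inputs that fall outside the training distribution instead of forcing a binary traversable/untraversable decision — can be sketched with a minimal reconstruction-based detector. This is a linear PCA stand-in for a learned model on multi-modal features; all names and numbers are illustrative, not from the paper:

```python
import numpy as np

def fit_detector(train_feats, n_components=4):
    """Fit a linear reconstruction model (PCA) on in-distribution
    multi-modal features, e.g. stacked RGB + depth patch statistics."""
    mean = train_feats.mean(axis=0)
    centered = train_feats - mean
    # Principal axes of the training data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_score(x, mean, basis):
    """Reconstruction error: large when x is out-of-distribution."""
    centered = x - mean
    recon = (centered @ basis.T) @ basis
    return float(np.linalg.norm(centered - recon))

# Synthetic in-distribution data clustered near a low-dimensional subspace.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 4))
proj = rng.normal(size=(4, 16))
train = latent @ proj + 0.01 * rng.normal(size=(200, 16))

mean, basis = fit_detector(train, n_components=4)
in_dist = latent[0] @ proj
out_dist = in_dist + 5.0 * rng.normal(size=16)  # anomalous observation
assert anomaly_score(in_dist, mean, basis) < anomaly_score(out_dist, mean, basis)
```

An out-of-distribution sample scores markedly higher than a familiar one, which is the signal a navigation stack can act on conservatively.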

Citations

An Anomaly Detection Approach to Monitor the Structured-Based Navigation in Agricultural Robotics

This paper presents a data-driven monitoring approach for the task of structure-based navigation in agriculture, applying semi-supervised anomaly detection to learn a model of normal scene geometry that characterizes a domain of reliable execution of the considered task.

Effective Free-Driving Region Detection for Mobile Robots by Uncertainty Estimation Using RGB-D Data

The Automatic Generating Segmentation Label (AGSL) framework is proposed, which is an efficient system automatically generating segmentation labels for drivable areas and road anomalies by finding dissimilarities between the input and resynthesized image and localizing obstacles in the disparity map.

An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots

This work considers the problem of building visual anomaly detection systems for mobile robots, and shows that exposing even a small number of anomalous frames yields significant performance improvements in Real-NVP anomaly detection models.

Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments

A supervised variational autoencoder (SVAE) is proposed for failure identification in unstructured and uncertain environments, leveraging the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks.

Self-Supervised Traversability Prediction by Learning to Reconstruct Safe Terrain

This work develops a tool that projects the vehicle trajectory into the front camera image, and develops an autoencoder trained on masked vehicle trajectory regions that identifies low- and high-risk terrains based on the reconstruction error.
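Turning a reconstruction error into the low-/high-risk terrain label described above typically requires calibrating a threshold on errors from known-safe data. A minimal sketch, in which the percentile and all names are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def calibrate_threshold(safe_errors, percentile=95.0):
    """Threshold above which a reconstruction error flags high-risk terrain."""
    return float(np.percentile(safe_errors, percentile))

def risk_label(error, threshold):
    """Binary risk decision from a single reconstruction error."""
    return "high-risk" if error > threshold else "low-risk"

# Simulated reconstruction errors measured on known-safe terrain patches.
rng = np.random.default_rng(1)
safe_errors = rng.gamma(shape=2.0, scale=0.1, size=1000)
thr = calibrate_threshold(safe_errors)

assert risk_label(10.0, thr) == "high-risk"
assert risk_label(0.0, thr) == "low-risk"
```

Calibrating on safe-only data keeps the method self-supervised: no hand-labeled hazardous examples are needed.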

WayFAST: Navigation With Predictive Traversability in the Field

We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots that require good traction to navigate. Our algorithm, termed WayFAST (Waypoint Free…

WayFAST: Traversability Predictive Navigation for Field Robots

This work presents a self-supervised approach for learning to predict traversable paths for wheeled mobile robots that require good traction to navigate, and shows that the training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.

Using Visual Anomaly Detection for Task Execution Monitoring

This work learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion, using a probabilistic U-Net architecture to predict optical flow and a robot’s kinematics and 3D model to model camera and body motion.

ScaTE: A Scalable Framework for Self-Supervised Traversability Estimation in Unstructured Environments

This work introduces a scalable framework for learning self-supervised traversability, which learns traversability directly from vehicle-terrain interaction without any human supervision, and demonstrates that the estimated traversability enables predictive navigation with distinct maneuvers based on the driving characteristics of the vehicles.

TerraPN: Unstructured Terrain Navigation Using Online Self-Supervised Learning

We present TerraPN, a novel method to learn the surface characteristics (texture, bumpiness, deformability, etc.) of complex outdoor terrains for autonomous robot navigation. Our method predicts…

References

Showing 1–10 of 54 references

Traversability classification using unsupervised on-line visual learning for outdoor robot navigation

A novel on-line learning method that makes accurate predictions of the traversability properties of complex terrain, based on autonomous training data collection that exploits the robot's experience in navigating its environment to train classifiers without human intervention.

Safe Visual Navigation via Deep Learning and Novelty Detection

This work uses an autoencoder to recognize when a query is novel and to revert to a safe prior behavior, allowing an autonomous deep learning system to be deployed in arbitrary environments without concern for whether it has received the appropriate training.
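The fallback logic described above — revert to a safe prior behavior when the input is novel — amounts to a simple gate on the novelty score. All names and the threshold value here are illustrative assumptions, not from the paper:

```python
NOVELTY_THRESHOLD = 0.5  # illustrative; in practice calibrated on validation data

def choose_action(novelty_score, learned_action, safe_action="stop"):
    """Fall back to a conservative prior behavior on novel inputs."""
    if novelty_score > NOVELTY_THRESHOLD:
        return safe_action  # out-of-distribution: do not trust the learned policy
    return learned_action

# Familiar scenes use the learned behavior; novel ones trigger the safe prior.
assert choose_action(0.1, "follow_path") == "follow_path"
assert choose_action(0.9, "follow_path") == "stop"
```

The gate makes the worst case of a misjudged scene a conservative stop rather than a confident wrong prediction.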

AdapNet: Adaptive semantic segmentation in adverse environmental conditions

This paper proposes a novel semantic segmentation architecture and the convoluted mixture of deep experts (CMoDE) fusion technique that enables a multi-stream deep neural network to learn features from complementary modalities and spectra, each of which are specialized in a subset of the input space.

Image Classification for Ground Traversability Estimation in Robotics

This work builds a convolutional neural network that predicts whether the robot will be able to traverse a given terrain patch from bottom to top, and quantitatively validates the approach on real elevation datasets.

Where Should I Walk? Predicting Terrain Properties From Images Via Self-Supervised Learning

This letter proposes a method to collect data from robot-terrain interaction and associate it with images, and shows that the collected data can be used to train a convolutional network for terrain property prediction as well as for weakly supervised semantic segmentation.

Find your own way: Weakly-supervised segmentation of path proposals for urban autonomy

A weakly-supervised approach to segmenting proposed drivable paths in images, with the goal of autonomous driving in complex urban environments, is presented, and it is illustrated how the method can generalize to multiple path proposals at intersections.

Autonomous Terrain Classification With Co- and Self-Training Approach

The proposed approach was validated with a four-wheeled test rover in Mars-analogous terrain, including bedrock, soil, and sand, and successfully estimated terrain types with 82% accuracy with only three labeled images.

Scene understanding for a high-mobility walking robot

This paper describes the development and experimental evaluation of a terrain classification and ground surface height estimation system to support autonomous navigation for a high-mobility walking robot and provides experimental evaluation on an extensive, manually-labeled dataset collected from geographically diverse sites over a 28-month period.

Fishyscapes: A Benchmark for Safe Semantic Segmentation in Autonomous Driving

Fishyscapes is presented, the first public benchmark for uncertainty estimation in the real-world task of semantic segmentation for urban driving and shows that anomaly detection is far from solved even for ordinary situations, while the benchmark allows measuring advancements beyond the state of the art.

GONet: A Semi-Supervised Deep Learning Approach For Traversability Estimation

Through extensive experiments and several demonstrations, it is shown that the proposed traversability estimation approaches are robust and can generalize to unseen scenarios and are memory efficient and fast, allowing for real-time operation on a mobile robot with single or stereo fisheye cameras.
...