Corpus ID: 237304290

Efficient Out-of-Distribution Detection Using Latent Space of β-VAE for Cyber-Physical Systems

Shreyas Ramakrishna, Zahra RahimiNasab, Gabor Karsai, Arvind Easwaran and Abhishek Dubey
Deep Neural Networks are actively being used in the design of autonomous Cyber-Physical Systems (CPSs). The advantage of these models is their ability to handle high-dimensional state spaces and learn compact surrogate representations of the operational state space. However, the sampled observations used to train the model may never cover the entire state space of the physical environment, and as a result the system will likely operate in conditions that do not belong…


Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems
  • Feiyang Cai, X. Koutsoukos
  • Computer Science, Engineering
  • 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS)
  • 2020
The proposed approach leverages inductive conformal prediction and anomaly detection to develop a method with a well-calibrated false-alarm rate; it uses variational autoencoders and deep support vector data description to learn models that can efficiently compute the nonconformity of new inputs relative to the training set, enabling real-time detection of high-dimensional out-of-distribution inputs.
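The inductive conformal step described above can be illustrated with a minimal sketch. This is not the paper's VAE/SVDD-based implementation; the nonconformity scores below are placeholder numbers standing in for whatever score a learned model would produce:

```python
import numpy as np

def conformal_p_value(calibration_scores, test_score):
    """Inductive conformal p-value: the fraction of calibration
    nonconformity scores at least as large as the test score."""
    calibration_scores = np.asarray(calibration_scores)
    return (np.sum(calibration_scores >= test_score) + 1) / (len(calibration_scores) + 1)

# Calibration scores computed on held-out in-distribution data.
cal = [0.1, 0.2, 0.15, 0.3, 0.25]
p_in = conformal_p_value(cal, 0.18)   # typical score -> large p-value
p_out = conformal_p_value(cal, 0.9)   # unusually large score -> small p-value
# Flag an input as out-of-distribution when its p-value falls
# below the chosen false-alarm rate, e.g. 0.05.
```

The `+ 1` terms give the standard smoothed p-value, which is what yields the calibrated false-alarm guarantee.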
Learning Confidence for Out-of-Distribution Detection in Neural Networks
This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and it addresses the problem of calibrating out-of-distribution detectors.
Improving Reconstruction Autoencoder Out-of-distribution Detection with Mahalanobis Distance
Reconstruction-based approaches fail to capture anomalies that lie far from known inlier samples in latent space but near the latent manifold defined by the parameters of the model; the Mahalanobis distance in latent space is therefore proposed to better capture these out-of-distribution samples.
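The latent-space Mahalanobis score can be sketched as follows. Here random vectors stand in for encoder outputs (the paper would use the latent codes of a trained autoencoder); everything else is the standard distance computation:

```python
import numpy as np

def fit_gaussian(latents):
    """Fit mean and inverse covariance of training latent codes."""
    mu = latents.mean(axis=0)
    cov = np.cov(latents, rowvar=False)
    return mu, np.linalg.inv(cov)

def mahalanobis(z, mu, cov_inv):
    """Mahalanobis distance of latent code z from the training distribution."""
    d = z - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
train_z = rng.normal(0.0, 1.0, size=(1000, 4))    # stand-in for encoder outputs
mu, cov_inv = fit_gaussian(train_z)
d_in = mahalanobis(np.zeros(4), mu, cov_inv)       # near the latent mean
d_out = mahalanobis(np.full(4, 6.0), mu, cov_inv)  # far from training latents
# d_out >> d_in: thresholding the distance flags OOD samples that a
# reconstruction error alone might miss.
```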
Out-of-Distribution Detection in Multi-Label Datasets using Latent Space of β-VAE
Results show that the latent space of β-VAE is sensitive to changes in the values of the generative factors, and that this sensitivity enables quick, computationally inexpensive detection on the nuScenes dataset.
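One common way to score this kind of latent-space sensitivity, shown here as an illustrative sketch rather than the paper's exact detector, is the per-dimension KL divergence of the β-VAE encoder posterior N(μ, σ²) from the standard-normal prior; a latent dimension tied to a shifted generative factor moves away from the prior:

```python
import numpy as np

def kl_per_dim(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, 1) ) for each latent dimension."""
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

# In-distribution input: posterior parameters close to the prior.
kl_id = kl_per_dim(np.array([0.1, -0.05]), np.array([-0.1, 0.05]))
# Shifted generative factor: the second latent dimension drifts from the prior,
# while the unrelated first dimension is unchanged.
kl_ood = kl_per_dim(np.array([0.1, 3.0]), np.array([-0.1, 0.05]))
```

Because the score is a closed-form function of the encoder outputs, it only costs one forward pass per input, which is what makes this style of detector cheap at runtime.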
DeepXplore: Automated Whitebox Testing of Deep Learning Systems
DeepXplore efficiently finds thousands of incorrect corner case behaviors in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data.
q-Space Novelty Detection with Variational Autoencoders
This work applies novelty detection methods, based on variational autoencoders (VAEs) trained on normal data, to magnetic resonance imaging, namely the detection of diffusion-space (q-space) abnormalities in diffusion MRI scans of multiple sclerosis patients, and shows that many of them outperform the state of the art.
Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers
An uncertainty estimation algorithm is developed that selectively estimates the uncertainty of highly confident points using earlier snapshots of the trained model, before their estimates are jittered (and well before they are ready for actual classification).
Attribution-Based Confidence Metric For Deep Neural Networks
A novel confidence metric, namely, attribution-based confidence (ABC) for deep neural networks (DNNs), which characterizes whether the output of a DNN on an input can be trusted and demonstrates the effectiveness of the ABC metric to make DNNs more trustworthy and resilient.
DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars
DeepTest is a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes; it systematically explores different parts of the DNN logic by generating test inputs that maximize the number of activated neurons.
Safe Visual Navigation via Deep Learning and Novelty Detection
This work uses an autoencoder to recognize when a query is novel and to revert to a safe prior behavior, allowing an autonomous deep learning system to be deployed in arbitrary environments without concern for whether it has received the appropriate training.
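The reconstruction-error novelty test behind this approach can be sketched with a linear "autoencoder" (a PCA projection via SVD) standing in for the paper's learned network; the data, subspace, and threshold rule here are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Training data lying near a 2-D subspace of a 5-D space.
basis = rng.normal(size=(2, 5))
train = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 5))

# "Encoder/decoder": project onto the top-2 principal components.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    z = (x - mean) @ components.T   # encode into the low-dimensional code
    x_hat = z @ components + mean   # decode back to input space
    return float(np.linalg.norm(x - x_hat))

# Calibrate a threshold on the training data itself.
threshold = max(reconstruction_error(x) for x in train)

familiar = rng.normal(size=2) @ basis   # lies near the training subspace
novel = rng.normal(size=5) * 5.0        # arbitrary point far off the subspace
# A query whose error exceeds the threshold is treated as novel,
# triggering the safe fallback behavior instead of the learned policy.
```

A nonlinear autoencoder replaces the projection with learned encode/decode networks, but the decision rule, thresholding the reconstruction error, is the same.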