Bayesian Autoencoders for Drift Detection in Industrial Environments

Bang Xiang Yong, Yasmin Fathy and Alexandra Brintrup. 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT.
Autoencoders are unsupervised models that have been used for detecting anomalies in multi-sensor environments. A typical use is to train a predictive model on data from sensors operating under normal conditions, and then to use the model to detect anomalies. Anomalies can arise either from real changes in the environment (real drift) or from faulty sensing devices (virtual drift); however, the use of autoencoders to distinguish between these two types of anomaly has not yet been considered. To this…
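The detection step described in the abstract can be sketched as follows: calibrate a threshold on reconstruction errors from normal-condition data, then flag readings whose error exceeds it. This is a minimal illustration, not the paper's implementation; the `reconstruction_error` stand-in and the `normal_value` parameter are assumptions replacing a trained autoencoder.

```python
import random
import statistics

random.seed(0)

# Hypothetical stand-in for a trained autoencoder: here the reconstruction
# error is simulated as the distance of a reading from the normal operating
# value. In practice this would be the autoencoder's reconstruction loss.
def reconstruction_error(reading, normal_value=10.0):
    return abs(reading - normal_value)

# Calibrate a threshold on data collected under normal operating conditions.
normal_readings = [random.gauss(10.0, 0.5) for _ in range(1000)]
errors = [reconstruction_error(r) for r in normal_readings]
threshold = statistics.mean(errors) + 3 * statistics.stdev(errors)

def is_anomaly(reading):
    return reconstruction_error(reading) > threshold
```

Note that this thresholding alone flags anomalies but does not separate real drift from virtual drift, which is the gap the paper addresses.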

Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection

Coalitional Bayesian Autoencoders - Towards explainable unsupervised deep learning

Heteroscedastic Calibration of Uncertainty Estimators in Deep Learning

This work proposes repurposing the heteroscedastic regression objective as a surrogate for calibration, enabling any existing uncertainty estimator to be inherently calibrated and eliminating the need for recalibration.
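The heteroscedastic regression objective referred to here is, in its standard form, the Gaussian negative log-likelihood with a predicted per-input variance; a minimal sketch (a simplified illustration, not the paper's full calibration procedure):

```python
import math

# Heteroscedastic Gaussian NLL: the model predicts both a mean mu and a
# variance sigma2, and the loss penalises both over- and under-confidence.
def heteroscedastic_nll(y_true, mu, sigma2):
    return 0.5 * (math.log(sigma2) + (y_true - mu) ** 2 / sigma2)

# For a fixed prediction error, the loss is minimised when sigma2 equals the
# squared error, which is what ties the predicted variance to calibration.
err2 = (1.0 - 0.0) ** 2
best = heteroscedastic_nll(1.0, 0.0, err2)
```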

Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System

This paper identifies the sources of uncertainty in machine learning, establishes the criteria for a machine learning system to function well under uncertainty in a cyber-physical manufacturing system (CPMS) scenario, and proposes a multi-agent system architecture that leverages probabilistic machine learning to meet those criteria.

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

This work develops a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
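In practice, the MC-dropout recipe from this work amounts to keeping dropout active at test time and treating repeated stochastic forward passes as samples from the predictive distribution. A toy sketch with a single linear layer (all weights and inputs here are illustrative assumptions):

```python
import random
import statistics

random.seed(0)

# Toy one-layer model with dropout left active at test time (MC dropout).
weights = [0.5, -0.3, 0.8]

def forward(x, p_drop=0.5):
    # Each weight is dropped with probability p_drop and the survivors are
    # rescaled by 1 / (1 - p_drop), as in inverted dropout.
    return sum(w * xi / (1 - p_drop)
               for w, xi in zip(weights, x)
               if random.random() > p_drop)

x = [1.0, 2.0, 3.0]
samples = [forward(x) for _ in range(1000)]
mean = statistics.mean(samples)           # predictive mean
uncertainty = statistics.stdev(samples)   # predictive spread from dropout
```

The spread of the sampled outputs serves as the model-uncertainty estimate.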

Uncertainty in Neural Networks: Bayesian Ensembling

This work proposes one modification to the usual ensembling process that does result in Bayesian behaviour: regularising parameters about values drawn from a prior distribution.
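The modification described here (anchored ensembling) can be sketched on a toy one-parameter regression: each ensemble member is regularised toward its own anchor drawn from the prior, rather than toward zero. The data, learning rate, and prior below are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

# Toy problem y = 2x fit with a single weight w by gradient descent.
data = [(x / 10.0, 2.0 * x / 10.0) for x in range(1, 11)]

def fit_member(anchor, lam=0.1, lr=0.1, steps=200):
    w = anchor
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        grad += 2 * lam * (w - anchor)   # regularise toward the anchor, not 0
        w -= lr * grad
    return w

# Anchors are drawn once from the prior N(0, 1); the resulting spread of the
# ensemble reflects both the prior and the data.
anchors = [random.gauss(0.0, 1.0) for _ in range(5)]
ensemble = [fit_member(a) for a in anchors]
mean_w = sum(ensemble) / len(ensemble)
```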

Weight Uncertainty in Neural Networks

This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
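The core sampling step in Bayes by Backprop is the reparameterisation w = mu + softplus(rho) * eps with eps ~ N(0, 1), which keeps weight sampling differentiable. A minimal sketch of that step alone (the values of `mu` and `rho` are illustrative assumptions; the full training loop with the KL term is omitted):

```python
import math
import random
import statistics

random.seed(0)

# Each weight is a distribution with learnable parameters mu and rho.
mu, rho = 0.2, -3.0

def sample_weight():
    sigma = math.log1p(math.exp(rho))   # softplus keeps sigma positive
    return mu + sigma * random.gauss(0.0, 1.0)

draws = [sample_weight() for _ in range(10000)]
spread = statistics.stdev(draws)        # learnt weight uncertainty
```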

Convolutional Autoencoder-Based Sensor Fault Classification

Simulations show that the proposed convolutional autoencoder-based sensor fault classification scheme improves the classification performance on sensor faults.

Stochastic Gradient Hamiltonian Monte Carlo

This work introduces a variant that uses second-order Langevin dynamics with a friction term counteracting the effects of the noisy gradient, maintaining the desired target distribution as the invariant distribution.
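The dynamics can be sketched on a toy one-dimensional potential U(theta) = theta^2 / 2, whose target distribution is a standard normal. This is a naive illustration (step size, friction, and gradient-noise level are assumptions; the paper's noise-estimate correction is omitted):

```python
import random
import statistics

random.seed(0)

eta, alpha = 0.01, 0.1   # step size and friction coefficient
theta, v = 0.0, 0.0
samples = []
for step in range(50000):
    # Deliberately noisy gradient of U(theta) = theta^2 / 2, mimicking a
    # stochastic minibatch gradient.
    noisy_grad = theta + random.gauss(0.0, 0.5)
    # Second-order update: the friction term -alpha*v counteracts the
    # injected noise so the target stays the invariant distribution.
    v += -eta * noisy_grad - alpha * v \
         + random.gauss(0.0, (2 * alpha * eta) ** 0.5)
    theta += v
    if step >= 1000:                     # discard burn-in
        samples.append(theta)
```

After burn-in, the samples should have mean near 0 and variance near 1, matching the target.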

Probabilistic machine learning and artificial intelligence

This review provides an introduction to the probabilistic framework and discusses some of the state-of-the-art advances in the field, namely probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

A generative neural network model for the quality prediction of work in progress products

Condition monitoring of a complex hydraulic system using multivariate statistics

A systematic approach to the automated training of condition monitoring systems for complex hydraulic systems is developed and evaluated, and the classification rate for random load cycles is improved by a distribution analysis of feature trends.