Corpus ID: 36663713

Variational Autoencoder based Anomaly Detection using Reconstruction Probability

@inproceedings{An2015VariationalAB,
  title={Variational Autoencoder based Anomaly Detection using Reconstruction Probability},
  author={Jinwon An and Sungzoon Cho},
  year={2015}
}
We propose an anomaly detection method using the reconstruction probability from the variational autoencoder. The reconstruction probability is a probabilistic measure that takes into account the variability of the distribution of variables. It has a theoretical foundation that makes it a more principled and objective anomaly score than the reconstruction error used by autoencoder- and principal-components-based anomaly detection methods. Experimental results…
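A minimal sketch of the reconstruction-probability score the abstract describes: draw latent samples from the encoder's posterior and average the decoder's log-likelihood of the input. The toy `encode`/`decode` functions, dimensions, and thresholding logic are illustrative assumptions standing in for a trained VAE, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained VAE (assumptions for illustration):
# encode(x) returns the mean and std of q(z|x);
# decode(z) returns the mean and std of p(x|z).
def encode(x):
    return 0.5 * x[:2], np.full(2, 0.1)

def decode(z):
    mu_x = np.concatenate([2.0 * z, z])   # 4-dim reconstruction mean
    return mu_x, np.full(4, 0.2)

def reconstruction_probability(x, n_samples=100):
    """Monte Carlo estimate of E_{z ~ q(z|x)} [ log p(x | z) ]."""
    mu_z, sigma_z = encode(x)
    total = 0.0
    for _ in range(n_samples):
        z = rng.normal(mu_z, sigma_z)
        mu_x, sigma_x = decode(z)
        # Diagonal-Gaussian log-likelihood of x under p(x|z)
        log_p = -0.5 * np.sum(
            np.log(2 * np.pi * sigma_x**2) + (x - mu_x) ** 2 / sigma_x**2
        )
        total += log_p
    return total / n_samples

# A point consistent with the toy decoder scores higher ("more normal") than
# a perturbed one; anomalies are points whose score falls below a threshold.
normal_x = np.array([1.0, -1.0, 0.5, -0.5])
anomalous_x = normal_x + 3.0
print(reconstruction_probability(normal_x) > reconstruction_probability(anomalous_x))
```

Unlike a plain reconstruction error, the score averages over the encoder's posterior and weights each dimension by the decoder's predicted variance, which is what makes it a probabilistic rather than purely geometric measure.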
Citations

Interpreting Rate-Distortion of Variational Autoencoder and Using Model Uncertainty for Anomaly Detection
This work revisits the VAE from the perspective of information theory to provide theoretical foundations for using the reconstruction error, and incorporates a practical model-uncertainty measure into the metric to enhance the effectiveness of detecting anomalies.
Estimation of Dimensions Contributing to Detected Anomalies with Variational Autoencoders
This paper proposes a novel algorithm for estimating the dimensions contributing to detected anomalies using variational autoencoders (VAEs). It is based on an approximate probabilistic model that considers the existence of anomalies in the data and, by maximizing the log-likelihood, estimates which dimensions contribute to determining data as anomalous.
Improved Variational Autoencoder Anomaly Detection in Time Series Data
This paper proposes a novel approach to anomaly detection based on the variational autoencoder with a Mish activation function and a negative log-likelihood loss function, and shows that the proposed method improves over existing methods.
Inverse-Transform AutoEncoder for Anomaly Detection
A selected set of transformations based on human priors is used to erase targeted information from input data; an inverse-transform autoencoder then embeds the erased information during restoration of the original data.
Iterative Energy-Based Projection on a Normal Data Manifold for Anomaly Localization
Autoencoder reconstructions are widely used for the task of unsupervised anomaly localization: an autoencoder trained on normal data is expected to be able to reconstruct only normal features…
Anomaly Detection with Conditional Variational Autoencoders
This work exploits the deep conditional variational autoencoder (CVAE), defining an original loss function together with a metric that targets hierarchically structured data anomaly detection, and shows the superior performance of the method on classical machine-learning benchmarks and on the target application.
Iterative energy-based projection on a normal data manifold for anomaly localization
This paper proposes a new approach for projecting anomalous data onto an autoencoder-learned normal data manifold, using gradient descent on an energy derived from the autoencoder's loss function, augmented with regularization terms that model priors on what constitutes the user-defined optimal projection.
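The projection scheme this entry describes can be illustrated with a minimal sketch: minimize the autoencoder's reconstruction loss plus a regularizer keeping the result near the input. The linear autoencoder, the manifold direction `v`, and the hyperparameters are all illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy linear "autoencoder" standing in for a trained model (an assumption):
# its reconstruction is the projection of x onto the 1-D "normal data
# manifold" spanned by the unit vector v.
v = np.array([0.6, 0.8])

def autoencoder(x):
    return (x @ v) * v

def project(x0, steps=200, lr=0.1, lam=0.05):
    """Gradient descent on E(x) = ||x - AE(x)||^2 + lam * ||x - x0||^2:
    the autoencoder's reconstruction loss plus a regularization term that
    keeps the projection close to the original input."""
    x = x0.copy()
    for _ in range(steps):
        r = autoencoder(x)
        # For this linear AE, the gradient of ||x - AE(x)||^2 reduces to
        # 2 * (x - r); a general model would need the decoder Jacobian.
        grad = 2 * (x - r) + 2 * lam * (x - x0)
        x = x - lr * grad
    return x

x0 = np.array([3.0, -1.0])   # anomalous input, off the manifold
xp = project(x0)
# The projection has a far smaller reconstruction error than the input did.
print(np.linalg.norm(xp - autoencoder(xp)) < 0.1 * np.linalg.norm(x0 - autoencoder(x0)))
```

The per-pixel difference between `x0` and its projection is what such methods use to localize the anomaly, rather than thresholding a single scalar score.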
DAS: Deep Autoencoder with Scoring Neural Network for Anomaly Detection
Many anomaly detection methods are unsupervised, i.e., they utilize only non-anomalous data for model training; data points that deviate from the majority pattern are deemed anomalies…
A Sparse Autoencoder Based Hyperspectral Anomaly Detection Algorithm Using Residual of Reconstruction Error
Experiments with the proposed sparse-autoencoder-based anomaly detector were conducted on the San Diego airport dataset and the Urban area dataset; the results show that the proposed method outperforms other representative detection methods.
RCA: A Deep Collaborative Autoencoder Approach for Anomaly Detection
  • Boyang Liu, Ding Wang, Kaixiang Lin, Pang-Ning Tan, Jiayu Zhou
  • Computer Science
  • IJCAI
  • 2021
A robust framework using collaborative autoencoders to jointly identify normal observations in the data while learning its feature representation is proposed; empirical results show the framework's resiliency to missing values compared with other baseline methods.

References

Showing 1–10 of 16 references
Structured Denoising Autoencoder for Fault Detection and Analysis
A new fault detection and analysis approach that can leverage incomplete prior information, called the structured denoising autoencoder (StrDA), is proposed; it does not require specific information and can perform well without overfitting.
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher-level representations.
Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
Higher Order Contractive Auto-Encoder
A novel regularizer for training an autoencoder for unsupervised feature extraction yields representations significantly better suited for initializing deep architectures than previously proposed approaches, beating state-of-the-art performance on a number of datasets.
Anomaly detection: A survey
This survey provides a structured and comprehensive overview of research on anomaly detection, grouping existing techniques into categories based on the underlying approach adopted by each technique.
Semi-supervised Learning with Deep Generative Models
It is shown that deep generative models and approximate Bayesian inference, exploiting recent advances in variational methods, can provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
Auto-encoder bottleneck features using deep belief networks
The experiments indicate that with the AE-BN architecture, pre-trained and deeper neural networks produce better AE-BN features, and system combination of the GMM/HMM baseline and AE-BN systems provides an additional 0.5% absolute improvement on a larger Broadcast News task.
Variational Bayesian Inference with Stochastic Search
This work presents an alternative algorithm based on stochastic optimization that allows direct optimization of the variational lower bound, demonstrated on two non-conjugate models: logistic regression and an approximation to the HDP.