Corpus ID: 229312114

OAAE: Adversarial Autoencoders for Novelty Detection in Multi-modal Normality Case via Orthogonalized Latent Space

@article{An2021OAAEAA,
  title={OAAE: Adversarial Autoencoders for Novelty Detection in Multi-modal Normality Case via Orthogonalized Latent Space},
  author={Sungkwon An and Jeonghoon Kim and Myung-joo Kang and Shahbaz Razaei and Xin Liu},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.02358}
}
Novelty detection using deep generative models such as autoencoders and generative adversarial networks mostly takes the image reconstruction error as the novelty score function. However, image data, being high dimensional, contains many features other than class information, which makes it hard for models to detect novel data. The problem gets harder in the multi-modal normality case. To address this challenge, we propose a new way of measuring the novelty score in multi-modal normality cases using…
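The reconstruction-error scoring the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's method: a closed-form linear (PCA) autoencoder stands in for the deep models, and the data dimensions, variable names, and `novelty_score` helper are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data lies near a 2-D subspace of a 10-D space.
latent = rng.normal(size=(500, 2))
basis = rng.normal(size=(2, 10))
x_train = latent @ basis + 0.05 * rng.normal(size=(500, 10))

# A linear autoencoder fit in closed form via SVD/PCA stands in for a
# deep autoencoder: encoding = projection onto the top-2 components.
mean = x_train.mean(axis=0)
_, _, vt = np.linalg.svd(x_train - mean, full_matrices=False)
components = vt[:2]  # tied encoder/decoder weights

def novelty_score(x):
    """Reconstruction error used as the novelty score."""
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return np.linalg.norm(x - x_hat, axis=-1)

inlier = latent[:1] @ basis                 # on the normal manifold
outlier = rng.normal(size=(1, 10)) * 3.0    # far from it
print(novelty_score(inlier), novelty_score(outlier))
```

An inlier reconstructs well and scores low; an outlier does not and scores high. The abstract's point is that in high dimensions, and especially with multi-modal normal data, this raw error conflates class information with other image features, which motivates scoring in a structured (here, orthogonalized) latent space instead.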
