Publications
Deep One-Class Classification
TLDR
This paper introduces a new anomaly detection method, Deep Support Vector Data Description (Deep SVDD), which is trained on an anomaly-detection-based objective, and shows the effectiveness of the method on the MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.
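The core idea behind Deep SVDD is to map data into a feature space and score anomalies by their squared distance to a center of the normal data's representations. A minimal toy sketch of that scoring rule (hypothetical code, not the paper's implementation, which learns the embedding with a deep network):

```python
# Toy sketch of the Deep SVDD scoring idea: anomalies lie far from the
# center c of the normal data's (here: already embedded) representations.

def center(points):
    """Mean of the embedded normal training points."""
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def score(x, c):
    """Anomaly score: squared Euclidean distance to the center."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

normal = [[0.9, 1.1], [1.0, 0.9], [1.1, 1.0]]
c = center(normal)
print(score([1.0, 1.0], c) < score([5.0, 5.0], c))  # prints True
```

In the paper, the embedding itself is trained to pull normal samples toward the center; the fixed embedding above only illustrates the distance-based score.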
lp-Norm Multiple Kernel Learning
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability.
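In lp-norm MKL, the kernel mixing weights are constrained to the lp-norm unit ball rather than the l1 ball, which yields non-sparse combinations for p > 1. A small sketch of the closed-form weight update used in this line of work, where each weight is proportional to a power of the per-kernel block norm ||w_m|| and then normalized to satisfy the lp constraint (the `norms` values below are hypothetical):

```python
def lp_mkl_weights(norms, p):
    """Kernel weights beta_m proportional to ||w_m||^(2/(p+1)),
    rescaled so that ||beta||_p = 1 (sketch of the lp-norm MKL update)."""
    raw = [n ** (2.0 / (p + 1)) for n in norms]
    z = sum(r ** p for r in raw) ** (1.0 / p)
    return [r / z for r in raw]

# Hypothetical per-kernel block norms; p = 2 gives a non-sparse solution.
beta = lp_mkl_weights([1.0, 2.0, 0.5], p=2.0)
print(all(b > 0 for b in beta))  # prints True: no kernel is zeroed out
```

As p approaches 1 the solution concentrates on fewer kernels; larger p spreads weight more uniformly, which is the non-sparsity the abstract refers to.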
Efficient and Accurate Lp-Norm Multiple Kernel Learning
TLDR
This work devises new insights on the connection between several existing MKL formulations, develops two efficient interleaved optimization strategies for arbitrary p > 1, and applies lp-norm MKL to real-world problems from computational biology, showing that non-sparse MKL achieves accuracies beyond the state of the art.
Deep Semi-Supervised Anomaly Detection
TLDR
This work presents Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection, and introduces an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation of the method.
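The information-theoretic framing above can be illustrated with plain Shannon entropy: a latent distribution concentrated on few states (normal data) has lower entropy than a spread-out one (anomalous data). A minimal sketch with two hypothetical discrete latent distributions:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(q * math.log(q) for q in p if q > 0)

# Hypothetical latent distributions: normal data is concentrated,
# anomalous data is close to uniform.
normal_latent = [0.9, 0.05, 0.05]
anomalous_latent = [0.34, 0.33, 0.33]
print(entropy(normal_latent) < entropy(anomalous_latent))  # prints True
```

This is only the entropy comparison the abstract appeals to; Deep SAD itself enforces the idea through its training objective on learned representations.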
Image Anomaly Detection with Generative Adversarial Networks
TLDR
This work proposes a novel approach to anomaly detection using generative adversarial networks, based on searching for a good representation of a given sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous.
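The latent-space search can be sketched with a toy one-dimensional "generator": for a test sample, search for the latent code whose generated output best reconstructs it, and use the residual reconstruction error as the anomaly score. Everything below (the generator, the grid search in place of gradient-based optimization) is a simplified stand-in, not the paper's architecture:

```python
# Toy generator mapping a latent scalar z onto a 1-D curve in data space.
def G(z):
    return (z, z * z)

def recon_error(x, z):
    gx = G(z)
    return (gx[0] - x[0]) ** 2 + (gx[1] - x[1]) ** 2

def anomaly_score(x, grid):
    """Best reconstruction error over a latent grid (stand-in for
    gradient-based latent search)."""
    return min(recon_error(x, z) for z in grid)

grid = [i / 100 for i in range(-200, 201)]
on_manifold = (0.5, 0.25)   # exactly representable as G(0.5)
off_manifold = (0.5, 3.0)   # no latent code reproduces it well
print(anomaly_score(on_manifold, grid) < anomaly_score(off_manifold, grid))
```

Samples the generator can reproduce get near-zero scores; samples off the learned data manifold retain a large residual and are flagged as anomalous.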
Toward Supervised Anomaly Detection
TLDR
It is argued that semi-supervised anomaly detection needs to be grounded in the unsupervised learning paradigm; a novel algorithm meeting this requirement is devised, and the optimization problem is shown to have a convex equivalent under relatively mild assumptions.
Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms
TLDR
A data-dependent generalization error bound with a logarithmic dependence on the class size is obtained, substantially improving on the linear dependence in existing data-dependent generalization analyses.
A Unifying View of Multiple Kernel Learning
TLDR
This paper presents a unifying optimization criterion for multiple kernel learning, shows how existing formulations are subsumed as special cases, and derives the criterion's dual representation, which is suitable for general smooth optimization algorithms.