Adversarial Robustness: Softmax versus Openmax

@article{Rozsa2017AdversarialRS,
  title={Adversarial Robustness: Softmax versus Openmax},
  author={Andras Rozsa and Manuel G{\"u}nther and Terrance E. Boult},
  journal={ArXiv},
  year={2017},
  volume={abs/1708.01697}
}
Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real-world applications. However, it was discovered that machine learning models, including the best-performing DNNs, suffer from a fundamental problem: they can unexpectedly and confidently misclassify examples formed by slightly perturbing otherwise correctly recognized inputs. Various approaches have been developed for efficiently generating these so-called adversarial examples, but those…
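The failure mode the abstract describes can be reproduced even on a toy model. The following is a minimal sketch, not taken from the paper: a two-feature logistic-regression classifier with assumed weights, attacked with a one-step gradient-sign perturbation that flips an otherwise correct prediction.

```python
# Illustrative only: a tiny logistic-regression classifier with assumed
# weights, attacked by a one-step gradient-sign perturbation of the input.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([4.0, -6.0]), 0.0      # toy "trained" weights (assumed)
x, y = np.array([1.0, 0.55]), 1        # input correctly classified as class 1

p = sigmoid(w @ x + b)                  # P(class 1 | x) ~ 0.67
grad_x = (p - y) * w                    # gradient of cross-entropy w.r.t. x
eps = 0.2
x_adv = x + eps * np.sign(grad_x)       # small, sign-based perturbation

print(f"clean:     P(class 1) = {p:.2f}")
print(f"perturbed: P(class 1) = {sigmoid(w @ x_adv + b):.2f}")  # ~0.21
```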
Disentangling Adversarial Robustness and Generalization
TLDR
This work assumes an underlying, low-dimensional data manifold and shows that regular robustness and generalization are not necessarily contradicting goals, which implies that both robust and accurate models are possible.
Adversarial Attack on Deep Learning-Based Splice Localization
TLDR
This work demonstrates, on three non-end-to-end deep learning-based splice localization tools, that hiding image manipulations is feasible via adversarial attacks, and finds that the formed adversarial perturbations can be transferable among the tools with respect to the deterioration of their localization performance.
Adversarial Motorial Prototype Framework for Open Set Recognition
TLDR
An upgraded version of the AMPF, AMPF++, is proposed, which adds many more generated unknown samples to the training phase and can further improve the model's differential mapping ability for known and unknown classes through the adversarial motion of the margin-constraint radius.
[Figure: FONTS true manifold; FONTS, EMNIST, F-MNIST, and CelebA learned manifolds]
Obtaining deep networks that are robust against adversarial examples and generalize well is an open problem. A recent hypothesis [102, 95] even states that both robust and accurate models are …
MMF: A loss extension for feature learning in open set recognition
TLDR
This paper proposes an add-on extension for loss functions in neural networks to address the open set recognition problem: an extension that can be incorporated into different loss functions to find more discriminative representations.
Learning and the Unknown: Surveying Steps toward Open World Recognition
TLDR
This paper summarizes the state of the art, core ideas, and results and explains why, despite the efforts to date, the current techniques are genuinely insufficient for handling unknown inputs, especially for deep networks.
Recent Advances in Open Set Recognition: A Survey
TLDR
This paper provides a comprehensive survey of existing open set recognition techniques, covering aspects ranging from related definitions, model representations, datasets, and evaluation criteria to algorithm comparisons, highlighting the limitations of existing approaches and pointing out promising directions for subsequent research.
Open Set Learning with Counterfactual Images
TLDR
This work introduces a dataset augmentation technique based on generative adversarial networks that generates examples that are close to training set examples yet do not belong to any training category, and that outperforms existing open set recognition algorithms on a selection of image classification tasks.
Collective Decision for Open Set Recognition
TLDR
By slightly modifying the hierarchical Dirichlet process (HDP), a novel collective/batch decision strategy is introduced with the aim of extending existing OSR to new-class discovery while considering correlations among the testing instances.
A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges
TLDR
This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in the respective areas while identifying their commonalities, and it discusses and sheds light on future lines of research, intending to bring these fields closer together.

References

Showing 1-10 of 25 references
Adversarial Diversity and Hard Positive Generation
TLDR
A new psychometric perceptual adversarial similarity score (PASS) measure for quantifying adversarial images is introduced, the notion of hard positive generation is defined, and a novel hot/cold approach for adversarial example generation is presented, which provides multiple possible adversarial perturbations for every single image.
Adversarial Machine Learning at Scale
TLDR
This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.
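A hedged sketch of the training recipe this summary refers to: each batch mixes clean inputs with single-step FGSM versions of them, and the attack uses the model's own predictions rather than the true labels, which is the paper's remedy for label leaking. The model, optimizer, and epsilon below are placeholders, not the paper's ImageNet setup.

```python
# Sketch of one adversarial-training step in the spirit of the paper:
# half clean inputs, half single-step FGSM adversarial versions of them.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    with torch.no_grad():
        y_pred = model(x).argmax(dim=1)   # use predictions, not y, in the
                                          # attack to avoid "label leaking"
    loss = F.cross_entropy(model(x), y_pred)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    # Train on a 50/50 mix of clean and adversarial inputs.
    optimizer.zero_grad()
    mixed_x = torch.cat([x.detach(), x_adv])
    mixed_y = torch.cat([y, y])
    total_loss = F.cross_entropy(model(mixed_x), mixed_y)
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```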
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this is supported by new quantitative results, while also giving the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
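For reference, the single-step attack this paper introduces, the fast gradient sign method (FGSM), perturbs the input along the sign of the loss gradient:

```latex
% FGSM: J is the training loss, theta the parameters, (x, y) the input
% and label, and epsilon the perturbation budget.
\eta = \epsilon \, \operatorname{sign}\!\big(\nabla_x J(\theta, x, y)\big),
\qquad
x_{\mathrm{adv}} = x + \eta
```

Because the perturbation is bounded by epsilon in the max norm, it stays visually negligible while, under the paper's linearity argument, still moving the loss sharply.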
Adversarial Manipulation of Deep Representations
TLDR
While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to that of a different image, one from a different class, bearing little if any apparent similarity to the input; such adversaries appear generic and consistent with the space of natural images.
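The attack can be sketched as a small optimization problem. The following illustrative PyTorch version, where phi, the step count, and the budget eps are assumptions rather than the paper's exact settings, minimizes the distance between the internal representation of the perturbed source image and that of a guide image from another class.

```python
# Rough "feature adversary" sketch: perturb x_src so that the internal
# representation phi(.) moves close to that of a guide image.
import torch

def feature_adversary(phi, x_src, x_guide, eps=10 / 255, steps=100, lr=0.01):
    target = phi(x_guide).detach()
    delta = torch.zeros_like(x_src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (phi(x_src + delta) - target).pow(2).sum()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the image stays perceptually close.
        delta.data.clamp_(-eps, eps)
    return (x_src + delta).detach()
```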
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
TLDR
This work takes convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and, via evolutionary algorithms or gradient ascent, finds images that DNNs label with high confidence as belonging to each dataset class; the resulting fooling images raise questions about the generality of DNN computer vision.
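A rough sketch of the gradient-ascent variant: starting from noise, repeatedly increase the network's confidence in one class until it is highly confident about an image that looks like nothing. The model and hyperparameters below are assumptions for illustration.

```python
# Illustrative "fooling image" generation by gradient ascent from noise.
import torch
import torch.nn.functional as F

def fooling_image(model, target_class, shape=(1, 3, 224, 224),
                  steps=200, lr=0.05):
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        confidence = F.softmax(model(x), dim=1)[0, target_class]
        (-confidence).backward()        # ascend on the target-class score
        opt.step()
        x.data.clamp_(0, 1)             # stay in the valid image range
    return x.detach()
```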
Towards Open Set Deep Networks
  • Abhijit Bendale, T. Boult
  • Computer Science
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
TLDR
The proposed OpenMax model significantly outperforms basic deep networks, as well as deep networks with thresholding of SoftMax probabilities, in open set recognition accuracy, and it is proved that the OpenMax concept provides bounded open space risk, thereby formally providing an open set recognition solution.
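A simplified sketch of the OpenMax recalibration follows. It is not the paper's exact algorithm (which fits Weibull models with libMR on distances to per-class mean activation vectors); the tail size, alpha, and distance choice here are assumptions.

```python
# Simplified OpenMax-style recalibration: per class, fit a Weibull to the
# tail of distances between training activation vectors and the class mean;
# at test time, down-weight the top activations by the Weibull CDF and
# route the removed mass to an extra "unknown" class.
import numpy as np
from scipy.stats import weibull_min

def fit_weibull_tails(activations, labels, mavs, tail_size=20):
    # activations: (n, K) logit vectors of correctly classified training
    # samples; mavs: (K, K) per-class mean activation vectors.
    models = {}
    for c in range(len(mavs)):
        dists = np.linalg.norm(activations[labels == c] - mavs[c], axis=1)
        models[c] = weibull_min.fit(np.sort(dists)[-tail_size:], floc=0)
    return models

def openmax_probabilities(av, mavs, models, alpha=3):
    # av: activation (logit) vector for one test input, shape (K,).
    revised, unknown = av.astype(float).copy(), 0.0
    for rank, c in enumerate(np.argsort(av)[::-1][:alpha]):
        d = np.linalg.norm(av - mavs[c])
        w = weibull_min.cdf(d, *models[c]) * (alpha - rank) / alpha
        revised[c] = av[c] * (1 - w)        # keep part of the activation
        unknown += av[c] * w                # shift the rest to "unknown"
    logits = np.append(revised, unknown)
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # last entry: P(unknown)
```

Thresholding the returned unknown probability is what gives the rejection behavior compared against plain SoftMax in the paper above.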
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
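The core building block is small enough to state directly. Below is a minimal residual block in PyTorch, with illustrative channel sizes, showing the identity shortcut that makes very deep stacks easier to optimize.

```python
# Minimal residual block: the layers learn a residual F(x) that is added
# back onto the identity shortcut before the final activation.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(x + residual)   # identity shortcut + learned residual
```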
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
TLDR
DeCAF, an open-source implementation of deep convolutional activation features, is released along with all associated network parameters, enabling vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
CNN Features Off-the-Shelf: An Astounding Baseline for Recognition
TLDR
A series of experiments conducted for different recognition tasks, using the publicly available code and model of the OverFeat network trained to perform object classification on ILSVRC13, suggests that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
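The recipe shared by this paper and DeCAF is easy to sketch: freeze a pretrained CNN, treat its penultimate activations as features, and fit a linear classifier on top. The torchvision backbone below stands in for OverFeat, which is the network the paper actually used.

```python
# Off-the-shelf features sketch: frozen pretrained CNN + linear classifier.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()        # expose 512-d penultimate features
backbone.eval()

@torch.no_grad()
def extract_features(images):            # images: (n, 3, 224, 224) tensor
    return backbone(images).numpy()

# Hypothetical usage with preprocessed tensors train_x/test_x and labels:
# clf = LogisticRegression(max_iter=1000).fit(extract_features(train_x), train_y)
# accuracy = clf.score(extract_features(test_x), test_y)
```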