Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

@article{Nguyen2015DeepNN,
  title={Deep neural networks are easily fooled: High confidence predictions for unrecognizable images},
  author={Anh M Nguyen and Jason Yosinski and Jeff Clune},
  journal={2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015},
  pages={427-436}
}
Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. [...] Key Result: Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
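For concreteness, the gradient-based route to such fooling images can be sketched in a few lines of PyTorch: start from noise and optimize the input itself so a pretrained classifier assigns high confidence to a chosen class. The model, learning rate, and iteration count below are illustrative assumptions, not the authors' exact setup (the paper also uses evolutionary algorithms with indirect encodings).

import torch
import torch.nn.functional as F
import torchvision.models as models

# Pretrained ImageNet classifier (model choice is an assumption, not the paper's setup).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
target_class = 1  # arbitrary ImageNet class index, for illustration only

# Start from random noise and optimize the input, not the weights.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.5)

for _ in range(200):
    optimizer.zero_grad()
    logits = model(x)
    # Maximize the target logit; a small L2 penalty keeps pixel values bounded.
    loss = -logits[0, target_class] + 1e-4 * x.pow(2).sum()
    loss.backward()
    optimizer.step()

confidence = F.softmax(model(x), dim=1)[0, target_class].item()
print(f"target-class confidence: {confidence:.3f}")  # typically very high, yet x is unrecognizable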
CIFAR10 to Compare Visual Recognition Performance between Deep Neural Networks and Humans
TLDR
CIFAR10-based evaluations show that recent CNNs recognize objects very efficiently but remain far from human-level generalization, and a detailed investigation using multiple levels of difficulty reveals that images that are easy for humans may not be easy for deep neural networks.
Confusing Deep Convolution Networks by Relabelling
TLDR
This paper presents a straightforward way to perturb an image in such a way as to cause it to acquire any other label from within the dataset while leaving this perturbed image visually indistinguishable from the original.
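A generic version of such a targeted perturbation can be sketched as an iterative gradient attack in PyTorch. This is an illustration under assumed inputs (a trained `model` and a normalized `image` tensor of shape 1xCxHxW), not a reproduction of the cited paper's specific procedure.

import torch
import torch.nn.functional as F

def targeted_perturb(model, image, target_label, eps=0.01, alpha=0.002, steps=20):
    """Nudge `image` toward `target_label` while staying within an eps-ball of the original."""
    x_adv = image.clone().detach()
    target = torch.tensor([target_label])
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # step toward the target class
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # keep the change visually small
        x_adv = x_adv.detach()
    return x_adv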
Deep Neural Networks Do Not Recognize Negative Images
TLDR
It is suggested that negative images can be thought of as “semantic adversarial examples”: transformed inputs that semantically represent the same objects, yet are not classified correctly by the model.
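The transformation in question is an ordinary photometric inversion, as in the following NumPy sketch (assuming an 8-bit RGB array):

import numpy as np

def to_negative(image_uint8: np.ndarray) -> np.ndarray:
    """Invert pixel intensities of an HxWx3 uint8 image; object identity is unchanged for humans."""
    return 255 - image_uint8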
Towards Open Set Deep Networks
  • Abhijit Bendale, T. Boult
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
TLDR
The proposed OpenMax model significantly outperforms basic deep networks, as well as deep networks with thresholding of SoftMax probabilities, in open set recognition accuracy, and the OpenMax concept is proved to provide bounded open space risk, thereby formally providing an open set recognition solution.
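OpenMax itself recalibrates penultimate-layer activations with per-class Weibull models, which is not reproduced here; the sketch below shows only the simpler SoftMax-thresholding baseline mentioned in the TLDR, where low-confidence inputs are rejected as unknown.

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_open_set(logits: np.ndarray, threshold: float = 0.9):
    """Return the predicted class index, or "unknown" if the top probability is below threshold."""
    probs = softmax(logits)
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else "unknown"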
Sparse Fooling Images: Fooling Machine Perception through Unrecognizable Images
TLDR
A new class of fooling images is proposed: sparse fooling images (SFIs), single-color images with a small number of altered pixels that are not recognizable as natural objects yet are classified to certain classes with high confidence scores.
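The construction can be illustrated schematically; the base color, pixel count, and pixel values below are arbitrary placeholders, whereas the cited work searches for them so that a classifier assigns a chosen label with high confidence.

import numpy as np

rng = np.random.default_rng(0)

def make_sparse_image(height=224, width=224, base_gray=128, n_pixels=20):
    """A single-color image with a handful of randomly altered pixels."""
    img = np.full((height, width, 3), base_gray, dtype=np.uint8)
    rows = rng.integers(0, height, size=n_pixels)
    cols = rng.integers(0, width, size=n_pixels)
    img[rows, cols] = rng.integers(0, 256, size=(n_pixels, 3), dtype=np.uint8)  # altered pixels
    return img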
Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder
TLDR
It is argued that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances, and should be controlled in the image datasets used in vision research.
Identifying Simple Shapes to Classify the Big Picture
TLDR
This work develops a novel DNN-LCS system in which the DNN extracts features from pixels and the LCS classifies objects from these features with clear decision boundaries; results show that the system can explain its classification decisions on curated image data.
A Taxonomy of Deep Convolutional Neural Nets for Computer Vision
TLDR
A recipe-style survey of one form of deep network widely used in computer vision, the convolutional neural network (CNN), intended to serve as a guide, particularly for novice practitioners who plan to use deep-learning techniques for computer vision.
On the Limitation of Convolutional Neural Networks in Recognizing Negative Images
TLDR
Whether CNNs are capable of learning the semantics of their training data is examined, and it is conjectured that current training methods do not effectively train models to generalize the underlying concepts.
...
...

References

Showing 1-10 of 64 references
How transferable are features in deep neural networks?
TLDR
This paper quantifies the generality versus specificity of neurons in each layer of a deep convolutional neural network and reports a few surprising results, including that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
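The practical recipe implied by this result, reusing early-layer features and retraining a new head on the target dataset, can be sketched in PyTorch; the ResNet-18 backbone and layer split below are assumptions for illustration, not the paper's AlexNet-based protocol.

import torch
import torch.nn as nn
import torchvision.models as models

num_target_classes = 10  # hypothetical target dataset

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                                   # freeze the transferred features

model.fc = nn.Linear(model.fc.in_features, num_target_classes)    # new, trainable classification head

# Fine-tune only the new head; unfreezing later layers as well often helps further.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)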
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
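The "dropout" regularizer mentioned here randomly zeroes activations during training. A minimal sketch of a classifier head using it in PyTorch; the layer sizes mirror AlexNet's 4096-unit fully connected layers but are illustrative here.

import torch.nn as nn

classifier_head = nn.Sequential(
    nn.Dropout(p=0.5),         # randomly zero half of the activations at training time
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 1000),     # 1000 ImageNet classes
)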
Visualizing Higher-Layer Features of a Deep Network
TLDR
This paper contrasts and compares several techniques applied to Stacked Denoising Autoencoders and Deep Belief Networks trained on several vision datasets, and shows that good qualitative interpretations of the high-level features represented by such models are possible at the unit level.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014).
DeepFace: Closing the Gap to Human-Level Performance in Face Verification
TLDR
This work revisits both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network.
Why Does Unsupervised Pre-training Help Deep Learning?
TLDR
The results suggest that unsupervised pre-training guides learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.
Learning Deep Architectures for AI
TLDR
The motivations and principles behind learning algorithms for deep architectures are discussed, in particular those exploiting unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks for constructing deeper models such as Deep Belief Networks.
ImageNet Large Scale Visual Recognition Challenge
TLDR
The creation of this benchmark dataset and the advances in object recognition that it has made possible are described, and state-of-the-art computer vision accuracy is compared with human accuracy.
Learning multiple layers of representation
...
...