Corpus ID: 604334

Intriguing properties of neural networks

@article{Szegedy2014IntriguingPO,
  title={Intriguing properties of neural networks},
  author={Christian Szegedy and Wojciech Zaremba and Ilya Sutskever and Joan Bruna and Dumitru Erhan and Ian J. Goodfellow and Rob Fergus},
  journal={CoRR},
  year={2014},
  volume={abs/1312.6199}
}
Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. [...] Key Result: In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on a different subset of the dataset, to misclassify the same input.
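The paper itself finds the minimal perturbation with box-constrained L-BFGS; the sketch below is only an illustration of the phenomenon, using a single signed-gradient step (the simplification later popularized as FGSM by Goodfellow et al.). The PyTorch classifier `model`, input batch `x` with pixels in [0, 1], and label tensor `y` are assumptions for the sketch, not artifacts of the paper.

```python
import torch
import torch.nn.functional as F

def adversarial_example(model, x, y, eps=0.01):
    """Sketch: perturb x so the classifier's loss on label y increases.

    The original paper solves a box-constrained L-BFGS problem for the
    minimal perturbation; a single signed-gradient step (FGSM) is used
    here only to illustrate the idea.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp so the
    # result stays a valid image in [0, 1].
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

Feeding the same perturbed input to a second network trained on a different subset of the data frequently reproduces the misclassification, which is the transfer effect described in the key result above.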
Intriguing Properties of Learned Representations (2018)
On the Spectral Bias of Neural Networks
TLDR: This work shows that deep ReLU networks are biased towards low-frequency functions, and studies the robustness of the frequency components with respect to parameter perturbation, developing the intuition that the parameters must be finely tuned to express high-frequency functions.
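As a toy illustration of this bias (my construction, not the paper's experimental setup), one can fit a small ReLU network to the sum of a low- and a high-frequency sinusoid and watch the low-frequency component dominate the output spectrum first:

```python
import numpy as np
import torch
import torch.nn as nn

# Target: sum of a low- (2 cycles) and a high-frequency (32 cycles)
# sinusoid on [0, 1], sampled at 256 points.
x = torch.linspace(0, 1, 256).unsqueeze(1)
y = torch.sin(2 * np.pi * 2 * x) + torch.sin(2 * np.pi * 32 * x)

net = nn.Sequential(nn.Linear(1, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def spectrum(signal):
    # Magnitude of the real FFT; bin k = k cycles over [0, 1].
    return np.abs(np.fft.rfft(signal.detach().numpy().ravel()))

for step in range(2001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        s = spectrum(net(x))
        print(f"step {step:4d}  low-freq bin 2: {s[2]:6.1f}  "
              f"high-freq bin 32: {s[32]:6.1f}")
```

Typically the bin-2 magnitude approaches its target amplitude (about N/2 = 128 here) well before bin 32 does, consistent with the low-frequency bias the paper describes.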
Incorporating Prototype Theory in Convolutional Neural Networks
TLDR: This work proposes computational models to improve the generalization capacity of CNNs by considering how typical a training image looks, and shows that incorporating a typicality measure can improve classification results on a new set of images by a large margin.
PERCEPTUAL DEEP NEURAL NETWORKS: ADVER- (2020)
Adversarial examples have shown that, albeit highly accurate, models learned by machines, unlike humans, have many weaknesses. However, human perception is also fundamentally different…
Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks
TLDR: The generalization error of a model trained on representative data can be bounded by its feature robustness, which depends on a novel flatness measure; this implies robustness of the network to changes in the input and in the hidden layers.
Dense Associative Memory Is Robust to Adversarial Inputs
TLDR: DAMs with higher-order energy functions are more robust to adversarial and rubbish inputs than DNNs with rectified linear units, and open up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks.
Exploring LOTS in Deep Neural Networks
TLDR: The layerwise origin-target synthesis (LOTS) is introduced; it can serve multiple purposes and can be used as a visualization technique that gives insight into the function of any intermediate feature layer by showing the network's notion of a particular input at that layer.
Towards Distortion-Predictable Embedding of Neural Networks
TLDR: This work proposes a new loss function, derived from the contrastive loss, that creates models with more predictable mappings and also quantifies distortions, taking a step towards embeddings where the features of distorted inputs are related and can be derived from each other by the intensity of the distortion.
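For reference, the standard contrastive loss (Hadsell et al.) that the proposed loss is derived from can be sketched as follows; the paper's actual modification, which ties embedding distance to distortion intensity, is not reproduced here:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same, margin=1.0):
    """Baseline contrastive loss: pull matching embedding pairs
    together, push non-matching pairs at least `margin` apart.

    z1, z2: embedding batches; same: 1.0 for matching pairs, else 0.0.
    """
    d = F.pairwise_distance(z1, z2)
    pos = same * d.pow(2)                      # attract matching pairs
    neg = (1 - same) * F.relu(margin - d).pow(2)  # repel others
    return (pos + neg).mean()
```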
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
TLDR: This work takes convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and, with evolutionary algorithms or gradient ascent, finds images that DNNs label with high confidence as belonging to each dataset class; the resulting fooling images raise questions about the generality of DNN computer vision.
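The gradient-ascent variant of this procedure is easy to sketch: start from noise and maximize the softmax confidence assigned to one class. The evolutionary-algorithm variant is not shown, and the trained classifier `model` is an assumption of the sketch:

```python
import torch
import torch.nn.functional as F

def fooling_image(model, target_class, shape=(1, 3, 224, 224),
                  steps=200, lr=0.1):
    """Sketch of a 'fooling image': unrecognizable noise optimized so a
    trained classifier assigns high confidence to target_class."""
    img = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        conf = F.softmax(model(img), dim=1)[0, target_class]
        (-conf).backward()          # ascend on the class confidence
        opt.step()
        with torch.no_grad():
            img.clamp_(0.0, 1.0)    # keep pixel values valid
    return img.detach()
```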
On the Blindspots of Convolutional Networks
TLDR: This work demonstrates that convolutional networks have limitations that may, in some cases, hinder them from learning properties of the data that are easily recognizable by traditional, less demanding models.

References

Showing 1-10 of 19 references
Measuring Invariances in Deep Networks
TLDR: A number of empirical tests are proposed that directly measure the degree to which learned features are invariant to different input transformations; stacked autoencoders learn modestly more invariant features with depth when trained on natural images, while convolutional deep belief networks learn substantially more invariant features in each layer.
Visualizing Higher-Layer Features of a Deep Network
TLDR: This paper contrasts and compares several techniques applied to Stacked Denoising Autoencoders and Deep Belief Networks trained on several vision datasets, and shows that good qualitative interpretations of the high-level features represented by such models are possible at the unit level.
Learning Deep Architectures for AI
TLDR: The motivations and principles regarding learning algorithms for deep architectures are discussed, in particular those exploiting as building blocks the unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Building high-level features using large scale unsupervised learning
TLDR: Contrary to what appears to be a widely held intuition, the experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.
Visualizing and Understanding Convolutional Networks
TLDR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it finds model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
ImageNet classification with deep convolutional neural networks
TLDR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into 1000 different classes; it employed a recently developed regularization method called "dropout" that proved to be very effective.
A discriminatively trained, multiscale, deformable part model
TLDR: A discriminatively trained, multiscale, deformable part model for object detection, which achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge and outperforms the best results in the 2007 challenge in ten out of twenty categories.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation
TLDR: This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012, achieving a mAP of 53.3%.
Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups
TLDR: This article provides an overview of progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.
Efficient Estimation of Word Representations in Vector Space
TLDR: Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed, and it is shown that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.