Visualizing and Understanding Convolutional Networks

@inproceedings{Zeiler2014VisualizingAU,
  title={Visualizing and Understanding Convolutional Networks},
  author={Matthew D. Zeiler and Rob Fergus},
  booktitle={ECCV},
  year={2014}
}
Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al. [18]). [...] Key Method: Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax…
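One of the simpler diagnostics in this family of visualization-driven analyses is an occlusion-sensitivity map: slide a patch over the input and watch how the classifier's confidence in the true class changes. The sketch below is only an illustration of that idea, not the authors' deconvnet code; the model, patch size, and stride are assumptions, and it presumes a recent torchvision.

```python
# Minimal occlusion-sensitivity sketch (illustrative, not the authors' code):
# slide a zeroed patch over the input and record how much the probability of
# the target class drops; large drops mark regions the model relies on.
import torch
import torchvision.models as models

model = models.alexnet(weights="IMAGENET1K_V1").eval()

def occlusion_map(image, target_class, patch=32, stride=16):
    """image: (1, 3, 224, 224) tensor, already normalized for the model."""
    _, _, H, W = image.shape
    with torch.no_grad():
        baseline = torch.softmax(model(image), dim=1)[0, target_class].item()
    heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, :, y:y + patch, x:x + patch] = 0.0  # blank out one patch
            with torch.no_grad():
                p = torch.softmax(model(occluded), dim=1)[0, target_class].item()
            heat[i, j] = baseline - p  # larger drop => more important region
    return heat

# Hypothetical usage: heat = occlusion_map(preprocessed_img, target_class=285)
```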
A Taxonomy and Library for Visualizing Learned Features in Convolutional Neural Networks
TLDR
The FeatureVis library for MatConvNet is introduced: an extendable, easy-to-use, open-source library for visualizing CNNs that contains implementations from each of the three main classes of visualization methods and serves as a useful tool for better understanding the features learned by intermediate layers.

Understanding Neural Networks Through Deep Visualization
TLDR
This work introduces several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations of convolutional neural networks.
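The visualizations in question are produced by regularized activation maximization: gradient ascent on the input so that a chosen unit responds strongly, with regularizers keeping the image interpretable. The sketch below uses only a simple L2-decay regularizer; the model, unit index, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Sketch of regularized activation maximization (illustrative assumptions only):
# gradient ascent on the input to maximize one class logit, with L2 decay as a
# basic regularizer that keeps pixel values from drifting to extremes.
import torch
import torchvision.models as models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image is optimized

def visualize_class(unit=130, steps=200, lr=1.0, l2_decay=0.01):
    img = (0.01 * torch.randn(1, 3, 224, 224)).requires_grad_(True)
    for _ in range(steps):
        logit = model(img)[0, unit]
        logit.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)  # normalized ascent step
            img *= (1.0 - l2_decay)                          # L2 decay regularizer
            img.grad.zero_()
    return img.detach()
```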
Analysis and Optimization of Convolutional Neural Network Architectures
TLDR
A model with only one million learned parameters, for an input size of 32x32x3 and 100 classes, was developed; it beats the state of the art on the benchmark datasets Asirra, GTSRB, HASYv2, and STL-10.
Fusing Deep Convolutional Networks for Large Scale Visual Concept Classification
  • H. Ergun, M. Sert
  • 2016 IEEE Second International Conference on Multimedia Big Data (BigMM), 2016
TLDR
This study investigates various aspects of convolutional neural networks (CNNs) from the big data perspective, and proposes efficient fusion mechanisms for both single and multiple network models.
Evaluating the Visualization of What a Deep Neural Network Has Learned
TLDR
A general methodology based on region perturbation is presented for evaluating ordered collections of pixels such as heatmaps; it shows that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.
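The region-perturbation idea can be summarized in code: repeatedly remove the regions a heatmap ranks as most relevant and track how quickly the classifier's score falls, since a faster drop suggests a more faithful explanation. The sketch below makes simplifying assumptions (non-overlapping square patches, zeroing as "removal") and is not the paper's exact protocol.

```python
# Sketch of a region-perturbation (pixel-flipping style) evaluation, under
# simplifying assumptions: non-overlapping square patches ranked by mean
# heatmap relevance, with "removal" implemented as zeroing the pixels.
import torch

def perturbation_curve(model, image, heatmap, target_class, patch=16, steps=30):
    """image: (1, 3, H, W) tensor; heatmap: (H, W) relevance scores."""
    _, _, H, W = image.shape
    # Rank patches by average relevance, most relevant first.
    scores = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            scores.append((heatmap[y:y + patch, x:x + patch].mean().item(), y, x))
    scores.sort(reverse=True)

    perturbed = image.clone()
    curve = []
    for _, y, x in scores[:steps]:
        perturbed[:, :, y:y + patch, x:x + patch] = 0.0
        with torch.no_grad():
            curve.append(torch.softmax(model(perturbed), 1)[0, target_class].item())
    return curve  # steeper decline => the heatmap ranked truly relevant regions first
```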
Tiny ImageNet Visual Recognition Challenge
TLDR
This work tries to train a relatively deep network with a large number of filters per convolutional layer to achieve high accuracy on the test dataset, and trains another classifier, slightly shallower and with fewer parameters, several times to build a dataset that allows for a thorough study of ensemble techniques.
Understanding Convolutional Neural Networks in Terms of Category-Level Attributes
TLDR
This study conjectures that the learned representation can be interpreted as category-level attributes with good properties, and performs zero-shot learning by regarding the activation pattern of upper layers as attributes describing the categories.
Visualization Methods for Image Transformation Convolutional Neural Networks
TLDR
This paper uses the knowledge obtained from the visualization of an image restoration CNN to improve the architecture’s efficiency with no significant degradation of its performance.
Visualization of feature evolution during convolutional neural network training
TLDR
This work elucidates the process by which CNNs learn effective task-specific features by applying recent deep visualization techniques during different stages of the training process and shows a new facet of a particularly vexing machine learning pitfall: overfitting.

References

Showing 1–10 of 37 references
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
TLDR
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.
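A minimal version of the gradient-based saliency map described there takes the gradient of the class score with respect to the input and reduces it over color channels. The sketch below is an assumed rendering of that recipe; the model choice is arbitrary.

```python
# Sketch of a gradient-based saliency map: backpropagate the class score to the
# input and take the per-pixel maximum absolute gradient over color channels.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def saliency(image, target_class):
    """image: (1, 3, H, W) tensor, already normalized for the model."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=1).values[0]  # (H, W) saliency map
```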
Visualizing Higher-Layer Features of a Deep Network
TLDR
This paper contrasts and compares several techniques applied to Stacked Denoising Autoencoders and Deep Belief Networks, trained on several vision datasets, and shows that good qualitative interpretations of the high-level features represented by such models are possible at the unit level.
Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks
TLDR
This work designs a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset, and shows that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification.
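The transfer recipe here (and in the DeCAF entry that follows, which relies on essentially the same idea) reduces to extracting a fixed mid-level representation from an ImageNet-trained network and training only a small classifier on top. The sketch below is an assumed modern PyTorch rendering, not the original pipelines; the backbone, feature size, and class count are illustrative.

```python
# Sketch of mid-level feature transfer (assumed modern rendering, not the
# original pipeline): freeze an ImageNet-trained backbone, use its penultimate
# activations as features, and train only a linear classifier on top.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()          # expose the 512-d penultimate features
for p in backbone.parameters():
    p.requires_grad_(False)          # keep the transferred layers fixed
backbone.eval()

num_target_classes = 20              # e.g. PASCAL VOC object classes (assumption)
classifier = nn.Linear(512, num_target_classes)
optimizer = torch.optim.SGD(classifier.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():
        feats = backbone(images)     # (B, 512) transferred representation
    logits = classifier(feats)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```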
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
TLDR
DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, are released to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.
Some Improvements on Deep Convolutional Neural Network Based Image Classification
TLDR
This paper summarizes the entry in the ImageNet Large Scale Visual Recognition Challenge 2013, which achieved a top-5 classification error rate representing over a 20% relative improvement on the previous year's winner.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation
TLDR
This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.
Efficient learning of sparse, distributed, convolutional feature representations for object recognition
TLDR
This is the first work showing that RBMs can be trained with almost no hyperparameter tuning to provide classification performance similar to or significantly better than mixture models (e.g., Gaussian mixture models).
Adaptive deconvolutional networks for mid and high level feature learning
TLDR
A hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling, relying on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches.
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
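The "dropout" regularizer mentioned here randomly zeroes hidden units during training and uses the full network at test time. Below is a minimal, generic sketch of how it is typically placed in the fully connected head of such a network; this is not the AlexNet architecture itself, and the layer sizes are assumptions.

```python
# Minimal dropout sketch (generic, not the AlexNet architecture itself): units
# in the fully connected layers are randomly zeroed during training only.
import torch.nn as nn

classifier_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(p=0.5),               # active in .train(), disabled in .eval()
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 1000),
)
```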
Extracting and composing robust features with denoising autoencoders
TLDR
This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
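That training principle, corrupt the input and learn to reconstruct the clean version, can be sketched in a few lines; the architecture, noise level, and loss below are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of a denoising autoencoder training step (illustrative architecture and
# corruption level): randomly zero some inputs, then reconstruct the clean
# original so the learned representation becomes robust to partial corruption.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),    # encoder
    nn.Linear(256, 784), nn.Sigmoid()  # decoder
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def denoising_step(x_clean, corruption=0.3):
    """x_clean: (B, 784) inputs scaled to [0, 1]."""
    mask = (torch.rand_like(x_clean) > corruption).float()
    x_corrupted = x_clean * mask                       # randomly zero some inputs
    x_reconstructed = autoencoder(x_corrupted)
    loss = nn.functional.binary_cross_entropy(x_reconstructed, x_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```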