Corpus ID: 220525323

Visualizing Transfer Learning

  title={Visualizing Transfer Learning},
  author={R{\'o}bert Szab{\'o} and D{\'a}niel Katona and M{\'a}rton Csillag and Adri{\'a}n Csisz{\'a}rik and D{\'a}niel Varga},
We provide visualizations of individual neurons of a deep image recognition network during the temporal process of transfer learning. These visualizations qualitatively demonstrate various novel properties of the transfer learning process regarding the speed and characteristics of adaptation, neuron reuse, the spatial scale of the represented image features, and the behavior of transfer learning on small datasets. We publish the large-scale dataset that we have created for the purposes of this analysis.


Interactive Analysis of CNN Robustness — Supplemental Document —
• texture influence (Section A.1 and Section A.2), • shape sensitivity (Section A.3), • low frequency information (Section A.4), • high frequency information (Section A.5), • adversarial attacks
SIM2REALVIZ: Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation
Introduces SIM2REALVIZ, a visual analytics tool that assists experts in understanding and reducing the sim2real gap for robot ego-pose estimation tasks, i.e., the estimation of a robot’s position using trained models.
An analysis of the transfer learning of convolutional neural networks for artistic images
This paper uses techniques for visualizing the network's internal representations in order to provide clues to what the network has learned on artistic images, and provides a quantitative analysis of the changes introduced by the learning process, using metrics in both the feature and parameter spaces.


TopoAct: Exploring the Shape of Activations in Deep Learning
Presents TopoAct, a visual exploration system used to study topological summaries of activation vectors for a single layer as well as the evolution of such summaries across multiple layers, providing valuable insights into the learned representations of an image classifier.
Visualizing and Understanding Convolutional Networks
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; it is used in a diagnostic role to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Network Dissection: Quantifying Interpretability of Deep Visual Representations
This work uses the proposed Network Dissection method to test the hypothesis that interpretability is an axis-independent property of the representation space, then applies the method to compare the latent representations of various networks when trained to solve different classification problems.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
This work dramatically improves the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN), which generates qualitatively state-of-the-art synthetic images that look almost real.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention based models learn to localize discriminative regions of the input image.
Understanding deep image representations by inverting them
Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited.
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions often focuses on single images or neurons.
Intriguing properties of neural networks
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.