# Revisiting Model Stitching to Compare Neural Representations

@article{Bansal2021RevisitingMS, title={Revisiting Model Stitching to Compare Neural Representations}, author={Yamini Bansal and Preetum Nakkiran and Boaz Barak}, journal={ArXiv}, year={2021}, volume={abs/2106.07682} }

We revisit and extend model stitching (Lenc & Vedaldi 2015) as a methodology to study the internal representations of neural networks. Given two trained and frozen models A and B, we consider a “stitched model” formed by connecting the bottom-layers of A to the top-layers of B, with a simple trainable layer between them. We argue that model stitching is a powerful and perhaps under-appreciated tool, which reveals aspects of representations that measures such as centered kernel alignment (CKA…
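The stitching construction described in the abstract can be sketched in plain NumPy. The layer sizes, the split point `k`, and the identity initialization of the stitching layer are illustrative assumptions, not details from the paper; in the actual method the stitching layer is trained while both models stay frozen:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "models": each a stack of affine layers with ReLU.
def make_layers(dims, rng):
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    for W, b in layers:
        x = np.maximum(x @ W + b, 0.0)  # frozen affine layer + ReLU
    return x

dims = [8, 16, 16, 4]
model_a = make_layers(dims, rng)  # stands in for trained, frozen model A
model_b = make_layers(dims, rng)  # stands in for trained, frozen model B

# Stitched model: bottom layers of A, a trainable affine stitch, top layers of B.
k = 2  # split point (assumed for illustration)
def stitched_forward(x, W_s, b_s):
    h = forward(model_a[:k], x)    # frozen bottom of A
    h = h @ W_s + b_s              # simple trainable stitching layer
    return forward(model_b[k:], h) # frozen top of B

# Only W_s, b_s would be trained (e.g. by SGD on the task loss);
# here the stitch is just initialized to the identity map.
W_s, b_s = np.eye(16), np.zeros(16)
x = rng.standard_normal((5, 8))
out = stitched_forward(x, W_s, b_s)
print(out.shape)  # (5, 4)
```

If the stitched model's task performance matches that of A or B after training only the stitch, the representations at the split point are compatible in the functional sense the paper studies.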


## One Citation

Similarity and Matching of Neural Network Representations

- Computer Science, ArXiv
- 2021

It is demonstrated that the inner representations emerging in deep convolutional neural networks with the same architecture but different initializations can be matched with a surprisingly high degree of accuracy even with a single, affine stitching layer.

## References

Showing 1–10 of 36 references

Similarity and Matching of Neural Network Representations

- Computer Science, ArXiv
- 2021

It is demonstrated that the inner representations emerging in deep convolutional neural networks with the same architecture but different initializations can be matched with a surprisingly high degree of accuracy even with a single, affine stitching layer.

Generative Pretraining From Pixels

- Computer Science, ICML
- 2020

This work trains a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure, and finds that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification.

Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation

- Computer Science, Mathematics, NeurIPS
- 2018

The theory gives a complete characterization of the structure of neuron activation subspace matches, where the core concepts are maximum match and simple match, which describe the overall and the finest similarity between sets of neurons in two networks, respectively.

Insights on representational similarity in neural networks with canonical correlation

- Computer Science, Mathematics, NeurIPS
- 2018

Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks…

Understanding intermediate layers using linear classifier probes

- Computer Science, Mathematics, ICLR
- 2017

This work proposes to monitor the features at every layer of a model and measure how suitable they are for classification, using linear classifiers (referred to as "probes") trained entirely independently of the model itself.
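A linear probe in the sense above can be sketched as follows; the feature dimensions, the synthetic labels, and the least-squares fit standing in for a trained linear classifier are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen features from some intermediate layer (n samples x d dims)
# and class labels; in practice the features come from a forward pass of the model.
features = rng.standard_normal((200, 10))
labels = (features[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(int)

# Linear probe: a classifier trained on the frozen features alone.
# A least-squares fit to one-hot targets stands in for logistic regression here.
onehot = np.eye(2)[labels]
W, *_ = np.linalg.lstsq(features, onehot, rcond=None)
preds = (features @ W).argmax(axis=1)
probe_acc = (preds == labels).mean()
print(probe_acc)
```

High probe accuracy indicates the layer's features are linearly separable for the task, which is the suitability-for-classification signal the probing method measures.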

Convergent Learning: Do different neural networks learn the same representations?

- Computer Science, Mathematics, FE@NIPS
- 2015

This paper investigates the extent to which neural networks exhibit convergent learning, in which the representations learned by multiple networks converge to a set of features that are either individually similar between networks or where subsets of features span similar low-dimensional spaces.

Deep Residual Learning for Image Recognition

- Computer Science, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
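The residual formulation y = x + F(x) described here can be illustrated with a minimal NumPy block; the layer shapes and the zero initialization of the second weight matrix are assumptions chosen to show that the identity mapping is trivially available:

```python
import numpy as np

def residual_block(x, W1, W2):
    # y = x + F(x): the block learns a residual F rather than the full mapping,
    # so when F collapses to zero the block passes its input through unchanged.
    h = np.maximum(x @ W1, 0.0)  # first layer + ReLU
    return x + h @ W2            # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W1 = rng.standard_normal((8, 8)) * 0.1
W2 = np.zeros((8, 8))            # zero residual: the block is exactly identity
y = residual_block(x, W1, W2)
print(np.allclose(y, x))  # True
```

This identity shortcut is what eases optimization of very deep networks: each block only needs to learn a correction on top of its input.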

Bad Global Minima Exist and SGD Can Reach Them

- Computer Science, Mathematics, NeurIPS
- 2020

It is shown that regularization seems to provide SGD with an escape route: once heuristics such as data augmentation are used, starting from a complex model (adversarial initialization) has no effect on the test accuracy.

On the surprising similarities between supervised and self-supervised models

- Computer Science, Biology, ArXiv
- 2020

Surprisingly, current self-supervised CNNs share key characteristics of their supervised counterparts: relatively poor noise robustness, non-human category-level error patterns, non-human image-level error patterns, high similarity to supervised model errors, and a bias towards texture.

Understanding image representations by measuring their equivariance and equivalence

- Computer Science, Mathematics, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2015

Three key mathematical properties of representations: equivariance, invariance, and equivalence are investigated and applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved.