Corpus ID: 245650884

Representation Topology Divergence: A Method for Comparing Neural Network Representations

@inproceedings{Barannikov2022RepresentationTD,
  title={Representation Topology Divergence: A Method for Comparing Neural Network Representations},
  author={S. Barannikov and Ilya Trofimov and Nikita Balabin and Evgeny Burnaev},
  booktitle={ICML},
  year={2022}
}
Comparison of data representations is a complex multi-aspect problem that has no complete solution yet. We propose a method for comparing two data representations. We introduce the Representation Topology Divergence (RTD), which measures the dissimilarity in multi-scale topology between two point clouds of equal size with a one-to-one correspondence between points. The data point clouds are allowed to lie in different ambient spaces. The RTD is one of the few practical methods based on… 
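
As a concrete illustration, the following is a minimal Python sketch, assuming the ripser package is available: it compares the total H1 persistence of the two point clouds' scale-normalized distance matrices. This is a crude proxy for topological dissimilarity, not the paper's exact RTD construction.

    # Simplified topological-dissimilarity proxy in the spirit of RTD.
    # Assumes the `ripser` package; compares total H1 persistence of two
    # equal-size point clouds rather than implementing the paper's exact
    # cross-barcode construction.
    import numpy as np
    from ripser import ripser
    from scipy.spatial.distance import pdist, squareform

    def total_h1_persistence(points):
        dist = squareform(pdist(points))
        dist /= dist.max()  # normalize scale so the two clouds are comparable
        diagram = ripser(dist, distance_matrix=True, maxdim=1)['dgms'][1]
        finite = diagram[np.isfinite(diagram[:, 1])]
        return float(np.sum(finite[:, 1] - finite[:, 0]))

    def topo_divergence_proxy(points_a, points_b):
        # Same n points, possibly different ambient dimensions.
        return abs(total_h1_persistence(points_a) - total_h1_persistence(points_b))

    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 64))  # representation A: 200 points in R^64
    y = rng.normal(size=(200, 16))  # representation B: same points in R^16
    print(topo_divergence_proxy(x, y))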

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks

TLDR
This survey reviews the literature on techniques for interpreting the inner components of DNNs, called inner interpretability methods, with a focus on how these techniques relate to the goal of designing safer, more trustworthy AI systems.

Acceptability Judgements via Examining the Topology of Attention Maps

TLDR
This paper approaches the paradigm of acceptability judgments with topological data analysis (TDA), showing that the geometric properties of the attention graph can be exploited for two standard practices in linguistics: binary judgments and linguistic minimal pairs.
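
A minimal sketch of the underlying object, assuming the attention matrix has already been extracted as a NumPy array (the threshold and feature set below are illustrative, simpler than the paper's TDA features): threshold the map into a graph and read off basic topological statistics.

    # Toy features of a thresholded attention graph. The paper's feature
    # set is richer; this only shows the graph-construction step.
    import numpy as np
    import networkx as nx

    def attention_graph_features(attn, threshold=0.1):
        adj = attn >= threshold
        adj = adj | adj.T               # symmetrize the attention relation
        np.fill_diagonal(adj, False)    # drop self-attention loops
        g = nx.from_numpy_array(adj.astype(int))
        components = nx.number_connected_components(g)
        # First Betti number of a graph: b1 = E - V + C (independent cycles).
        cycles = g.number_of_edges() - g.number_of_nodes() + components
        return {"components": components, "cycles": cycles,
                "edges": g.number_of_edges()}

    attn = np.random.default_rng(1).dirichlet(np.ones(12), size=12)  # toy map
    print(attention_graph_features(attn))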

References

Showing 1-10 of 49 references

The Shape of Data: Intrinsic Distance for Data Distributions

TLDR
This work develops a first-of-its-kind intrinsic and multi-scale method for characterizing and comparing data manifolds, using a lower-bound of the spectral variant of the Gromov-Wasserstein inter-manifold distance, which compares all data moments.
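
A rough sketch of one way such an intrinsic, multi-scale comparison can work, using heat-kernel traces of kNN-graph Laplacians. The paper estimates the trace stochastically; this exact eigendecomposition version only suits small point clouds, and the neighborhood size and time grid are illustrative.

    # Compare heat-kernel traces h(t) = tr(exp(-t * L)) of the two
    # representations' kNN-graph Laplacians across scales t.
    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from sklearn.neighbors import kneighbors_graph

    def heat_trace(points, ts, k=5):
        adj = kneighbors_graph(points, n_neighbors=k, mode='connectivity')
        adj = ((adj + adj.T) > 0).astype(float)   # symmetrize the kNN graph
        lap = laplacian(adj, normed=True)
        eigvals = np.linalg.eigvalsh(lap.toarray())
        return np.array([np.exp(-t * eigvals).sum() for t in ts])

    def imd_like_distance(a, b, ts=np.logspace(-1, 1, 20)):
        return float(np.abs(heat_trace(a, ts) - heat_trace(b, ts)).max())

    rng = np.random.default_rng(0)
    print(imd_like_distance(rng.normal(size=(100, 8)),
                            rng.normal(size=(100, 3))))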

SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability

We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods).
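
A minimal NumPy sketch of the two-stage idea, with an illustrative variance threshold: reduce each (n_samples x n_neurons) representation to its top singular directions, then compute canonical correlations between the reduced views.

    # Minimal SVCCA sketch: SVD-reduce each representation, then run CCA
    # on the reduced views. The variance threshold is illustrative.
    import numpy as np

    def svcca(x, y, keep_variance=0.99):
        def reduce(z):
            z = z - z.mean(axis=0)
            u, s, _ = np.linalg.svd(z, full_matrices=False)
            k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2),
                                    keep_variance)) + 1
            return u[:, :k] * s[:k]
        x, y = reduce(x), reduce(y)
        # Canonical correlations are the singular values of Qx^T Qy,
        # where Qx, Qy are orthonormal bases of the two column spaces.
        qx, _ = np.linalg.qr(x)
        qy, _ = np.linalg.qr(y)
        corrs = np.linalg.svd(qx.T @ qy, compute_uv=False)
        return float(corrs.mean())

    rng = np.random.default_rng(0)
    a = rng.normal(size=(500, 64))
    b = a @ rng.normal(size=(64, 32)) + 0.1 * rng.normal(size=(500, 32))
    print(svcca(a, b))  # close to 1 for nearly affine-related views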

BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning

TLDR
BatchEnsemble is proposed, an ensemble method whose computational and memory costs are significantly lower than typical ensembles and can easily scale up to lifelong learning on Split-ImageNet which involves 100 sequential learning tasks.
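
A sketch of the core parameter-sharing trick in NumPy (layer sizes and initialization are illustrative): every member shares one weight matrix, modulated by per-member rank-1 factors, so adding a member costs two vectors rather than a full weight copy.

    # BatchEnsemble-style linear layer: all members share one weight
    # matrix W, modulated by per-member rank-1 factors r_i (input) and
    # s_i (output).
    import numpy as np

    class BatchEnsembleLinear:
        def __init__(self, d_in, d_out, n_members, rng):
            self.W = rng.normal(scale=d_in ** -0.5, size=(d_in, d_out))
            self.r = np.ones((n_members, d_in))    # per-member input scalers
            self.s = np.ones((n_members, d_out))   # per-member output scalers
            self.b = np.zeros((n_members, d_out))  # per-member biases

        def forward(self, x, m):
            # Equivalent to x @ (W * outer(r_m, s_m)) without ever
            # materializing a per-member weight matrix.
            return ((x * self.r[m]) @ self.W) * self.s[m] + self.b[m]

    rng = np.random.default_rng(0)
    layer = BatchEnsembleLinear(16, 8, n_members=4, rng=rng)
    x = rng.normal(size=(32, 16))
    preds = np.stack([layer.forward(x, m) for m in range(4)])  # (4, 32, 8)
    print(preds.mean(axis=0).shape)  # ensemble-averaged prediction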

Similarity of Neural Network Representations Revisited

TLDR
A similarity index, centered kernel alignment (CKA), is introduced that measures the relationship between representational similarity matrices and does not suffer from the limitations of CCA-based comparisons.
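
In the linear case the index reduces to a short formula; here is a NumPy sketch of linear CKA on column-centered representations (the kernel variants are omitted).

    # Linear CKA on centered representations X (n x p1), Y (n x p2):
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    import numpy as np

    def linear_cka(x, y):
        x = x - x.mean(axis=0)
        y = y - y.mean(axis=0)
        cross = np.linalg.norm(y.T @ x, 'fro') ** 2
        return cross / (np.linalg.norm(x.T @ x, 'fro') *
                        np.linalg.norm(y.T @ y, 'fro'))

    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 50))
    q, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # random orthogonal map
    print(linear_cka(x, x @ q))  # ≈ 1: invariant to orthogonal transforms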

Deep Residual Learning for Image Recognition

TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
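
A minimal NumPy sketch of the central building block (shapes are illustrative): the block computes a residual F(x) and adds the identity shortcut, so deeper stacks only need to learn perturbations around the identity.

    # Minimal residual block: y = relu(F(x) + x), with a two-layer
    # residual function F.
    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def residual_block(x, w1, w2):
        # The shortcut adds x back, so the block learns a residual
        # around the identity instead of a full transformation.
        return relu(relu(x @ w1) @ w2 + x)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    w1 = rng.normal(scale=0.1, size=(8, 8))
    w2 = rng.normal(scale=0.1, size=(8, 8))
    print(residual_block(x, w1, w2).shape)  # (4, 8)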

Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy

TLDR
Although there are proven connections between diversity and accuracy in some special cases, the results raise some doubts about the usefulness of diversity measures in building classifier ensembles in real-life pattern recognition problems.
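
Two of the classic pairwise measures studied in this line of work, the disagreement measure and Yule's Q-statistic, can be computed directly from correctness indicators; a small NumPy sketch (variable names are mine):

    # Pairwise diversity from correctness indicators of two classifiers
    # (True where the prediction matched the true label).
    import numpy as np

    def pairwise_diversity(correct_a, correct_b):
        a, b = np.asarray(correct_a), np.asarray(correct_b)
        n11 = np.sum(a & b)     # both correct
        n00 = np.sum(~a & ~b)   # both wrong
        n10 = np.sum(a & ~b)
        n01 = np.sum(~a & b)
        disagreement = (n10 + n01) / a.size
        # Yule's Q-statistic; assumes the denominator is nonzero.
        q = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)
        return disagreement, q

    a = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=bool)
    b = np.array([1, 0, 0, 1, 1, 1, 0, 0], dtype=bool)
    print(pairwise_diversity(a, b))  # (0.375, 0.5)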

Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective

TLDR
This work proposes a novel framework called training-free neural architecture search (TE-NAS), which ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space and shows that these two measurements imply the trainability and expressivity of a neural network.
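
A toy NumPy sketch of the expressivity half of the score, counting distinct ReLU activation patterns (a proxy for linear regions) of a randomly initialized network over sampled inputs; the NTK-spectrum trainability measure is omitted, and the widths and sample count are illustrative.

    # Proxy for expressivity: each unique activation pattern corresponds
    # to a linear region hit by the sampled inputs.
    import numpy as np

    def count_activation_patterns(widths, n_samples=1000, seed=0):
        rng = np.random.default_rng(seed)
        weights = [rng.normal(scale=d_in ** -0.5, size=(d_in, d_out))
                   for d_in, d_out in zip(widths[:-1], widths[1:])]
        h = rng.normal(size=(n_samples, widths[0]))
        masks = []
        for w in weights:
            h = h @ w
            mask = h > 0
            masks.append(mask)
            h = h * mask          # ReLU
        patterns = np.concatenate(masks, axis=1)
        return len({row.tobytes() for row in patterns})

    print(count_activation_patterns([8, 32, 32, 32]))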

Searching for a Robust Neural Architecture in Four GPU Hours

  • Xuanyi Dong, Yezhou Yang
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
The approach, named Gradient-based search using Differentiable Architecture Sampler (GDAS), can be trained in an end-to-end fashion by gradient descent, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state-of-the-art.
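
The differentiable sampling step is typically realized with the Gumbel-softmax trick; a minimal NumPy sketch (logits and temperature are illustrative), showing how one candidate operation per edge can be sampled while keeping a soft, trainable distribution:

    # Gumbel-softmax sampling of one candidate operation per edge: the
    # forward pass takes the argmax (a hard choice) while the soft
    # probabilities keep the sampler trainable.
    import numpy as np

    def gumbel_softmax_sample(logits, tau=1.0, rng=None):
        rng = rng or np.random.default_rng(0)
        u = rng.uniform(1e-9, 1.0, size=logits.shape)
        g = -np.log(-np.log(u))             # Gumbel(0, 1) noise
        y = np.exp((logits + g) / tau)
        return y / y.sum()                  # soft one-hot over operations

    logits = np.array([0.2, 1.5, -0.3])     # scores for 3 candidate ops
    probs = gumbel_softmax_sample(logits)
    print(probs, int(np.argmax(probs)))     # soft weights and hard choice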

DARTS: Differentiable Architecture Search

TLDR
The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
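
The core relaxation admits a very small sketch: each edge outputs a softmax-weighted mixture of all candidate operations, making the architecture parameters alpha continuous and hence optimizable by gradient descent. The toy operations below stand in for the conv/pool/skip primitives.

    # DARTS-style continuous relaxation: an edge outputs the
    # softmax(alpha)-weighted mixture of all candidate operations.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    candidate_ops = [
        lambda x: x,                  # skip-connect stand-in
        lambda x: np.maximum(x, 0),   # nonlinearity stand-in
        lambda x: 0.5 * x,            # scaled-op stand-in
    ]

    def mixed_op(x, alpha):
        w = softmax(alpha)            # architecture weights for this edge
        return sum(wi * op(x) for wi, op in zip(w, candidate_ops))

    x = np.array([-1.0, 2.0, 0.5])
    alpha = np.array([0.1, 0.7, -0.2])
    print(mixed_op(x, alpha))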

Very Deep Convolutional Networks for Large-Scale Image Recognition

TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
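
The small-filter argument can be checked with one line of arithmetic: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution but with fewer parameters (assuming C input and output channels, biases ignored).

    # Parameter count for C input/output channels, biases ignored.
    C = 64
    params_two_3x3 = 2 * (3 * 3 * C * C)   # two 3x3 layers: 18*C^2
    params_one_5x5 = 5 * 5 * C * C         # one 5x5 layer: 25*C^2
    print(params_two_3x3, params_one_5x5)  # 73728 vs 102400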