Corpus ID: 232135069

Clusterability in Neural Networks

@article{Filan2021ClusterabilityIN,
  title={Clusterability in Neural Networks},
  author={Daniel Filan and Stephen Casper and Shlomi Hod and Cody Wild and Andrew Critch and Stuart J. Russell},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.03386}
}
The learned weights of a neural network have often been considered devoid of scrutable internal structure. In this paper, however, we look for structure in the form of clusterability: how well a network can be divided into groups of neurons with strong internal connectivity but weak external connectivity. We find that a trained neural network is typically more clusterable than randomly initialized networks, and often clusterable relative to random networks with the same distribution of weights… 
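As a rough illustration of the clusterability measure the abstract describes, the sketch below treats the neurons of a small multilayer perceptron as nodes of a weighted graph, partitions that graph with spectral clustering, and compares the normalized cut of the original graph against a same-distribution shuffled baseline. The layer sizes, the use of scikit-learn's SpectralClustering, and the shuffling baseline are illustrative assumptions, not the paper's exact experimental setup.

```python
# Sketch: neurons as nodes of a weighted graph, edge weight = |weight|,
# partitioned with spectral clustering; a lower normalized cut = more clusterable.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

def adjacency_from_weights(weight_mats):
    """Symmetric adjacency over all neurons, built from per-layer weight matrices."""
    sizes = [weight_mats[0].shape[1]] + [W.shape[0] for W in weight_mats]
    offsets = np.cumsum([0] + sizes)
    A = np.zeros((offsets[-1], offsets[-1]))
    for i, W in enumerate(weight_mats):
        rows = slice(offsets[i + 1], offsets[i + 2])  # neurons of layer i+1
        cols = slice(offsets[i], offsets[i + 1])      # neurons of layer i
        A[rows, cols] = np.abs(W)
        A[cols, rows] = np.abs(W).T
    return A

def normalized_cut(A, labels):
    """Sum over clusters of (weight leaving the cluster) / (weight touching it)."""
    total = 0.0
    for c in np.unique(labels):
        in_c = labels == c
        total += A[in_c][:, ~in_c].sum() / max(A[in_c].sum(), 1e-12)
    return total

# Stand-in weights for a 32-64-64-10 MLP; in practice these come from a trained network.
weights = [rng.normal(size=(64, 32)), rng.normal(size=(64, 64)), rng.normal(size=(10, 64))]
cluster = SpectralClustering(n_clusters=4, affinity="precomputed", random_state=0)

A = adjacency_from_weights(weights)
print("n-cut, original graph:", normalized_cut(A, cluster.fit_predict(A)))

# Baseline: shuffle each layer's weights (same distribution, no learned structure).
shuffled = [rng.permutation(W.ravel()).reshape(W.shape) for W in weights]
A_shuf = adjacency_from_weights(shuffled)
print("n-cut, shuffled graph:", normalized_cut(A_shuf, cluster.fit_predict(A_shuf)))
```

A trained network yielding a noticeably lower normalized cut than its shuffled counterparts would exhibit the kind of clusterability the paper measures.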

Detecting Modularity in Deep Neural Networks

It is suggested that graph-based partitioning can reveal modularity and help us understand how deep neural networks function.

Quantifying Local Specialization in Deep Neural Networks

It is suggested that graph-based partitioning can reveal local specialization and that statistical methods can be used to automatically screen for sets of neurons that can be understood abstractly.

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks

A taxonomy that classifies “inner” interpretability techniques by what part of the network they help to explain and whether they are implemented during (intrinsic) or after (post hoc) training is introduced.

Convolutional Neural Network Dynamics: A Graph Perspective

This paper proposes representing the neural network learning process as a time-evolving graph, capturing the structural changes of the NN during the training phase in a simple temporal summary, and leveraging that summary to predict the accuracy of the underlying NN on a classification or regression task.

Graphical Clusterability and Local Specialization in Deep Neural Networks

The learned weights of deep neural networks have often been considered devoid of scrutable internal structure, and tools for studying them have not traditionally relied on techniques from network science… 

Graph Modularity: Towards Understanding the Cross-Layer Transition of Feature Representations in Deep Neural Networks

It is demonstrated that modularity can be used to identify and locate redundant layers in DNNs, which provides theoretical guidance for layer pruning, and a layer-wise pruning method based on modularity is proposed.
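A hedged reading of the approach described above, for illustration only: at each layer, build a k-nearest-neighbour graph over a batch of feature representations and compute the modularity of the partition induced by the class labels; layers across which modularity stops improving would then be candidates for pruning. The k-NN construction, the choice of networkx and scikit-learn, and the random stand-in features are assumptions, not the paper's implementation.

```python
# Per-layer modularity of the class-label partition on a k-NN graph of the features.
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity
from sklearn.neighbors import kneighbors_graph

def class_modularity(features, labels, k=10):
    """Modularity of the class-label partition on a k-NN graph of the features."""
    adj = kneighbors_graph(features, n_neighbors=k, mode="connectivity")
    G = nx.from_numpy_array(adj.toarray())
    communities = [set(map(int, np.flatnonzero(labels == c))) for c in np.unique(labels)]
    return modularity(G, communities)

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)
# layer_features[name] stands in for the activations of one layer on a fixed batch.
layer_features = {f"layer{i}": rng.normal(size=(500, 128)) for i in range(4)}
for name, feats in layer_features.items():
    print(name, round(class_modularity(feats, labels), 3))
```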

Emergent Structures and Training Dynamics in Large Language Models

It is noted in particular that there is a lack of sufficient research on the emergence of functional units (subsections of the network where related functions are grouped or organised) within large language models, and work is motivated that grounds the study of language models in an analysis of their changing internal structure during training.

SpARC: Sparsity Activation Regularization for Consistency

This work designs a method of jointly penalising model activations through the L1 norm and employing a contrastive similarity loss between pairs of "similar" and "dissimilar" facts to fine-tune large language models to make them logically consistent.
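A minimal sketch of the kind of objective this summary describes, assuming a PyTorch setting: a task loss is combined with an L1 penalty on hidden activations and a contrastive term over pairs of fact representations. The encoder is omitted, and the margin, coefficients, and cosine-distance formulation are illustrative choices rather than the SpARC implementation.

```python
import torch
import torch.nn.functional as F

def sparse_consistency_loss(task_loss, activations, h_a, h_b, similar,
                            l1_coef=1e-4, margin=1.0, con_coef=0.1):
    """Task loss + L1 activation penalty + contrastive loss over fact pairs.

    activations: hidden activations to sparsify, shape (batch, dim)
    h_a, h_b:    representations of the two facts in each pair, shape (batch, dim)
    similar:     1.0 where the pair is "similar", 0.0 where "dissimilar"
    """
    l1_penalty = activations.abs().mean()
    dist = 1.0 - F.cosine_similarity(h_a, h_b, dim=-1)          # pull similar pairs together,
    contrastive = similar * dist + (1.0 - similar) * F.relu(margin - dist)  # push dissimilar apart
    return task_loss + l1_coef * l1_penalty + con_coef * contrastive.mean()

# Toy usage with random tensors standing in for model outputs.
task_loss = torch.tensor(0.7)
acts = torch.randn(8, 256)
h_a, h_b = torch.randn(8, 256), torch.randn(8, 256)
similar = torch.tensor([1., 1., 0., 1., 0., 0., 1., 0.])
print(sparse_consistency_loss(task_loss, acts, h_a, h_b, similar))
```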

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

This paper tests whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets, and observes that all of them struggle to learn the underlying mechanism regardless of supervision signal and architectural bias.

Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment

This work introduces what it calls the modularity criterion for testing whether a learning algorithm satisfies the constraint of algorithmic independence in credit assignment by performing causal analysis on the algorithm itself, and proves that for decision sequences that do not contain cycles, certain single-step temporal difference action-value methods meet this criterion while all policy-gradient methods do not.

References

Showing 1-10 of 45 references

Understanding Community Structure in Layered Neural Networks

Modular representation of layered neural networks

Graph Structure of Neural Networks

A novel graph-based representation of neural networks called the relational graph is developed, in which layers of neural network computation correspond to rounds of message exchange along the graph structure; the analysis shows that a "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.

Interpreting Layered Neural Networks via Hierarchical Modular Representation

The application of a hierarchical clustering method to a trained network reveals a tree-structured relationship among hidden layer units, based on their feature vectors defined by their correlation with the input and output dimension values.
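To make the idea concrete, here is a hedged sketch under the assumption that each hidden unit is described by its correlations with the input and output dimensions, with agglomerative hierarchical clustering applied to those feature vectors; the Ward linkage and the random stand-in data are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
# Stand-ins for quantities recorded on a fixed batch: inputs X, hidden activations H, outputs Y.
X = rng.normal(size=(1000, 20))   # 20 input dimensions
H = rng.normal(size=(1000, 50))   # 50 hidden units
Y = rng.normal(size=(1000, 5))    # 5 output dimensions

def unit_feature_vectors(X, H, Y):
    """Correlation of each hidden unit with every input and output dimension."""
    def corr(A, B):  # column-wise correlation matrix, shape (A_cols, B_cols)
        A = (A - A.mean(0)) / A.std(0)
        B = (B - B.mean(0)) / B.std(0)
        return A.T @ B / len(A)
    return np.hstack([corr(H, X), corr(H, Y)])  # one row of correlations per hidden unit

Z = linkage(unit_feature_vectors(X, H, Y), method="ward")
# Z encodes a tree over the 50 hidden units; dendrogram(Z) would plot the hierarchy.
tree = dendrogram(Z, no_plot=True)
print("leaf order of hidden units:", tree["leaves"][:10], "...")
```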

Linear Mode Connectivity and the Lottery Ticket Hypothesis

This work finds that standard vision models become stable to SGD noise (in the sense of linear mode connectivity) early in training, and uses this stability analysis to study iterative magnitude pruning (IMP), the procedure used by work on the lottery ticket hypothesis to identify subnetworks that could have trained in isolation to full accuracy.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

This work finds that dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations, and articulate the "lottery ticket hypothesis".
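For reference, a hedged sketch of the iterative magnitude pruning loop this line of work uses to find such subnetworks: train, prune a fraction of the smallest-magnitude surviving weights, rewind the remainder to their initial values, and repeat. The `train` callable, the model, the pruning rate, and the number of rounds are placeholders supplied by the user, not a specific published configuration.

```python
import copy
import torch

def iterative_magnitude_pruning(model, train, rounds=5, prune_rate=0.2):
    """Return per-parameter binary masks; the "winning ticket" is (initial weights, masks)."""
    init_state = copy.deepcopy(model.state_dict())           # theta_0, kept for rewinding
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    for _ in range(rounds):
        train(model, masks)                                   # user-supplied: train with masked weights
        for name, param in model.named_parameters():
            if name not in masks:
                continue
            alive = param.detach().abs() * masks[name]
            k = int(prune_rate * int(masks[name].sum()))      # prune a fraction of surviving weights
            if k > 0:
                threshold = alive[masks[name].bool()].kthvalue(k).values
                masks[name] = (alive > threshold).float() * masks[name]
        model.load_state_dict(init_state)                     # rewind surviving weights to initialization
    return masks

# Hypothetical usage: masks = iterative_magnitude_pruning(my_model, my_train_fn)
```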

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

This paper studies the three critical components of the Lottery Ticket algorithm, showing that each may be varied significantly without impacting the overall results, why setting weights to zero is important, how signs are all you need to make the reinitialized network train, and why masking behaves like training.

Checking Functional Modularity in DNN By Biclustering Task-specific Hidden Neurons

A hidden layer is dissected into disjoint groups of task-specific hidden neurons with the help of relatively well-studied neuron attribution methods, in order to investigate functional modularity in DNNs trained through back-propagation.

To prune, or not to prune: exploring the efficacy of pruning for model compression

Across a broad range of neural network architectures, large-sparse models are found to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.