Corpus ID: 219981786

Pruned Neural Networks are Surprisingly Modular

@inproceedings{Filan2020PrunedNN,
  title={Pruned Neural Networks are Surprisingly Modular},
  author={Daniel Filan and Shlomi Hod and Cody Wild and Andrew Critch and Stuart Russell},
  year={2020}
}
The learned weights of a neural network are often considered devoid of scrutable internal structure. To discern structure in these weights, we introduce a measurable notion of modularity for multi-layer perceptrons (MLPs), and investigate the modular structure of MLPs trained on datasets of small images. Our notion of modularity comes from the graph clustering literature: a "module" is a set of neurons with strong internal connectivity but weak external connectivity. We find that training and…
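The graph-clustering notion of modularity described in the abstract can be made concrete. Below is a minimal sketch, assuming the MLP's neurons are treated as graph nodes with edge weights given by the absolute values of the connection weights, and the graph is partitioned with off-the-shelf spectral clustering (scikit-learn). The paper's exact clustering objective and algorithm are not given in this excerpt, so the helper `mlp_adjacency`, the layer sizes, and the two-cluster choice are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def mlp_adjacency(weight_mats):
    """Build a symmetric adjacency matrix over all neurons of an MLP.

    weight_mats: list of 2-D arrays; weight_mats[l] has shape
    (n_out, n_in), mapping layer l to layer l+1. Edge weight between
    two neurons in adjacent layers is |w|; there are no other edges.
    """
    sizes = [weight_mats[0].shape[1]] + [W.shape[0] for W in weight_mats]
    offsets = np.cumsum([0] + sizes)
    n = offsets[-1]
    A = np.zeros((n, n))
    for l, W in enumerate(weight_mats):
        i0, i1 = offsets[l], offsets[l + 1]      # neurons feeding layer l+1
        j0, j1 = offsets[l + 1], offsets[l + 2]  # neurons of layer l+1
        A[j0:j1, i0:i1] = np.abs(W)
        A[i0:i1, j0:j1] = np.abs(W).T            # symmetrize
    return A

# Toy example: random 3-layer MLP (4 -> 8 -> 8 -> 2).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)),
           rng.normal(size=(8, 8)),
           rng.normal(size=(2, 8))]
A = mlp_adjacency(weights)

# Partition neurons into two candidate "modules" by spectral clustering
# on the precomputed affinity matrix.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            assign_labels="kmeans",
                            random_state=0).fit_predict(A)
print(labels)  # one cluster label per neuron
```

Under this reading, a partition exhibits modularity in the abstract's sense when edge weight is concentrated within clusters (strong internal connectivity) rather than between them (weak external connectivity).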
