Publications
Performative Prediction in a Stateful World
TLDR
This work generalizes the results of Perdomo et al. (2020), who investigated "performative prediction" in a stateless setting, to the case where the population's response to the deployed classifier depends both on the classifier and on the previous distribution of the population.
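A minimal sketch of the setting, under assumed dynamics rather than the paper's experiments: the learner repeatedly refits on the current distribution (repeated risk minimization), while the population's mean drifts geometrically toward a model-dependent target instead of jumping there at once (the stateless case corresponds to lam = 1). The squared loss, the linear response a + b * theta, and the rate lam are illustrative choices, not taken from the paper.

```python
import numpy as np

# Stateful performative prediction, toy version: the learner estimates
# the mean of a population whose distribution reacts to the deployed
# model theta, but only partially each round.

a, b = 1.0, 0.5   # assumed response: target mean under theta is a + b * theta
lam = 0.3         # adaptation rate: how far the population moves per round
rng = np.random.default_rng(0)

mu = 5.0          # current population mean (the state)
theta = 0.0       # deployed model parameter

for t in range(30):
    # Repeated risk minimization: refit on samples from the *current* state.
    z = rng.normal(mu, 1.0, size=10_000)
    theta = z.mean()                      # argmin_theta E[(z - theta)^2]
    # Stateful response: the distribution drifts toward its theta-dependent
    # target rather than jumping there (stateless special case: lam = 1).
    mu = (1 - lam) * mu + lam * (a + b * theta)

# With |b| < 1 the iterates approach the performatively stable point
# theta* = a / (1 - b) = 2.0, the same long-run fixed point as the
# stateless dynamics, just reached more slowly.
print(f"theta after 30 rounds: {theta:.3f} (stable point: {a / (1 - b):.3f})")
```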
Neural Networks are Surprisingly Modular
TLDR
A measurable notion of modularity is introduced for multi-layer perceptrons (MLPs), and it is found that MLPs that undergo training and weight pruning are often significantly more modular than random networks with the same distribution of weights.
Pruned Neural Networks are Surprisingly Modular
TLDR
A measurable notion of modularity for multi-layer perceptrons (MLPs) is introduced, and it is found that training and weight pruning produce MLPs that are more modular than randomly initialized ones, and often significantly more modular than random MLPs with the same (sparse) distribution of weights.
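One way to make the graph-based notion concrete, sketched here as a reconstruction rather than the authors' code: treat every neuron of an MLP as a graph node, weight each edge by the absolute value of the connection between adjacent-layer neurons, and spectrally cluster the resulting affinity matrix into candidate modules. Random weights stand in for a trained network here; the finding in these papers is that trained and pruned networks admit better partitions than such weight-matched baselines.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Build an undirected neuron graph from an MLP's weight matrices and
# partition it into modules. Shapes and weights are illustrative.
layer_sizes = [8, 16, 16, 4]
rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

n = sum(layer_sizes)
affinity = np.zeros((n, n))
offsets = np.cumsum([0] + layer_sizes)       # node index ranges per layer
for k, w in enumerate(weights):
    i, j = offsets[k], offsets[k + 1]
    block = np.abs(w)                        # edge weight = |connection weight|
    affinity[i:j, j:j + w.shape[1]] = block
    affinity[j:j + w.shape[1], i:j] = block.T

labels = SpectralClustering(
    n_clusters=4, affinity="precomputed", random_state=0
).fit_predict(affinity)
print("module sizes:", np.bincount(labels))
```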
Clusterability in Neural Networks
The learned weights of a neural network have often been considered devoid of scrutable internal structure. In this paper, however, we look for structure in the form of clusterability: how well a network can be divided into groups of neurons with strong internal connectivity but weak external connectivity.
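The natural statistic for such a partition is the normalized cut (n-cut), and clusterability can be screened statistically by asking whether the real network's n-cut beats that of networks with the same weights randomly reassigned across existing edges. A hedged sketch: n_cut follows the standard normalized-cut definition; shuffle_p_value and the global edge-weight shuffle are my simplifications (the papers shuffle weights layer-wise), and cluster stands for any affinity-to-labels routine such as the spectral sketch above.

```python
import numpy as np

def n_cut(affinity, labels):
    """Normalized cut: for each cluster, the fraction of its incident
    edge weight that leaves the cluster, summed over clusters."""
    degree = affinity.sum(axis=1)
    total = 0.0
    for c in np.unique(labels):
        mask = labels == c
        vol = degree[mask].sum()
        internal = affinity[np.ix_(mask, mask)].sum()
        total += (vol - internal) / max(vol, 1e-12)
    return total

def shuffle_p_value(affinity, cluster, n_shuffles=100, seed=0):
    """One-sided p-value: how often a weight-shuffled network clusters
    at least as well (n-cut at least as low) as the real one.
    `cluster` is any affinity -> labels routine (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    observed = n_cut(affinity, cluster(affinity))
    upper = np.triu_indices_from(affinity, k=1)
    hits = 0
    for _ in range(n_shuffles):
        shuffled = affinity.copy()
        vals = shuffled[upper]
        nz = vals != 0
        vals[nz] = rng.permutation(vals[nz])  # same weights, same edges,
        shuffled[upper] = vals                # randomly reassigned
        shuffled.T[upper] = vals              # keep the graph undirected
        if n_cut(shuffled, cluster(shuffled)) <= observed:
            hits += 1
    return (1 + hits) / (1 + n_shuffles)
```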
Data science meets law
Learning Responsible AI together.
Detecting Modularity in Deep Neural Networks
TLDR
It is suggested that graph-based partitioning can reveal modularity and help us understand how deep neural networks function.
Quantifying Local Specialization in Deep Neural Networks
TLDR
It is suggested that graph-based partitioning can reveal local specialization and that statistical methods can be used to automatically screen for sets of neurons that can be understood abstractly.