Publications
Network Dissection: Quantifying Interpretability of Deep Visual Representations
This work uses the proposed Network Dissection method to test the hypothesis that interpretability is an axis-independent property of the representation space, then applies the method to compare the latent representations of various networks when trained to solve different classification problems.
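The core scoring step of Network Dissection can be sketched as follows: a unit's thresholded activation map is compared against a concept's ground-truth segmentation mask via intersection-over-union. This is a minimal illustration with NumPy arrays; the function name and toy inputs are illustrative, not the paper's code.

```python
import numpy as np

def unit_concept_iou(activation_map, concept_mask, threshold):
    """Score how well one unit matches one concept: binarize the unit's
    activation map at `threshold`, then take the intersection-over-union
    with the concept's ground-truth segmentation mask."""
    unit_mask = activation_map > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a unit that fires exactly on the concept region scores 1.0.
act = np.array([[0.9, 0.1], [0.8, 0.2]])
mask = np.array([[True, False], [True, False]])
print(unit_concept_iou(act, mask, threshold=0.5))  # → 1.0
```

Ranking these scores over a broad corpus of labeled concepts is what lets the method assign an interpretable label to each unit.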
Explaining Explanations: An Overview of Interpretability of Machine Learning
There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide
insights into their behavior and thought processes.
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
This work presents an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level, and provides open source interpretation tools to help researchers and practitioners better understand their GAN models.
Interpreting Deep Visual Representations via Network Dissection
Network Dissection is described: a method that interprets networks by assigning meaningful labels to their individual units, revealing that deep representations are more transparent and interpretable than they would be under a random but equivalently powerful basis.
Seeing What a GAN Cannot Generate
This work visualizes mode collapse at both the distribution level and the instance level, deploying a semantic segmentation network to compare the distribution of segmented objects in generated images with the target distribution in the training set.
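The distribution-level comparison can be sketched in a few lines: tally per-class frequencies from segmentation maps of real and generated images, then flag classes the generator under-produces. This is a hedged sketch, assuming segmentations are given as lists of per-pixel class labels; the function names and the 0.5 under-production ratio are illustrative choices, not the paper's.

```python
from collections import Counter

def class_frequency(segmentations):
    """Fraction of pixels assigned to each class across a set of
    segmentation maps (each map is a flat list of per-pixel labels)."""
    counts, total = Counter(), 0
    for seg in segmentations:
        counts.update(seg)
        total += len(seg)
    return {c: n / total for c, n in counts.items()}

def dropped_classes(real_freq, gen_freq, ratio=0.5):
    """Classes the generator produces far less often than the data does."""
    return [c for c, f in real_freq.items()
            if gen_freq.get(c, 0.0) < ratio * f]

# Toy example: the generator never produces "person" pixels.
real = [["sky", "sky", "person", "tree"]]
gen = [["sky", "sky", "tree", "tree"]]
print(dropped_classes(class_frequency(real), class_frequency(gen)))  # → ['person']
```

Instance-level visualization then inverts individual images to show where the missing objects should have appeared.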
Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning
A definition of explainability is provided, and it is shown how this definition can be used to classify the existing literature, with discussion aimed at establishing best practices and identifying open challenges in explanatory artificial intelligence.
Interpretable Basis Decomposition for Visual Explanation
A new framework called Interpretable Basis Decomposition for providing visual explanations for classification networks is proposed, decomposing the neural activations of the input image into semantically interpretable components pre-trained from a large concept corpus.
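The decomposition step can be illustrated as a least-squares projection of a feature vector onto a basis of concept vectors, with the weights indicating each concept's contribution plus an unexplained residual. This is a minimal sketch under that assumption; the function name and toy basis are hypothetical, not the framework's actual API.

```python
import numpy as np

def decompose(activation, concept_basis):
    """Approximate a feature vector as a weighted sum of concept vectors
    (rows of `concept_basis`) via least squares. Returns the per-concept
    weights and the residual the concepts cannot explain."""
    coeffs, *_ = np.linalg.lstsq(concept_basis.T, activation, rcond=None)
    residual = activation - concept_basis.T @ coeffs
    return coeffs, residual

# Toy example: two orthogonal "concepts" in a 3-d feature space.
basis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
activation = np.array([2.0, 1.0, 0.5])
coeffs, residual = decompose(activation, basis)
```

Each weighted concept component can then be rendered as a heatmap over the input image to explain the classifier's decision.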
Revisiting the Importance of Individual Units in CNNs via Ablation
The results show that units with high selectivity play an important role in network classification power at the individual class level, and that class selectivity, along with other attributes, is a good predictor of a unit's importance to individual classes.
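The ablation idea can be sketched for the simplest case, a linear readout over pooled unit activations: a unit's importance to a class is the drop in that class's score when the unit is zeroed out. This is a hedged toy illustration, not the paper's experimental setup.

```python
import numpy as np

def unit_importance(features, weights, unit_index):
    """Importance of one unit for a linear class score: the drop in the
    score when that unit's pooled activation is zeroed out (ablated)."""
    score = weights @ features
    ablated = features.copy()
    ablated[unit_index] = 0.0
    return score - weights @ ablated

# Toy example: three units, one linear class head.
features = np.array([1.0, 2.0, 3.0])   # pooled per-unit activations
weights = np.array([0.5, 1.0, 0.0])    # class weights
print(unit_importance(features, weights, 1))  # → 2.0
```

In a real network the same measurement is made by zeroing a convolutional channel and re-evaluating accuracy on each class.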
Learnable Programming
New blocks-based programming frameworks open doors to greater experimentation for novices and professionals alike.