Network Dissection: Quantifying Interpretability of Deep Visual Representations

@inproceedings{Bau2017NetworkDQ,
  title={Network Dissection: Quantifying Interpretability of Deep Visual Representations},
  author={David Bau and Bolei Zhou and Aditya Khosla and Aude Oliva and Antonio Torralba},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={3319--3327}
}
We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures…
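The scoring the abstract describes can be illustrated with a small sketch: binarize a unit's activation map at a high quantile threshold and measure its intersection-over-union (IoU) with a concept's segmentation mask. This is a simplified, illustrative version only; the paper computes the activation threshold over the whole dataset and upsamples activation maps to the segmentation resolution.

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, quantile=0.995):
    """Score one hidden unit against one concept, in the spirit of
    Network Dissection: threshold the unit's activation map at a high
    quantile, then compute IoU with the concept's binary mask.
    (Illustrative sketch, not the paper's exact procedure.)"""
    threshold = np.quantile(activation, quantile)
    unit_mask = activation > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a unit that fires strongly on the top-left quadrant,
# scored against a concept mask covering that same quadrant.
rng = np.random.default_rng(0)
act = rng.random((32, 32)) * 0.1
act[:16, :16] += 1.0                      # strong response in the quadrant
concept = np.zeros((32, 32), dtype=bool)
concept[:16, :16] = True
score = unit_concept_iou(act, concept)    # small but positive IoU
```

In the paper, a unit is labeled with the concept that maximizes this IoU, and counted as a detector when the score exceeds a small threshold; the sketch above only shows the single unit–concept comparison.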

Citations

Semantic Scholar estimates that this publication has 192 citations based on the available data.

