Interpretability of deep learning models: A survey of results

@article{Chakraborty2017InterpretabilityOD,
  title={Interpretability of deep learning models: A survey of results},
  author={Supriyo Chakraborty and Richard Tomsett and Ramya Raghavendra and Daniel Harborne and Moustafa Alzantot and Federico Cerutti and Mani B. Srivastava and Alun D. Preece and Simon J. Julier and Raghuveer M. Rao and Troy D. Kelley and Dave Braines and Murat Sensoy and Christopher J. Willis and Prudhvi Gurram},
  journal={2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)},
  year={2017},
  pages={1--6}
}
Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of…

Citations

Publications citing this paper.
Showing 1–10 of 23 citations

Issues in Human–Agent Communication

M. J. Barnes, Shan Lakhmani, Eric Holder, J. Y. C. Chen
  • ARL-TR-8636
  • 2019

Big Data, Big Challenges: A Healthcare Perspective

  • Lecture Notes in Bioengineering
  • 2019

Explaining What a Neural Network has Learned: Toward Transparent Classification

Kasun Amarasinghe, Milos Manic
  • 2019

Free-Lunch Saliency via Attention in Atari Agents

Dmitry Nikulin, Anastasia Ianina, Vladimir Aliev, Sergey Nikolenko
  • arXiv
  • 2019

NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning

  • 2019 IEEE International Conference on Smart Computing (SMARTCOMP)
  • 2019

References

Publications referenced by this paper.
Showing 1–10 of 48 references

The Mythos of Model Interpretability

Z. C. Lipton
  • CoRR, vol. abs/1606.03490
  • 2016