Minimalistic Explanations: Capturing the Essence of Decisions

@article{Schuessler2019Minimalistic,
  title={Minimalistic Explanations: Capturing the Essence of Decisions},
  author={M. Schuessler and Philipp Wei{\ss}},
  journal={Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems},
  year={2019}
}
  • M. Schuessler, Philipp Weiß
  • Published 2 May 2019
  • Computer Science
  • Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
The use of complex machine learning models can make systems opaque to users. Machine learning research proposes the use of post-hoc explanations, but it is unclear whether they give users insight into otherwise uninterpretable models. One minimalistic way of explaining image classifications by a deep neural network is to show only the areas that were decisive for the assignment of a label. In a pilot study, 20 participants looked at 14 such explanations generated either by a human or the… 
What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice
This paper surveys computer science (CS) research to identify the main research themes in AI explainability, or "explainable AI", focusing on Human-Computer Interaction (HCI) work and answering three questions about the selected publications.
Evaluating saliency map explanations for convolutional neural networks: a user study
An online between-group user study evaluating "saliency maps", a popular explanation algorithm for CNN-based image classifiers, indicates that saliency maps produced by the LRP algorithm helped participants learn about some specific image features the system is sensitive to.
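The gradient-style intuition behind such saliency maps can be sketched in a few lines. The snippet below estimates per-feature saliency with finite-difference gradients on a toy scoring function; it is a simplified illustration (not the LRP algorithm itself), and all names are hypothetical:

```python
import numpy as np

def gradient_saliency(score_fn, x, eps=1e-4):
    """Vanilla gradient saliency, a simpler relative of LRP:
    numerically estimate |d score / d x_i| for each input feature i.
    `score_fn` maps a 1-D input array to a scalar class score."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        # Central finite difference along feature i.
        grad[i] = (score_fn(x + e) - score_fn(x - e)) / (2 * eps)
    return np.abs(grad)

# Toy "classifier": its score depends heavily on feature 1, barely on feature 0.
score = lambda v: 5.0 * v[1] + 0.1 * v[0]
sal = gradient_saliency(score, np.array([1.0, 1.0]))
```

For real CNNs, the gradient is taken with respect to input pixels via backpropagation, and the absolute values are rendered as a heat map over the image.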
What's in a User? Towards Personalising Transparency for Music Recommender Interfaces
A study investigating how personal characteristics relate to the perception and gaze patterns of a music recommender interface, with and without explanations, found that users with a high Musical Sophistication score and a low Openness score benefit the most from explanations.


"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
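The local-surrogate idea can be sketched as follows. This is a minimal NumPy-only illustration of LIME's core loop (perturb the instance, weight samples by proximity, fit a weighted linear model), not the actual `lime` package API; names and defaults are illustrative:

```python
import numpy as np

def local_surrogate(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """LIME-style sketch: fit a proximity-weighted linear model around x.

    `predict_fn` maps an (n, d) array to a probability for one class.
    Returns the per-feature coefficients of the local linear surrogate.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the instance with Gaussian noise around x.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = predict_fn(Z)
    # Exponential kernel: perturbations close to x get higher weight.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]  # drop the intercept

# Example: a "black box" whose output depends only on feature 0.
black_box = lambda Z: 1 / (1 + np.exp(-3 * Z[:, 0]))
weights = local_surrogate(black_box, np.array([0.2, 1.0]))
```

The surrogate assigns a large coefficient to feature 0 and a near-zero coefficient to feature 1, mirroring how LIME surfaces which inputs drive a single prediction.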
Explanation in Artificial Intelligence: Insights from the Social Sciences
This paper argues that the field of explainable artificial intelligence should build on existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics, and draws out some important findings.
Going beyond Visualization. Verbalization as Complementary Medium to Explain Machine Learning Models
In this position paper, we argue that a combination of visualization and verbalization techniques is beneficial for creating broad and versatile insights into the structure and decision-making of machine learning models.
Towards A Rigorous Science of Interpretable Machine Learning
This position paper defines interpretability and describes when interpretability is needed (and when it is not), and suggests a taxonomy for rigorous evaluation and exposes open questions towards a more rigorous science of interpretable machine learning.
Explanations as Mechanisms for Supporting Algorithmic Transparency
An online experiment focusing on how different ways of explaining Facebook's News Feed algorithm might affect participants' beliefs and judgments about the News Feed found that all explanations caused participants to become more aware of how the system works, and helped them to determine whether the system is biased and if they can control what they see.
Principles of Explanatory Debugging to Personalize Interactive Machine Learning
An empirical evaluation shows that Explanatory Debugging increased participants' understanding of the learning system by 52% and allowed participants to correct its mistakes up to twice as efficiently as participants using a traditional learning system.
Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda
This work investigates how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers.
'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions
The study suggests there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
Rethinking the Inception Architecture for Computer Vision
This work explores ways to scale up networks that utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
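The parameter savings from factorized convolutions can be checked with a quick back-of-the-envelope calculation. The helper below is illustrative; it compares one 5x5 convolution against two stacked 3x3 convolutions (which cover the same receptive field) at an assumed channel width of 64:

```python
def conv_params(k, c_in, c_out):
    """Weights in a k x k convolution layer (biases omitted)."""
    return k * k * c_in * c_out

c = 64
single_5x5 = conv_params(5, c, c)       # one 5x5 convolution
factorized = 2 * conv_params(3, c, c)   # two stacked 3x3 convolutions
savings = 1 - factorized / single_5x5   # fraction of parameters saved
```

The ratio 18/25 is independent of channel width, so factorizing a 5x5 convolution into two 3x3 convolutions saves 28% of its parameters regardless of layer size.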
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
The paper describes the TensorFlow interface and an implementation of that interface built at Google, which has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.