Minimalistic Explanations: Capturing the Essence of Decisions

@article{Schuessler2019MinimalisticEC,
  title={Minimalistic Explanations: Capturing the Essence of Decisions},
  author={M. Schuessler and Philipp Wei{\ss}},
  journal={Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems},
  year={2019}
}
  • M. Schuessler, Philipp Weiß
  • Published 2 May 2019
  • Computer Science
  • Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
The use of complex machine learning models can make systems opaque to users. Machine learning research proposes post-hoc explanations, but it is unclear whether they give users insight into otherwise uninterpretable models. One minimalistic way of explaining image classifications by a deep neural network is to show only the areas that were decisive for the assignment of a label. In a pilot study, 20 participants looked at 14 such explanations generated either by a human or the…
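One simple way to produce such a "decisive areas only" explanation is occlusion: hide parts of the image, measure how much the predicted class score drops, and keep only the most influential regions visible. The sketch below illustrates this idea; the patch size, the keep_top fraction, and the predict_proba batch interface are illustrative assumptions, not the procedure used in the paper.

```python
# Minimal occlusion-based sketch of a "decisive areas only" explanation.
# Assumptions (not from the paper): predict_proba takes a batch of images
# (N, H, W, C) and returns class probabilities (N, num_classes); H and W
# are divisible by `patch`.
import numpy as np

def decisive_area_mask(image, predict_proba, target_class, patch=16, keep_top=0.1):
    """Black out everything except the patches most decisive for target_class."""
    h, w = image.shape[:2]
    base_score = predict_proba(image[None])[0, target_class]
    importance = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            occluded = image.copy()
            occluded[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
            # Importance = drop in the class score when this patch is hidden.
            importance[i, j] = base_score - predict_proba(occluded[None])[0, target_class]
    # Keep only the top `keep_top` fraction of patches, black out the rest.
    threshold = np.quantile(importance, 1 - keep_top)
    masked = np.zeros_like(image)
    for i in range(h // patch):
        for j in range(w // patch):
            if importance[i, j] >= threshold:
                ys, xs = slice(i * patch, (i + 1) * patch), slice(j * patch, (j + 1) * patch)
                masked[ys, xs] = image[ys, xs]
    return masked
```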
What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice
TLDR
This paper surveys computer science (CS) research to identify the main research themes around AI explainability, or "explainable AI", and focuses on Human-Computer Interaction (HCI) research, answering three questions about the selected publications.
Evaluating saliency map explanations for convolutional neural networks: a user study
TLDR
An online between-group user study designed to evaluate the performance of "saliency maps", a popular explanation algorithm for image classification applications of CNNs, indicates that saliency maps produced by the LRP algorithm helped participants learn about specific image features the system is sensitive to.
What's in a User? Towards Personalising Transparency for Music Recommender Interfaces
TLDR
A study that investigated how personal characteristics relate to the perception and gaze patterns of a music recommender interface in the presence and absence of explanations found that users with high Musical Sophistication and a low Openness score benefit the most from explanations.

References

Showing 1-10 of 18 references
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
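The core idea can be sketched in a few lines: sample perturbations around the instance to be explained, query the opaque model, weight the samples by their proximity to the instance, and fit a regularized linear surrogate whose coefficients serve as the local explanation. The following is a from-scratch illustration of that idea for tabular data, not the LIME library's API; black_box_predict, the noise scale, and the kernel width are assumptions.

```python
# From-scratch sketch of the local-surrogate idea behind LIME (tabular case).
# `black_box_predict` is assumed to map an (N, d) array to an (N,) array of
# predicted probabilities for the class being explained.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(instance, black_box_predict, num_samples=1000, kernel_width=0.75):
    """Return per-feature weights of a local linear surrogate around `instance`."""
    rng = np.random.default_rng(0)
    # 1. Sample perturbations in the neighborhood of the instance.
    perturbations = instance + rng.normal(scale=0.5, size=(num_samples, instance.size))
    # 2. Query the opaque model for its prediction on each perturbation.
    predictions = black_box_predict(perturbations)
    # 3. Weight samples by proximity to the original instance (RBF kernel).
    distances = np.linalg.norm(perturbations - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted, regularized linear model as the interpretable surrogate.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbations, predictions, sample_weight=weights)
    return surrogate.coef_  # local feature importances
```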
Explanation in Artificial Intelligence: Insights from the Social Sciences
TLDR
This paper argues that the field of explainable artificial intelligence should build on existing research, reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics, and draws out some important findings.
Going beyond Visualization. Verbalization as Complementary Medium to Explain Machine Learning Models
In this position paper, we argue that a combination of visualization and verbalization techniques is beneficial for creating broad and versatile insights into the structure and decision-making…
Towards A Rigorous Science of Interpretable Machine Learning
TLDR
This position paper defines interpretability, describes when interpretability is needed (and when it is not), suggests a taxonomy for rigorous evaluation, and exposes open questions towards a more rigorous science of interpretable machine learning.
Explanations as Mechanisms for Supporting Algorithmic Transparency
TLDR
An online experiment focusing on how different ways of explaining Facebook's News Feed algorithm might affect participants' beliefs and judgments about the News Feed found that all explanations made participants more aware of how the system works and helped them determine whether the system is biased and whether they can control what they see.
Principles of Explanatory Debugging to Personalize Interactive Machine Learning
TLDR
An empirical evaluation shows that Explanatory Debugging increased participants' understanding of the learning system by 52% and allowed them to correct its mistakes up to twice as efficiently as participants using a traditional learning system.
Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda
TLDR
This work investigates how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers.
'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions
TLDR
There may be no 'best' approach to explaining algorithmic decisions, and reflection on their automated nature both implicates and mitigates justice dimensions.
Rethinking the Inception Architecture for Computer Vision
TLDR
This work explores ways to scale up networks that utilize the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization.
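The factorization mentioned above can be illustrated briefly: a 5x5 convolution covers the same receptive field as two stacked 3x3 convolutions but uses more parameters, and an n x n convolution can be split into a 1 x n followed by an n x 1 convolution. The PyTorch sketch below shows only this general principle and is not the exact Inception-v3 module layout.

```python
# Sketch of convolution factorization (general idea, not Inception-v3 modules).
import torch.nn as nn

def factorized_5x5(in_ch, out_ch):
    # Two stacked 3x3 convolutions see a 5x5 receptive field with
    # 2 * 3 * 3 = 18 weights per channel pair instead of 25.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

def factorized_nxn(in_ch, out_ch, n=7):
    # Asymmetric factorization: 1 x n followed by n x 1 instead of a full n x n kernel.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=(1, n), padding=(0, n // 2)),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=(n, 1), padding=(n // 2, 0)),
        nn.ReLU(inplace=True),
    )
```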
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
TLDR
The TensorFlow interface, and an implementation of that interface built at Google, are described; TensorFlow has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.