Corpus ID: 247218369

LIMEADE: From AI Explanations to Advice Taking

  title={LIMEADE: From AI Explanations to Advice Taking},
  author={B. Lee and Doug Downey and Kyle Lo and Daniel S. Weld},

arXiv:2003.04315v3 [cs.IR] 1 Mar 2022

Benjamin Charles Germain Lee, University of Washington & Allen Institute for Artificial Intelligence, USA
Doug Downey, Allen Institute for Artificial Intelligence, USA
Kyle Lo, Allen Institute for Artificial Intelligence, USA
Daniel S. Weld, University of Washington & Allen Institute for Artificial Intelligence, USA



Principles of Explanatory Debugging to Personalize Interactive Machine Learning
An empirical evaluation shows that Explanatory Debugging increased participants' understanding of the learning system by 52% and allowed participants to correct its mistakes up to twice as efficiently as participants using a traditional learning system.
Programs with common sense
This paper discusses programs to manipulate, in a suitable formal language (most likely a part of the predicate calculus), common instrumental statements. The basic program will draw…
Too much, too little, or just right? Ways explanations impact end users' mental models
It is suggested that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations.
Guidelines for Human-AI Interaction
This work proposes 18 generally applicable design guidelines for human-AI interaction that can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Towards A Rigorous Science of Interpretable Machine Learning
This position paper defines interpretability and describes when interpretability is needed (and when it is not), and suggests a taxonomy for rigorous evaluation and exposes open questions towards a more rigorous science of interpretable machine learning.
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
This work introduces a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary.
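The input-gradient penalty summarized above can be sketched for the simplest case, a logistic regression model, where the input gradient of the log-odds is just the weight vector. This is an illustrative reduction, not the paper's differentiable-model implementation: the function name, the annotation mask, and the plain gradient-descent loop are assumptions for the sketch.

```python
import numpy as np

def train_with_gradient_penalty(X, y, mask, lam=5.0, lr=0.05, steps=1000):
    """Logistic regression with a penalty on input gradients: for features an
    annotator marked irrelevant (mask == 1), the model's input gradient is
    pushed toward zero. For a linear model the input gradient of the log-odds
    equals w, so the penalty reduces to lam * sum(mask * w**2)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))              # predicted probabilities
        # Cross-entropy gradient plus the gradient of the masked penalty term.
        grad = X.T @ (p - y) / len(y) + 2.0 * lam * mask * w
        w -= lr * grad
    return w

# Two identical, equally predictive features; the annotator marks feature 1
# irrelevant, steering the model to rely on feature 0 instead.
rng = np.random.default_rng(0)
z = rng.normal(size=200)
X = np.stack([z, z], axis=1)
y = (z > 0).astype(float)
w = train_with_gradient_penalty(X, y, mask=np.array([0.0, 1.0]))
```

With identical features the unpenalized problem is degenerate (any split of weight between the two fits equally well); the penalty breaks the tie, driving the masked weight toward zero while the unmasked weight carries the signal.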
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
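The local-surrogate idea behind LIME can be sketched as follows. This is a minimal illustration of the principle only, assuming tabular inputs, Gaussian perturbations, and an exponential proximity kernel; the paper's actual method works over interpretable representations and fits sparse linear models.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, n_samples=500, scale=0.3,
                    kernel_width=0.75, seed=0):
    """Fit a proximity-weighted linear model around x to approximate
    predict_fn locally; the coefficients act as per-feature importances."""
    rng = np.random.default_rng(seed)
    # Sample the neighborhood of x with Gaussian perturbations.
    samples = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    preds = predict_fn(samples)
    # Weight each sample by its proximity to x (exponential kernel).
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1e-3)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_

# Explain a simple nonlinear black box at the point (0, 1): locally,
# sin(x0) behaves like x0 and the second term has slope 2.
f = lambda X: np.sin(X[:, 0]) + 2.0 * X[:, 1]
coefs = local_surrogate(f, np.array([0.0, 1.0]))
```

The surrogate is faithful only near the queried point: shrinking `scale` and `kernel_width` makes the linear fit more local, at the cost of a noisier estimate.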
Improving Controllability and Predictability of Interactive Recommendation Interfaces for Exploratory Search
Improvements in task performance, usability, perceived usefulness and user acceptance are presented in a visual user-controllable search interface involving exploratory search for scientific literature.
Explaining collaborative filtering recommendations
This paper presents experimental evidence that shows that providing explanations can improve the acceptance of ACF systems, and presents a model for explanations based on the user's conceptual model of the recommendation process.
Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
For an explanation of a deep learning model to be effective, it must provide both insight into the model and suggest a corresponding action in order to achieve some objective. Too often, the litany of…