Corpus ID: 247218369

LIMEADE: From AI Explanations to Advice Taking

@inproceedings{Lee2020LIMEADEFA,
  title={LIMEADE: From AI Explanations to Advice Taking},
  author={B. Lee and Doug Downey and Kyle Lo and Daniel S. Weld},
  year={2020}
}
arXiv:2003.04315v3 [cs.IR] 1 Mar 2022

LIMEADE: From AI Explanations to Advice Taking

BENJAMIN CHARLES GERMAIN LEE, University of Washington & Allen Institute for Artificial Intelligence, USA
DOUG DOWNEY, Allen Institute for Artificial Intelligence, USA
KYLE LO, Allen Institute for Artificial Intelligence, USA
DANIEL S. WELD, University of Washington & Allen Institute for Artificial Intelligence, USA


References

SHOWING 1-10 OF 87 REFERENCES
Principles of Explanatory Debugging to Personalize Interactive Machine Learning
TLDR
An empirical evaluation shows that Explanatory Debugging increased participants' understanding of the learning system by 52% and allowed participants to correct its mistakes up to twice as efficiently as participants using a traditional learning system.
Programs with common sense
Abstract: This paper discusses programs to manipulate in a suitable formal language (most likely a part of the predicate calculus) common instrumental statements. The basic program will draw immediate conclusions from a list of premises.
Too much, too little, or just right? Ways explanations impact end users' mental models
TLDR
It is suggested that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations.
Guidelines for Human-AI Interaction
TLDR
This work proposes 18 generally applicable design guidelines for human-AI interaction that can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Towards A Rigorous Science of Interpretable Machine Learning
TLDR
This position paper defines interpretability and describes when interpretability is needed (and when it is not), and suggests a taxonomy for rigorous evaluation and exposes open questions towards a more rigorous science of interpretable machine learning.
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
TLDR
This work introduces a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary.
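As a rough illustration of the input-gradient penalty this reference describes, the sketch below (assuming a PyTorch classifier; the function name, the lambda weight, and the annotator-provided mask of irrelevant features are hypothetical) combines cross-entropy with a "right reasons" term that penalizes explanation mass on features known to be irrelevant.

import torch

def right_reasons_loss(model, x, y, irrelevant_mask, lam=1.0):
    """Cross-entropy plus a penalty on input gradients over features an
    annotator marked as irrelevant (irrelevant_mask == 1). Hypothetical sketch."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = torch.nn.functional.cross_entropy(logits, y)
    # Gradient of the summed log-probabilities with respect to the inputs,
    # kept in the graph so the penalty itself is differentiable.
    log_probs = torch.log_softmax(logits, dim=1)
    grads = torch.autograd.grad(log_probs.sum(), x, create_graph=True)[0]
    # Penalize input-gradient magnitude on the masked (irrelevant) features.
    penalty = (irrelevant_mask * grads).pow(2).sum()
    return ce + lam * penalty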
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
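A minimal sketch of the idea behind LIME for text, not the library's own API: perturb one instance by masking tokens, weight the perturbations by proximity to the original, and fit a sparse linear surrogate to the black-box probabilities. The function name, the kernel width, and the assumption that classifier_fn maps a list of strings to an (n_samples, n_classes) probability array are all hypothetical choices for illustration.

import numpy as np
from sklearn.linear_model import Ridge

def lime_text_sketch(classifier_fn, tokens, num_samples=500, kernel_width=0.25):
    """Fit a local linear surrogate around one tokenized text instance."""
    rng = np.random.default_rng(0)
    # Binary masks: 1 keeps a token, 0 drops it; row 0 is the original instance.
    masks = rng.integers(0, 2, size=(num_samples, len(tokens)))
    masks[0, :] = 1
    texts = [" ".join(t for t, keep in zip(tokens, m) if keep) for m in masks]
    probs = classifier_fn(texts)[:, 1]  # assumed positive-class probabilities
    # Proximity weighting: perturbations that drop fewer tokens count more.
    distances = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    # Token weights of the local surrogate, largest magnitude first.
    return sorted(zip(tokens, surrogate.coef_), key=lambda kv: -abs(kv[1]))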
Improving Controllability and Predictability of Interactive Recommendation Interfaces for Exploratory Search
TLDR
Improvements in task performance, usability, perceived usefulness and user acceptance are presented in a visual user-controllable search interface involving exploratory search for scientific literature.
Explaining collaborative filtering recommendations
TLDR
This paper presents experimental evidence that shows that providing explanations can improve the acceptance of ACF systems, and presents a model for explanations based on the user's conceptual model of the recommendation process.
Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, proposed explanation methods stop at the first step, providing insight into a model without a corresponding way to act on it.