Why and why not explanations improve the intelligibility of context-aware intelligent systems


Context-aware intelligent systems employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust in, satisfaction with, and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and were then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications.

DOI: 10.1145/1518701.1519023



Cite this paper

@inproceedings{Lim2009WhyAW,
  title     = {Why and why not explanations improve the intelligibility of context-aware intelligent systems},
  author    = {Brian Y. Lim and Anind K. Dey and Daniel Avrahami},
  booktitle = {CHI},
  year      = {2009}
}