Corpus ID: 235743234

Understanding Consumer Preferences for Explanations Generated by XAI Algorithms

@article{Ramon2021UnderstandingCP,
  title={Understanding Consumer Preferences for Explanations Generated by XAI Algorithms},
  author={Yanou Ramon and Tom Vermeire and Olivier Toubia and David Martens and Theodoros Evgeniou},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.02624}
}
Explaining firm decisions made by algorithms in customer-facing applications is increasingly required by regulators and expected by customers. While the emerging field of Explainable Artificial Intelligence (XAI) has mainly focused on developing algorithms that generate such explanations, there has not yet been sufficient consideration of customers’ preferences for various types and formats of explanations. We discuss theoretically and study empirically people’s preferences for explanations of…

References

SHOWING 1-10 OF 85 REFERENCES
Denied by an (Unexplainable) Algorithm: Teleological Explanations for Algorithmic Decisions Enhance Customer Satisfaction
TLDR
This work studies consumer responses to goal-oriented, or “teleological,” explanations, which present the purpose or objective of the algorithm without revealing its mechanism, making them candidates for explaining decisions made by “unexplainable” algorithms.
Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach
TLDR
It is shown that features that have a large importance weight for a model prediction may not actually affect the corresponding decision, and importance weights are insufficient to communicate whether and how features influence system decisions.
A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
TLDR
This study empirically compares the effectiveness and efficiency of these novel algorithms against a model-agnostic heuristic search algorithm for finding evidence counterfactuals using 13 behavioral and textual data sets and shows that different search methods have different strengths.
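To make the "evidence counterfactual" idea concrete, here is a minimal sketch of an SEDC-style greedy search on a toy text classifier: it repeatedly removes the word whose removal most lowers the score for the predicted class until the decision flips, and the removed words form the explanation. The corpus, model, and the helper name find_evidence_counterfactual are illustrative assumptions, not the algorithms compared in the paper.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-in for textual data and a trained classifier (assumed, for illustration only).
docs = ["cheap loan offer", "exclusive loan offer now",
        "meeting notes attached", "please review the meeting notes"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham
vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

def find_evidence_counterfactual(text, max_removals=10):
    """Greedily remove words until the predicted class changes."""
    words = text.split()
    original = clf.predict(vec.transform([text]))[0]
    removed = []
    for _ in range(min(max_removals, len(words))):
        # Score every single-word removal; keep the one that lowers the original class score most.
        candidates = [" ".join(words[:i] + words[i + 1:]) for i in range(len(words))]
        scores = [clf.predict_proba(vec.transform([c]))[0, original] for c in candidates]
        removed.append(words.pop(int(np.argmin(scores))))
        if clf.predict(vec.transform([" ".join(words)]))[0] != original:
            return removed  # words whose removal flips the model's decision
    return None  # no counterfactual found within the removal budget

print(find_evidence_counterfactual("cheap loan offer about the meeting"))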
Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them
TLDR
Giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts.
Making sense of recommendations
TLDR
It is found that recommender systems outperform humans, whether strangers, friends, or family, in a domain that affords humans many advantages: predicting which jokes people will find funny.
Bias and Productivity in Humans and Algorithms: Theory and Evidence from Résumé Screening
Where should better learning technology improve decisions? I develop a formal model of decision-making in which better learning technology is complementary with experimentation. Noisy, inconsistent…
Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR
TLDR
It is suggested data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support the aims of understanding decisions, contesting them, and learning what could be changed to obtain a desired outcome; such explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
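As a concrete illustration of this style of counterfactual, the sketch below searches for the smallest change to an input that pushes a toy logistic-regression score across the decision threshold, by minimising a squared prediction loss plus the distance to the original instance with plain gradient descent. The toy weights, target, and hyperparameters are assumptions for illustration, not the paper's setup.

import numpy as np

# Toy "black box": a logistic scorer f(x) = sigmoid(w.x + b) with assumed weights.
w = np.array([1.5, -2.0])
b = -0.25

def f(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.6, lam=50.0, lr=0.01, steps=2000):
    """Minimise lam * (f(x') - target)^2 + ||x' - x||^2 by gradient descent."""
    x_cf = x.astype(float)
    for _ in range(steps):
        p = f(x_cf)
        grad = 2 * lam * (p - target) * p * (1 - p) * w + 2 * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x0 = np.array([0.2, 0.8])         # instance currently below the 0.5 threshold (denied)
x_cf = counterfactual(x0)
print(f(x0), f(x_cf), x_cf - x0)  # a small change that crosses the threshold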
When Does Retargeting Work? Information Specificity in Online Advertising
TLDR
Interestingly, the data suggest that dynamic retargeted ads are, on average, less effective than their generic equivalents, but when consumers exhibit browsing behavior that suggests their product preferences have evolved (e.g., visiting review websites), dynamic retargeted ads no longer underperform.
"That's (not) the output I expected!" On the role of end user expectations in creating explanations of AI systems
TLDR
It is found that factual explanations are indeed appropriate when expectations and output match, but that when they do not match, neither factual nor counterfactual explanations appear appropriate, which suggests that explanation-generating systems may need to identify such end user expectations.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
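As a rough illustration of the local-surrogate idea behind LIME, the sketch below samples perturbations around one instance, weights them by proximity, and fits a weighted linear model to a black box's outputs; the linear coefficients act as the local explanation. The toy data, kernel width, and the helper name explain_locally are assumptions, not the lime package's actual code.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Assumed black-box model on toy tabular data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, n_samples=1000, kernel_width=0.75):
    """Return per-feature weights of a local linear surrogate around x."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))    # perturbations around x
    preds = black_box.predict_proba(Z)[:, 1]                   # black-box outputs
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)        # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_                                     # local feature importances

print(explain_locally(X[0]))  # features 0 and 2 should carry most of the local weight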