Corpus ID: 195833808

Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models

@article{Weerts2019CaseBasedRF,
  title={Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models},
  author={Hilde J. P. Weerts and Werner van Ipenburg and Mykola Pechenizkiy},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.03334}
}
In many contexts, it can be useful for domain experts to understand to what extent predictions made by a machine learning model can be trusted. In particular, estimates of trustworthiness can be useful for fraud analysts who process machine learning-generated alerts of fraudulent transactions. In this work, we present a case-based reasoning (CBR) approach that provides evidence on the trustworthiness of a prediction in the form of a visualization of similar previous instances. Different from…
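The abstract describes the approach only at a high level. As a rough illustration (not the authors' implementation; the data, model, and choice of k below are placeholders), retrieving similar past cases as evidence for a fraud alert could be sketched as follows:

```python
# Minimal sketch of case-based evidence for a fraud alert (illustrative only,
# not the paper's code). Assumes X_hist/y_hist are historical transactions with
# confirmed labels and x_alert is the transaction that triggered the alert.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def similar_cases(x_alert, X_hist, y_hist, k=5):
    """Retrieve the k most similar historical transactions and their outcomes."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_hist)
    dist, idx = nn.kneighbors(x_alert.reshape(1, -1))
    neighbours = idx[0]
    # The share of confirmed frauds among the neighbours serves as informal
    # evidence on how much the analyst should trust the alert.
    return neighbours, dist[0], y_hist[neighbours].mean()

# Example usage with synthetic data:
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(1000, 8))
y_hist = rng.integers(0, 2, size=1000)
idx, dist, fraud_rate = similar_cases(X_hist[0], X_hist[1:], y_hist[1:])
print(f"fraud rate among similar past cases: {fraud_rate:.2f}")
```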
The accuracy versus interpretability trade-off in fraud detection model
Like a hydra, fraudsters adapt and circumvent increasingly sophisticated barriers erected by public or private institutions. Among these institutions, banks must quickly take measures to…
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
This study conducts an experiment following XAI Test to evaluate three popular post-hoc explanation methods – LIME, SHAP, and TreeInterpreter – on a real-world fraud detection task, with real data, a deployed ML model, and fraud analysts.
Explainable Sentiment Analysis Application for Social Media Crisis Management in Retail
This study develops an Explainable Sentiment Analysis (XSA) application for Twitter data, proposes research propositions focused on evaluating such an application in a hypothetical crisis management scenario, and illustrates that the XSA application can be effective in surfacing the most important words behind customers' sentiment in individual tweets.
How can I choose an explainer?: An Application-grounded Evaluation of Post-hoc Explanations
This study aims to bridge the gap by proposing XAI Test, an application-grounded evaluation methodology tailored to isolate the impact of providing the end-user with different levels of information, and shows that popular XAI methods have a worse impact than desired.

References

Showing 1-10 of 26 references
A Case-Based Explanation System for Black-Box Systems
This paper presents a Case-Based Reasoning (CBR) solution for providing supporting explanations of black-box systems. It uses local information to assess the importance of each feature and takes advantage of the derived feature importance to help select cases that better reflect the black-box solution and are thus more convincing explanations.
A Human-Grounded Evaluation of SHAP for Alert Processing
The results suggest that the SHAP explanations do impact the decision-making process, although the model's confidence score remains a leading source of evidence.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
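As a simplified sketch of the idea behind LIME (an illustration, not the actual lime library; the perturbation scale and proximity kernel are assumptions): perturb the instance, weight the samples by proximity, and fit a sparse local linear surrogate to the black-box outputs.

```python
# Simplified sketch of LIME's local-surrogate idea (not the lime package).
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_proba, x, n_samples=500, kernel_width=1.0, seed=0):
    """Fit a weighted linear model around x to approximate the black box locally.

    predict_proba is assumed to behave like a scikit-learn classifier's
    predict_proba; scale and kernel_width are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))   # local perturbations
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)  # proximity kernel
    y = predict_proba(Z)[:, 1]                                    # black-box fraud probability
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_                                        # local feature attributions
```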
To Trust Or Not To Trust A Classifier
This work proposes a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example, and shows empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier's confidence score as well as many other baselines.
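The core of the trust score can be sketched in a few lines: compare the distance from the test point to the nearest training example of the predicted class with the distance to the nearest example of any other class. The sketch below is a simplification that omits the density-based filtering step used in the paper; the function and variable names are hypothetical.

```python
# Simplified trust-score sketch: distance to the nearest "other" class divided
# by distance to the predicted class (the paper adds density-based filtering).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def trust_score(x, y_pred, X_train, y_train):
    d = {}
    for label in np.unique(y_train):
        nn = NearestNeighbors(n_neighbors=1).fit(X_train[y_train == label])
        d[label] = nn.kneighbors(x.reshape(1, -1))[0][0, 0]
    d_pred = d[y_pred]
    d_other = min(v for lbl, v in d.items() if lbl != y_pred)
    return d_other / (d_pred + 1e-12)  # high score -> prediction agrees with its neighbours
```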
Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid
  • R. Kohavi
  • Mathematics, Computer Science
  • KDD
  • 1996
A new algorithm, NBTree, is proposed, which induces a hybrid of decision-tree and Naive-Bayes classifiers: the decision-tree nodes contain univariate splits as in regular decision trees, but the leaves contain Naive-Bayes classifiers.
Explanation in Case-Based Reasoning–Perspectives and Goals
A framework for explanation in case-based reasoning (CBR) based on explanation goals is presented, and ways that the goals of the user and system designer should be taken into account when deciding what is a good explanation for a given CBR system are proposed.
Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR
It is suggested data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims, which describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
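As a rough, hypothetical illustration of the counterfactual idea (not the method proposed in the paper), one could search for the smallest sampled perturbation of an input that flips the model's decision:

```python
# Naive counterfactual search sketch (hypothetical illustration only).
import numpy as np

def simple_counterfactual(predict, x, n_samples=2000, scale=0.5, seed=0):
    """Return the closest randomly sampled input whose prediction differs from x's.

    predict is assumed to behave like a scikit-learn classifier's predict.
    """
    rng = np.random.default_rng(seed)
    original = predict(x.reshape(1, -1))[0]
    best, best_dist = None, np.inf
    for _ in range(n_samples):
        candidate = x + rng.normal(scale=scale, size=x.shape)
        if predict(candidate.reshape(1, -1))[0] != original:
            dist = np.linalg.norm(candidate - x)
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best  # smallest change found that flips the decision (may be None)
```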
An Overview of Concept Drift Applications
This chapter provides an application-oriented view of concept drift research, with a focus on supervised learning tasks, and constructs a reference framework for positioning application tasks within a spectrum of problems related to concept drift.
An introduction to case-based reasoning
  • J. Kolodner
  • Computer Science
  • Artificial Intelligence Review
  • 2004
This paper discusses the processes involved in case-based reasoning and the tasks for which case-based reasoning is useful.
Anchors: High-Precision Model-Agnostic Explanations
We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions. We…