Counterfactual Shapley Additive Explanations
@article{Albini2021CounterfactualSA,
  title   = {Counterfactual Shapley Additive Explanations},
  author  = {Emanuele Albini and Jason Long and Danial Dervovic and Daniele Magazzeni},
  journal = {2022 ACM Conference on Fairness, Accountability, and Transparency},
  year    = {2021}
}
Feature attributions are a common paradigm for model explanations due to their simplicity in assigning a single numeric score for each input feature to a model. In the actionable recourse setting, wherein the goal of the explanations is to improve outcomes for model consumers, it is often unclear how feature attributions should be correctly used. With this work, we aim to strengthen and clarify the link between actionable recourse and feature attributions. Concretely, we propose a variant of…
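As a rough illustration of that link, the sketch below estimates Shapley-value attributions by Monte Carlo sampling, with the background distribution drawn from points the model assigns to the opposite class, so the scores are anchored to instances a consumer could move towards. This is a minimal sketch of the general idea only, not the method proposed in the paper: the toy model, the opposite-class stand-in for counterfactual points, and the shapley_attributions helper are assumptions made for the example.

```python
# Minimal sketch (not the paper's method): Monte Carlo Shapley attributions where
# "missing" features are imputed from a background set. Drawing that background
# from opposite-class points -- a crude stand-in for counterfactual points --
# ties the attributions to recourse.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0]                                      # instance to explain
pred = model.predict(x.reshape(1, -1))[0]
background = X[model.predict(X) != pred]      # points on the other side of the boundary


def shapley_attributions(f, x, background, n_samples=2000):
    """Monte Carlo estimate of Shapley values of f at x against a background set."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()  # all features "missing"
        prev = f(z.reshape(1, -1))[0]
        for j in order:
            z[j] = x[j]                        # reveal feature j
            cur = f(z.reshape(1, -1))[0]
            phi[j] += cur - prev               # marginal contribution of feature j
            prev = cur
    return phi / n_samples


phi = shapley_attributions(lambda a: model.predict_proba(a)[:, 1], x, background)
print({f"x{j}": round(v, 3) for j, v in enumerate(phi)})
```

By construction the attributions sum (approximately) to the gap between the model's score at x and its average score over the chosen background, so each number can be read as a feature's contribution to crossing the decision boundary relative to those opposite-class points.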
11 Citations
FairShap: A Data Re-weighting Approach for Algorithmic Fairness based on Shapley Values
- Computer Science, ArXiv
- 2023
The proposed FairShap method is based on the Shapley Value, a well-known mathematical framework from game theory to achieve a fair allocation of resources, and is easily interpretable, as it measures the contribution of each training data point to a predefined fairness metric.
Robust Counterfactual Explanations for Tree-Based Ensembles
- Computer Science, ICML
- 2022
The results demonstrate that the proposed strategy RobX generates counterfactuals that are significantly more robust (nearly 100% validity after actual model changes) and also realistic (in terms of local outlier factor) over existing state-of-the-art methods.
Local and Global Explainability Metrics for Machine Learning Predictions
- Computer Science, ArXiv
- 2023
Novel quantitative metric frameworks are proposed for interpreting the predictions of classifier and regressor models; these can provide a more comprehensive understanding of model predictions and facilitate better communication between decision-makers and stakeholders, thereby increasing the overall transparency and accountability of AI systems.
The Inadequacy of Shapley Values for Explainability
- Economics, ArXiv
- 2023
This paper develops a rigorous argument for why the use of Shapley values in explainable AI (XAI) will necessarily yield provably misleading information about the relative importance of features for…
Efficient XAI Techniques: A Taxonomic Survey
- Computer Science, ArXiv
- 2023
This paper categorizes existing techniques of XAI acceleration into efficient non-amortized and efficient amortized methods, and summarizes the challenges of deploying XAI acceleration methods in real-world scenarios, of overcoming the trade-off between faithfulness and efficiency, and of selecting among different acceleration methods.
From Shapley Values to Generalized Additive Models and back
- Economics, ArXiv
- 2022
In explainable machine learning, local post-hoc explanation algorithms and inherently interpretable models are often seen as competing approaches. This work offers a partial reconciliation between…
On the Trade-Off between Actionable Explanations and the Right to be Forgotten
- Law, ArXiv
- 2022
Building on the state-of-the-art theory of linear models and of overparameterized neural networks via neural tangent kernels (NTKs), this work suggests identifying a critical subset of training data points whose deletion maximizes the fraction of invalidated recourses.
BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence
- Computer Science, ArXiv
- 2022
This work aims to show how varying perturbations and adding simple guardrails can help to avoid potentially flawed conclusions, how the treatment of categorical variables is an important consideration in both post-hoc explainability and ablation studies, and how to identify useful baselines for XAI methods as well as viable perturbations for ablation studies.
Explainable AI: Foundations, Applications, Opportunities for Data Management Research
- Computer Science, 2022 IEEE 38th International Conference on Data Engineering (ICDE)
- 2022
This tutorial will present these novel explanation approaches for explainable artificial intelligence (XAI), characterize their strengths and limitations, and enumerate opportunities for data management research in the context of XAI.
Deletion and Insertion Tests in Regression Models
- Computer Science, ArXiv
- 2022
It is shown that sorting variables by their Shapley value does not necessarily give the optimal ordering for an insertion-deletion test, although it does for monotone functions of additive models, such as logistic regression.
References
Showing 1-10 of 82 references
Text Counterfactuals via Latent Optimization and Shapley-Guided Search
- Computer Science, EMNLP
- 2021
Ablation studies show that both latent optimization and the use of Shapley values improve success rate and the quality of the generated counterfactuals.
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
- Computer Science, NeurIPS Datasets and Benchmarks
- 2021
CARLA is presented, a Python library for benchmarking counterfactual explanation methods across different data sets and different machine learning models, together with a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods.
Counterfactual Explanations for Arbitrary Regression Models
- Computer Science, ArXiv
- 2021
This work formulates CFE search for regression models in a rigorous mathematical framework using differentiable potentials, which resolves robustness issues in threshold-based objectives, and proves that in this framework verifying the existence of counterfactuals is NP-complete while finding instances using such potentials is CLS-complete.
Rational Shapley Values
- Computer Science, FAccT
- 2022
Rational Shapley values are introduced, a novel XAI method that synthesizes and extends these seemingly incompatible approaches in a rigorous, flexible manner and compares favorably to state-of-the-art XAI tools in a range of quantitative and qualitative comparisons.
FIMAP: Feature Importance by Minimal Adversarial Perturbation
- Computer Science, AAAI
- 2021
This work presents Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual explanations, and extends the approach to categorical features using a partitioned Gumbel layer and demonstrates its efficacy on standard datasets.
Argumentative XAI: A Survey
- Computer Science, IJCAI
- 2021
This survey overviews the literature focusing on different types of explanation, different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use, and lays out a roadmap for future work.
If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
- Computer Science, IJCAI
- 2021
Five key deficits in the evaluation of these methods are detailed and a roadmap, with standardised benchmark evaluations, is proposed to resolve the issues arising; issues that currently effectively block scientific progress in this field.
A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence
- Computer Science, IEEE Access
- 2021
This work conducts a systematic literature review which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study and defines a taxonomy regarding both theoretical and practical approaches to contrastive and counterfactual explanation.