Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals

Sainyam Galhotra, Romila Pradhan, Babak Salimi. Proceedings of the 2021 International Conference on Management of Data.
There has been a recent resurgence of interest in explainable artificial intelligence (XAI), which aims to reduce the opaqueness of AI-based decision-making systems so that humans can scrutinize and trust them. Prior work in this context has focused on attributing responsibility for an algorithm's decisions to its inputs, wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box…

Minun: evaluating counterfactual explanations for entity matching

Minun is proposed: a model-agnostic method for generating explanations for entity matching (EM) solutions that significantly outperforms popular explainable-AI methods such as LIME and SHAP in both explanation quality and scalability.

Identification and Estimation of Joint Probabilities of Potential Outcomes in Observational Studies with Covariate Information

The joint probabilities of potential outcomes are fundamental components of causal inference in the sense that (i) if they are identifiable, then the causal risk is also identifiable, but not vice versa…

Adjusting Machine Learning Decisions for Equal Opportunity and Counterfactual Fairness (2022)

Machine learning (ML) methods have the potential to automate high-stakes decisions, such as bail admissions or credit lending, by analyzing and learning from historical data. But these algorithmic…

Contrastive Counterfactual Fairness in Algorithmic Decision-Making

This paper introduces the first probabilistic fairness-aware data augmentation approach based on contrastive counterfactual causality, and concludes that the proposed method shows a promising ability to capture and mitigate unfairness in AI deployment.

XInsight: eXplainable Data Analysis Through The Lens of Causality

This study promotes, for the first time, a transparent and explainable perspective on data analysis, called eXplainable Data Analysis (XDA), which provides data analysis with qualitative and quantitative explanations of causal and non-causal semantics.

Explainability's Gain is Optimality's Loss?: How Explanations Bias Decision-making

These findings from a field experiment demonstrate empirically how the causal-model semantics of feature-based explanations induce leakage from the decision-maker's prior beliefs, which can lead to sub-optimal and biased decision outcomes.

Combining Counterfactuals With Shapley Values To Explain Image Models

This work develops a pipeline to generate counterfactuals and uses it to estimate Shapley values, which are used to obtain contrastive and interpretable explanations with strong axiomatic guarantees.
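The combination described above can be sketched in miniature. The snippet below estimates Shapley values by averaging marginal contributions over sampled feature orderings, where features outside the current coalition are held at baseline (counterfactual) values. The toy linear model, the zero baseline, and all function names are illustrative assumptions, not the paper's actual pipeline (which generates counterfactuals for image models with generative models).

```python
import random

# Toy black-box model: a linear scorer over three numeric features (illustrative).
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2]

def shapley_values(model, x, baseline, n_samples=200, seed=0):
    """Estimate Shapley values by sampling feature orderings.
    Features not yet added are held at 'baseline' values, so each
    partial input is a counterfactual version of x."""
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_samples):
        perm = list(range(n))
        rng.shuffle(perm)
        z = list(baseline)            # start from the counterfactual baseline
        prev = model(z)
        for i in perm:                # reveal features one at a time
            z[i] = x[i]
            cur = model(z)
            phi[i] += cur - prev      # marginal contribution of feature i
            prev = cur
    return [p / n_samples for p in phi]

print(shapley_values(model, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))
# → [2.0, 2.0, -1.5]  (for a linear model these are exactly w_i * (x_i - baseline_i))
```

For the linear toy model every ordering yields the same marginal contributions, so the estimate is exact; for nonlinear models the sampling average converges to the true Shapley values.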

Explaining Image Classifiers Using Contrastive Counterfactuals in Generative Latent Spaces

This work introduces a novel method to generate causal and yet interpretable counterfactual explanations for image classifiers using pretrained generative models without any re-training or conditioning.

CFDB: Machine Learning Model Analysis via Databases of CounterFactuals

CFDB is proposed: a unified framework for querying counterfactuals (CFs) that consolidates common approaches in CF-based analysis and provides multiple levels of abstraction in a relational framework; it is demonstrated in the context of the Lending Club Loan Data.

A survey of algorithmic recourse: definitions, formulations, solutions, and prospects

An extensive literature review is performed, and an overview is provided of prospective research directions in which the community may engage, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness.

The philosophical basis of algorithmic recourse

It is argued that two essential components of a good life, temporally extended agency and trust, are underwritten by recourse, and a revised approach to recourse is suggested.

On Pearl’s Hierarchy and the Foundations of Causal Inference

This chapter develops a novel and comprehensive treatment of the Pearl Causal Hierarchy (PCH) through two complementary lenses, one logical-probabilistic and the other inferential-graphical, and investigates an inferential system known as the do-calculus, showing how it can be sufficient, and in many cases necessary, to allow inferences across the PCH's layers.

Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End

An interpretation based on the actual causality framework is provided, and it is shown how counterfactual examples can be used to evaluate the goodness of an attribution-based explanation in terms of its necessity and sufficiency.

Counterfactual Explanations for Machine Learning: A Review

A rubric is designed with desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently-proposed algorithms against that rubric, providing easy comparison and comprehension of the advantages and disadvantages of different approaches.
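The core object the surveyed algorithms produce can be illustrated with a minimal sketch: given a black-box scorer and a rejected instance, search for a nearby input whose prediction flips. The logistic scorer, the step size, and the greedy coordinate search below are illustrative assumptions, not any specific algorithm from the review.

```python
import numpy as np

# Toy black-box scorer: an approval probability (illustrative, not from the paper).
def score(x):
    return 1.0 / (1.0 + np.exp(-(0.6 * x[0] + 0.4 * x[1] - 3.1)))

def find_counterfactual(score, x, target=0.5, step=0.05, max_iter=1000):
    """Greedy coordinate search: repeatedly take the single-feature step
    that increases the score most, until the decision boundary is crossed."""
    cf = np.array(x, dtype=float)
    for _ in range(max_iter):
        if score(cf) >= target:
            return cf                  # decision flipped: counterfactual found
        best, best_score = None, score(cf)
        for i in range(len(cf)):
            for delta in (step, -step):
                cand = cf.copy()
                cand[i] += delta
                if score(cand) > best_score:
                    best, best_score = cand, score(cand)
        if best is None:
            return None                # stuck: no single step improves the score
        cf = best
    return None

x = [2.0, 3.0]                         # rejected applicant: score(x) < 0.5
cf = find_counterfactual(score, x)
print(cf - np.array(x))                # only the first feature changes (by about +1.2)
```

Because the search only queries `score`, it is model-agnostic in the same spirit as the reviewed methods, though the algorithms the rubric evaluates add distance penalties, plausibility constraints, and diversity objectives on top of this basic idea.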

Database Repair Meets Algorithmic Fairness

This paper formalizes the situation as a database repair problem, proves sufficient conditions for fair classifiers in terms of admissible variables (as opposed to a complete causal model), and uses these conditions as the basis for database repair algorithms that provide provable fairness guarantees about classifiers trained on their training labels.

Towards Trustable Explainable AI

This paper overviews advances in the rigorous, logic-based approach to XAI and argues that it is indispensable wherever trustable XAI is a concern; the approach is shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically.

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

This work shows that it is impossible to guarantee recourse without access to the true structural equations, and proposes two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge.

Fair Data Integration

This work proposes an approach that identifies a sub-collection of features ensuring the fairness of the dataset by performing conditional independence tests between different subsets of features, and theoretically proves the correctness of the proposed algorithm.

Causal Relational Learning

A declarative language called CARL is proposed for capturing causal background knowledge and assumptions, and specifying causal queries using simple Datalog-like rules, which provides a foundation for inferring causality and reasoning about the effect of complex interventions in relational domains.