Corpus ID: 235743033

Counterfactual Explanations in Sequential Decision Making Under Uncertainty

@inproceedings{Tsirtsis2021CounterfactualEI,
  title={Counterfactual Explanations in Sequential Decision Making Under Uncertainty},
  author={Stratis Tsirtsis and Abir De and Manuel Gomez-Rodriguez},
  booktitle={NeurIPS},
  year={2021}
}
Methods to find counterfactual explanations have predominantly focused on one-step decision making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we…
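
The characterization above relies on the Gumbel-Max structural causal model of Oberst and Sontag (listed in the references below) to pose counterfactual queries about categorical state transitions. As a rough illustration of that building block, and not of the paper's own algorithm, the sketch below infers Gumbel noise consistent with an observed transition via top-down sampling and replays it under an alternative action's transition probabilities; all names are placeholders.

```python
# Rough illustration (not the paper's implementation) of counterfactual
# transition sampling under a Gumbel-Max structural causal model, following
# the construction of Oberst & Sontag; names are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def truncated_gumbel(mu, bound):
    """Sample Gumbel(mu) conditioned on being at most `bound`."""
    g = rng.gumbel(loc=mu)
    return -np.log(np.exp(-g) + np.exp(-bound))

def posterior_gumbel_noise(log_p, observed):
    """Sample noise g consistent with `observed == argmax_j(log_p[j] + g[j])`."""
    top = rng.gumbel(loc=np.logaddexp.reduce(log_p))   # value of the maximum
    perturbed = np.array([top if j == observed else truncated_gumbel(lp, top)
                          for j, lp in enumerate(log_p)])
    return perturbed - log_p                           # recover the exogenous noise

def counterfactual_next_state(p_obs, s_next_obs, p_alt):
    """Replay the inferred noise under alternative transition probabilities."""
    g = posterior_gumbel_noise(np.log(p_obs), s_next_obs)
    return int(np.argmax(np.log(p_alt) + g))

# Observed: the chosen action led to state 2 under p_obs; what would have
# happened under an alternative action with transition probabilities p_alt?
p_obs = np.array([0.1, 0.2, 0.7])
p_alt = np.array([0.5, 0.3, 0.2])
print(counterfactual_next_state(p_obs, s_next_obs=2, p_alt=p_alt))
```

Repeating this step by step over a finite-horizon episode yields a counterfactual trajectory under an alternative action sequence, which is the kind of query the paper builds on.
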

Citations

Counterfactual Temporal Point Processes
TLDR
This work develops a causal model of thinning for temporal point processes that builds upon the Gumbel-Max structural causal model, along with a sampling algorithm to simulate counterfactual realizations of the temporal point process under a given alternative intensity function.
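
As a loose illustration of the idea in this summary, and not the paper's Gumbel-Max construction, counterfactual realizations of a point process can be coupled to factual ones by sharing the exogenous noise used in Lewis' thinning. The sketch below shares acceptance uniforms across the two intensities and skips the posterior noise inference over an observed trace that the paper's sampling algorithm addresses; all names and intensities are illustrative.

```python
# Simplified illustration (shared-noise thinning, not the paper's Gumbel-Max
# construction): generate a factual realization of an inhomogeneous Poisson
# process by thinning, then reuse the same candidate points and acceptance
# noise to obtain a counterfactual realization under an alternative intensity.
import numpy as np

rng = np.random.default_rng(1)

def thinning_trace(lam_max, horizon):
    """Candidate points from a homogeneous Poisson process plus acceptance noise."""
    n = rng.poisson(lam_max * horizon)
    times = np.sort(rng.uniform(0.0, horizon, size=n))
    accepts = rng.uniform(0.0, 1.0, size=n)            # exogenous noise, shared below
    return times, accepts

def realize(times, accepts, intensity, lam_max):
    """Keep candidate t_i iff u_i < intensity(t_i) / lam_max."""
    keep = accepts < np.array([intensity(t) for t in times]) / lam_max
    return times[keep]

lam_max, horizon = 5.0, 10.0
factual_intensity = lambda t: 2.0 + np.sin(t)                 # observed intensity
counterfactual_intensity = lambda t: 1.0 + 0.5 * np.sin(t)    # "what if" intensity

times, accepts = thinning_trace(lam_max, horizon)
factual = realize(times, accepts, factual_intensity, lam_max)
counterfactual = realize(times, accepts, counterfactual_intensity, lam_max)
print(len(factual), len(counterfactual))
```
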
Counterfactual Inference of Second Opinions
TLDR
A set-invariant Gumbel-Max structural causal model is designed in which the structure of the noise governing the sub-mechanisms underpinning the model depends on an intuitive notion of similarity between experts, which can be estimated from data.
Actual Causality and Responsibility Attribution in Decentralized Partially Observable Markov Decision Processes
Actual causality and a closely related concept of responsibility attribution are central to accountable decision making. Actual causality focuses on specific outcomes and aims to identify decisions…
Counterfactual Analysis in Dynamic Models: Copulas and Bounds
TLDR
The entire space of SCMs obeying counterfactual stability (CS) is characterized, and it is used to negatively answer the open question of Oberst and Sontag regarding the uniqueness of the Gumbel-max mechanism for modeling CS.
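
For context, the counterfactual stability (CS) property referenced here, introduced by Oberst and Sontag, can be paraphrased as follows: if outcome $i$ was observed under distribution $p$, the counterfactual outcome under $p'$ cannot switch to a $j \neq i$ whose probability did not increase relative to that of $i$. In symbols (our paraphrase, not a verbatim statement from either paper):

```latex
Y = i \text{ observed under } p
\;\Longrightarrow\;
Y_{p'} \neq j \quad \text{for all } j \neq i \text{ with } \frac{p'_i}{p_i} \geq \frac{p'_j}{p_j}.
```
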
Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates
TLDR
This work proposes DIVerse CLUE (∇-CLUE), a set of CLUEs which each propose a distinct explanation as to how one can decrease the uncertainty associated with an input, and proposes GLobal AMortised CLUE, a distinct and novel method which learns amortised mappings on specific groups of uncertain inputs.
HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning
TLDR
This work proposes HEX, a human-in-the-loop deep reinforcement learning approach to MLX that incorporates 0-distrust projection to synthesize decider-specific explanation-providing policies from any arbitrary classification model, and is constructed to operate in limited or reduced training-data scenarios.

References

Showing 1-10 of 41 references
Decisions, Counterfactual Explanations and Strategic Behavior
TLDR
This paper shows that, given a pre-defined policy, the problem of finding the optimal set of counterfactual explanations is NP-hard; however, the corresponding objective is non-decreasing and submodular, which allows a standard greedy algorithm to enjoy approximation guarantees.
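
The approximation guarantee mentioned here rests on the classic result that greedily maximizing a monotone, non-decreasing submodular set function under a cardinality constraint achieves a (1 - 1/e) approximation. The sketch below shows that generic greedy loop with a placeholder set-coverage objective, not the paper's actual counterfactual-explanation objective.

```python
# Generic greedy maximization of a monotone, non-decreasing submodular objective
# under a cardinality constraint; the set-coverage objective below is a
# placeholder, not the counterfactual-explanation objective of the paper.
def greedy(candidates, objective, budget):
    selected = set()
    for _ in range(budget):
        gains = {c: objective(selected | {c}) - objective(selected)
                 for c in candidates - selected}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:          # no remaining candidate improves the objective
            break
        selected.add(best)
    return selected

# Toy coverage objective: each candidate explanation "covers" a set of individuals.
coverage = {"e1": {1, 2, 3}, "e2": {3, 4}, "e3": {5}, "e4": {1, 5, 6}}
objective = lambda S: len(set().union(*(coverage[c] for c in S))) if S else 0
print(greedy(set(coverage), objective, budget=2))   # e.g. {'e1', 'e4'}
```
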
Model-Agnostic Counterfactual Explanations for Consequential Decisions
TLDR
This work builds on standard theory and tools from formal verification and proposes a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae.
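
To make the "sequence of satisfiability problems" concrete, here is a rough sketch in the same spirit, not the authors' implementation, using the Z3 SMT solver: the predictive model and an L1 distance bound are both encoded as constraints, and the bound is tightened by bisection until the nearest feasible counterfactual is bracketed. The toy linear model, feature values, and names are all illustrative.

```python
# Rough sketch (not the authors' implementation): search for a nearby
# counterfactual by solving a sequence of satisfiability queries with the
# Z3 SMT solver, bisecting over an L1 distance bound.
from z3 import Solver, Real, sat

x0 = [2.0, 3.0]          # factual input (denied by the toy model below)
w, b = [1.0, -2.0], 1.0  # toy linear model: approve iff w.x + b >= 0

def find_counterfactual(max_dist=10.0, tol=1e-3):
    lo, hi, best = 0.0, max_dist, None
    while hi - lo > tol:                       # bisection over the distance bound
        mid = (lo + hi) / 2.0
        s = Solver()
        x = [Real(f"x_{i}") for i in range(len(x0))]
        d = [Real(f"d_{i}") for i in range(len(x0))]
        # distance function as constraints: d_i >= |x_i - x0_i| and sum_i d_i <= mid
        for xi, di, x0i in zip(x, d, x0):
            s.add(di >= xi - x0i, di >= x0i - xi)
        s.add(sum(d) <= mid)
        # predictive model as a constraint: the decision must flip to "approve"
        s.add(sum(wi * xi for wi, xi in zip(w, x)) + b >= 0)
        if s.check() == sat:
            best, hi = [s.model()[xi] for xi in x], mid   # feasible: tighten bound
        else:
            lo = mid                                      # infeasible: relax bound
    return best

print(find_counterfactual())
```
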
Algorithmic Recourse: from Counterfactual Explanations to Interventions
TLDR
This work relies on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse, and proposes a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions, shifting the focus from explanations to interventions.
Explaining machine learning classifiers through diverse counterfactual explanations
TLDR
This work proposes a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes, and provides metrics that enable comparison of counterfactual-based methods to other local explanation methods.
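
The determinantal point process (DPP) ingredient mentioned here rewards sets of counterfactuals that differ from one another: a candidate set can be scored by the determinant of a similarity kernel over its members, which shrinks toward zero as the candidates become redundant. Below is a minimal sketch of that scoring with a placeholder RBF kernel, not the cited paper's full objective.

```python
# Minimal sketch of DPP-style diversity scoring for a set of candidate
# counterfactuals: the determinant of an RBF similarity kernel decreases
# as candidates become more similar.  Placeholder kernel and data only.
import numpy as np

def diversity(candidates, length_scale=1.0):
    X = np.asarray(candidates, dtype=float)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * length_scale ** 2))    # RBF similarity kernel
    return float(np.linalg.det(K))

spread_out = [[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]]
redundant  = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
print(diversity(spread_out) > diversity(redundant))    # True: spread sets score higher
```
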
Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models
TLDR
An off-policy evaluation procedure is introduced for highlighting episodes where applying a reinforcement-learned policy is likely to have produced a substantially different outcome than the observed policy, along with a class of structural causal models for generating counterfactual trajectories in finite partially observable Markov decision processes (POMDPs).
Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search
TLDR
The Counterfactually-Guided Policy Search (CF-GPS) algorithm is proposed, which leverages structural causal models for counterfactual evaluation of arbitrary policies on individual off-policy episodes and can improve on vanilla model-based RL algorithms by making use of available logged data to de-bias model predictions.
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach
TLDR
This work shows that it is impossible to guarantee recourse without access to the true structural equations, and proposes two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge.
Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR
TLDR
It is suggested that data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims: such explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
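
The "smallest change to the world" described here is typically formalized as an optimization over inputs; Wachter et al. propose, roughly, solving the following relaxed problem, where $f$ is the predictive model, $y'$ the desired outcome, $d$ a distance function, and $\lambda$ a penalty weight (a paraphrase of their formulation):

```latex
x^{\mathrm{cf}} \;\in\; \arg\min_{x'} \; \max_{\lambda} \;\; \lambda \,\bigl(f(x') - y'\bigr)^2 \;+\; d(x, x').
```
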
A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence
TLDR
This work conducts a systematic literature review which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study and defines a taxonomy regarding both theoretical and practical approaches to contrastive and counterfactual explanation.
Explainable Reinforcement Learning Through a Causal Lens
TLDR
This paper presents an approach that learns a structural causal model during reinforcement learning, encoding causal relationships between variables of interest, and shows that causal-model explanations outperform two other baseline explanation models on these measures.