Generating personalized counterfactual interventions for algorithmic recourse by eliciting user preferences

@article{Toni2022GeneratingPC,
  title={Generating personalized counterfactual interventions for algorithmic recourse by eliciting user preferences},
  author={G. D. Toni and Paolo Viappiani and Bruno Lepri and Andrea Passerini},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.13743}
}
Counterfactual interventions are a powerful tool to explain the decisions of a black-box decision process and to enable algorithmic recourse: they are sequences of actions that, if performed by a user, can overturn an unfavourable decision made by an automated decision system. However, most current methods generate interventions without considering the user's preferences; for example, a user might prefer certain actions over others. In this work, we present the first…
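
As a concrete illustration of the setting described above (not the method proposed in the paper), the following Python sketch treats an intervention as an ordered list of actions applied to a user's feature profile, with a cost scaled by elicited per-action preference weights; the black-box decision rule, action set, and weights are all hypothetical.

```python
# Illustrative sketch of the recourse setting from the abstract (not the paper's
# algorithm): an intervention is a sequence of actions applied to a user's feature
# profile, and its cost is weighted by elicited user preferences.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    name: str
    feature: str
    delta: float
    base_cost: float


def apply_intervention(profile: Dict[str, float], actions: List[Action]) -> Dict[str, float]:
    """Apply each action in order and return the modified profile."""
    new_profile = dict(profile)
    for a in actions:
        new_profile[a.feature] = new_profile.get(a.feature, 0.0) + a.delta
    return new_profile


def intervention_cost(actions: List[Action], preference_weight: Dict[str, float]) -> float:
    """Cost of a sequence, scaled by elicited per-action preference weights
    (lower weight = the user finds that action easier to perform)."""
    return sum(a.base_cost * preference_weight.get(a.name, 1.0) for a in actions)


def achieves_recourse(black_box: Callable[[Dict[str, float]], int],
                      profile: Dict[str, float], actions: List[Action]) -> bool:
    """True if the black-box decision flips from 0 (unfavourable) to 1 after acting."""
    return black_box(apply_intervention(profile, actions)) == 1


# Hypothetical black-box decision system (e.g., a loan-approval rule).
black_box = lambda p: int(p["income"] > 45_000 and p["open_debts"] <= 1)

user = {"income": 38_000.0, "open_debts": 2.0}
plan = [Action("take_second_job", "income", 8_000, base_cost=3.0),
        Action("close_small_debt", "open_debts", -1, base_cost=1.0)]
prefs = {"take_second_job": 2.0, "close_small_debt": 0.5}  # elicited preferences

print(achieves_recourse(black_box, user, plan), intervention_cost(plan, prefs))
```

Personalization then amounts to ranking candidate interventions by this preference-weighted cost, subject to the recourse constraint.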


Citations

Leveraging Explanations in Interactive Machine Learning: An Overview
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities in order to improve model transparency and allow users to form a mental model of a trained ML model.

References

Showing 1–10 of 42 references
Synthesizing explainable counterfactual policies for algorithmic recourse with program synthesis
TLDR
This paper learns a program that outputs a sequence of explainable counterfactual actions given a user description and a causal graph, leveraging program synthesis, reinforcement learning coupled with Monte Carlo Tree Search for efficient exploration, and rule learning to extract explanations for each recommended action.
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach
TLDR
This work shows that it is impossible to guarantee recourse without access to the true structural equations, and proposes two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge.
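
A minimal sketch of the underlying idea in the entry above, not the paper's method: when only an approximate causal model is available, the effect of an action can be simulated under sampled noise and causal-effect values, and one can pick the cheapest action whose estimated probability of flipping the decision clears a threshold. The classifier, the approximate structural equations, and the candidate actions below are invented for illustration.

```python
# Monte Carlo sketch of probabilistic recourse: sample plausible downstream effects
# of an action under an approximate causal model, estimate the probability that the
# decision flips, and choose the cheapest action above a confidence threshold.
import numpy as np

rng = np.random.default_rng(0)

def classifier(income, savings):
    """Hypothetical black-box: favourable (1) if a linear score is positive."""
    return (0.6 * income + 0.4 * savings - 50.0 > 0).astype(int)

def simulate_outcome(action_on_income, n_samples=5000):
    """Approximate SCM: savings responds to income with uncertain strength and noise."""
    income = 40.0 + action_on_income
    effect = rng.normal(loc=0.5, scale=0.2, size=n_samples)      # uncertain causal effect
    savings = 20.0 + effect * action_on_income + rng.normal(0, 2.0, n_samples)
    return classifier(np.full(n_samples, income), savings)

candidate_actions = {10.0: 4.0, 20.0: 9.0, 30.0: 15.0}   # {income increase: cost}
threshold = 0.95
recourse_prob = {a: simulate_outcome(a).mean() for a in candidate_actions}
valid = [a for a, p in recourse_prob.items() if p >= threshold]
best = min(valid, key=candidate_actions.get) if valid else None
print(recourse_prob, "-> chosen action:", best)
```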
Algorithmic Recourse: from Counterfactual Explanations to Interventions
TLDR
This work relies on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse, and proposes a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions, shifting the focus from explanations to interventions.
Model-Agnostic Counterfactual Explanations for Consequential Decisions
TLDR
This work builds on standard theory and tools from formal verification and proposes a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae.
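
The following z3 sketch illustrates that style of encoding with a toy linear rule standing in for the predictive model (the actual approach supports richer model classes); it assumes the z3-solver package is installed and uses made-up features and thresholds.

```python
# Sketch of the satisfiability-based idea: encode the (surrogate) model and an L1
# distance objective as formulae, then ask an SMT optimizer for the nearest
# counterfactual. Features, thresholds, and the linear rule are illustrative only.
from z3 import Real, Optimize, If, sat

income, open_debts = Real("income"), Real("open_debts")
opt = Optimize()

# Plausibility constraints on the counterfactual.
opt.add(income >= 0, open_debts >= 0, open_debts <= 10)

# Constraint: the surrogate model must output the favourable class.
opt.add(0.001 * income - 2.0 * open_debts >= 40.0)

# Objective: L1 distance to the factual instance (income=38000, open_debts=2).
def zabs(expr):
    return If(expr >= 0, expr, -expr)

distance = zabs(income - 38_000) / 1000 + zabs(open_debts - 2)
opt.minimize(distance)

if opt.check() == sat:
    m = opt.model()
    print("counterfactual:", m[income], m[open_debts])
```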
Making Rational Decisions Using Adaptive Utility Elicitation
TLDR
An algorithm is proposed that interleaves the analysis of the decision problem and utility elicitation to allow these two tasks to inform each other and computes the best strategy based on the information acquired so far.
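
A toy sketch of the interleaving idea above, under simplifying assumptions (a sample-based belief over utility weights, noiseless pairwise responses, and a crude query-selection heuristic), none of which are claimed to match the paper's algorithm.

```python
# Interleave elicitation and decision-making: keep a sample-based belief over the
# user's utility weights, ask a pairwise comparison, discard weight samples that
# contradict the answer, and recommend the option that is best under the belief.
import numpy as np

rng = np.random.default_rng(1)
options = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])   # hypothetical outcomes
true_w = np.array([0.3, 0.7])                              # unknown user weights

# Belief: samples of weight vectors on the simplex.
particles = rng.dirichlet(alpha=[1.0, 1.0], size=2000)

def best_option(weights):
    """Option maximizing expected utility under the current belief."""
    return int(np.argmax(options @ weights.mean(axis=0)))

for _ in range(3):                                          # a few elicitation rounds
    # Ask about the two options most often preferred under the belief (heuristic).
    scores = options @ particles.T                          # (n_options, n_particles)
    winners = scores.argmax(axis=0)
    a, b = np.argsort(np.bincount(winners, minlength=len(options)))[-2:]
    answer_is_a = options[a] @ true_w >= options[b] @ true_w   # simulated user reply
    keep = (options[a] @ particles.T >= options[b] @ particles.T) == answer_is_a
    particles = particles[keep]                              # condition the belief
    print("recommend option", best_option(particles))
```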
A survey of algorithmic recourse: definitions, formulations, solutions, and prospects
TLDR
An extensive literature review is performed, and an overview of prospective research directions in which the community may engage is provided, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness.
Consequence-aware Sequential Counterfactual Generation
TLDR
This work formulates the task as a multi-objective optimization problem and presents a genetic algorithm approach to find optimal sequences of actions leading to the counterfactuals, and proposes a model-agnostic method for sequential counterfactual generation.
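
A simplified sketch of the search component described above: a genetic algorithm over action sequences. Here the objectives (validity, cost, length) are scalarized into a single fitness score to keep the example short, whereas the referenced work treats them as a true multi-objective problem; the black box and action set are hypothetical.

```python
# Genetic search over sequences of actions: mutate and recombine candidate
# sequences, scoring each by whether it flips the black-box decision, its total
# cost, and its length (scalarized here for brevity).
import random

random.seed(0)
ACTIONS = {"raise_income": ("income", 5.0, 2.0),        # name: (feature, delta, cost)
           "reduce_debt": ("debt", -3.0, 1.0),
           "get_certificate": ("skills", 1.0, 4.0)}

def black_box(profile):
    return profile["income"] - 2 * profile["debt"] + 3 * profile["skills"] > 20

def evaluate(sequence, start):
    profile, cost = dict(start), 0.0
    for name in sequence:
        feat, delta, c = ACTIONS[name]
        profile[feat] += delta
        cost += c
    # Higher is better: big bonus for achieving recourse, penalties for cost/length.
    return (100.0 if black_box(profile) else 0.0) - cost - 0.5 * len(sequence)

def random_sequence():
    return [random.choice(list(ACTIONS)) for _ in range(random.randint(1, 5))]

def crossover(a, b):
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return (a[:cut_a] + b[cut_b:]) or random_sequence()

def mutate(seq):
    return seq + [random.choice(list(ACTIONS))] if random.random() < 0.3 else seq

start = {"income": 10.0, "debt": 4.0, "skills": 0.0}
population = [random_sequence() for _ in range(50)]
for _ in range(30):
    population.sort(key=lambda s: evaluate(s, start), reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(40)]
best = max(population, key=lambda s: evaluate(s, start))
print(best, evaluate(best, start))
```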
Optimal Bayesian Recommendation Sets and Myopically Optimal Choice Query Sets
TLDR
This paper examines EVOI optimization using choice queries, queries in which a user is asked to select her most preferred product from a set, and shows that, under very general assumptions, the optimal choice query w.r.t. EVOI coincides with the optimal recommendation set.
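
A small numerical sketch of the EVOI quantity discussed above, under a sample-based belief over utility weights and a noiseless response model; the items and belief are made up, and the brute-force search is only meant to make the definition concrete.

```python
# EVOI of a choice query: expected posterior best utility (averaged over possible
# user responses) minus the prior best utility, estimated from weight samples.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
items = rng.random((6, 3))                  # 6 hypothetical items, 3 attributes
particles = rng.dirichlet([1, 1, 1], 1000)  # belief over utility weights

def best_expected_utility(belief):
    """Utility of recommending the single best item under a belief."""
    return (items @ belief.mean(axis=0)).max()

def evoi(query):
    """Expected value of information of a choice query over the given item indices."""
    utilities = items[list(query)] @ particles.T        # (|query|, n_particles)
    responses = utilities.argmax(axis=0)                # each particle's answer
    value = 0.0
    for r in range(len(query)):
        mask = responses == r
        if mask.any():
            value += mask.mean() * best_expected_utility(particles[mask])
    return value - best_expected_utility(particles)

queries = list(combinations(range(len(items)), 2))      # all pairwise choice queries
best_query = max(queries, key=evoi)
print(best_query, round(evoi(best_query), 4))
```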