Remote explainability faces the bouncer problem

@article{Merrer2020RemoteEF,
  title={Remote explainability faces the bouncer problem},
  author={Erwan Le Merrer and Gilles Tr{\'e}dan},
  journal={Nat. Mach. Intell.},
  year={2020},
  volume={2},
  pages={529-539}
}
The concept of explainability is envisioned to satisfy society’s demands for transparency about machine learning decisions. The concept is simple: like humans, algorithms should explain the rationale behind their decisions so that their fairness can be assessed. Although this approach is promising in a local context (for example, when the model creator explains a model during debugging at training time), we argue that this reasoning cannot simply be transposed to a remote context, where a model…
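
A minimal sketch, with entirely hypothetical models and data, of the mismatch the abstract warns about: a remote service can compute its decision with one model while serving explanations derived from another, so explanations received remotely need not reflect the model that actually decided.

```python
# Hypothetical illustration of the "bouncer problem": a remote service decides
# with one model but explains with another, so the served explanation gives no
# guarantee about the model that produced the decision.
import numpy as np

def deciding_model(x):
    # Model actually used for decisions; it secretly penalises feature 1,
    # a sensitive attribute in this made-up setup.
    return 1.0 * x[0] - 5.0 * x[1]

def surrogate_model(x):
    # Innocuous-looking model kept around only to generate explanations.
    return 1.0 * x[0]

def remote_service(x):
    decision = deciding_model(x) > 0
    # The "explanation" reports per-feature contributions of the surrogate,
    # not of the model that produced the decision.
    explanation = {"score": surrogate_model(x), "sensitive": 0.0}
    return decision, explanation

x = np.array([2.0, 1.0])            # good score, sensitive attribute set
decision, explanation = remote_service(x)
print(decision)                      # False: rejected because of feature 1
print(explanation)                   # claims the sensitive attribute played no role
```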

ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI

TLDR
This work presents an approach, ProtoShotXAI, that uses a prototypical few-shot network to explore the contrastive manifold between nonlinear features of different classes; it is the first locally interpretable XAI model that can be extended to, and demonstrated on, few-shot networks.
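
For context, a minimal sketch of the prototypical few-shot mechanism this approach builds on, not the ProtoShotXAI architecture itself: support examples are embedded, averaged into per-class prototypes, and a query is scored by its distance to each prototype. The embedding function and data below are placeholders.

```python
# Minimal prototypical few-shot sketch: class prototype = mean embedding of
# that class's support examples; a query is assigned to the nearest prototype.
import numpy as np

def embed(x):
    # Placeholder feature extractor; a real prototypical network uses a
    # learned embedding trained episodically.
    return np.tanh(x)

def class_prototypes(support_x, support_y):
    return {c: embed(support_x[support_y == c]).mean(axis=0)
            for c in np.unique(support_y)}

def predict(query_x, prototypes):
    q = embed(query_x)
    distances = {c: np.linalg.norm(q - p) for c, p in prototypes.items()}
    return min(distances, key=distances.get), distances

rng = np.random.default_rng(1)
support_x = rng.normal(size=(10, 4))          # 5 support examples per class
support_y = np.array([0] * 5 + [1] * 5)
protos = class_prototypes(support_x, support_y)
print(predict(rng.normal(size=4), protos))
```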

On Interactive Explanations as Non-Monotonic Reasoning

TLDR
This work treats explanations as objects that can be subject to reasoning and presents a formal model of the interactive scenario between user and system, via sequences of inputs, outputs, and explanations, suggesting a form of entailment, which, it is argued, should be thought of as non-monotonic.

Fooling Partial Dependence via Data Poisoning

TLDR
This paper presents techniques for attacking partial dependence (plots, profiles, PDP), which are among the most popular methods of explaining any predictive model trained on tabular data; it is the first work to perform attacks on variable-dependence explanations.
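
A minimal sketch, with a placeholder model and data, of how a partial dependence profile is computed; attacks of the kind described here work by poisoning the dataset over which this average is taken.

```python
# Minimal partial dependence sketch: clamp one feature to each grid value,
# substitute it into every row of the data, and average the model's predictions.
import numpy as np

def model(X):
    # Placeholder predictive model over two tabular features.
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

def partial_dependence(model, X, feature, grid):
    profile = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                  # clamp the feature of interest
        profile.append(model(Xv).mean())    # average prediction over the data
    return np.array(profile)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
print(partial_dependence(model, X, feature=0, grid=np.linspace(-2, 2, 5)))
```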

Characterizing the risk of fairwashing

TLDR
This paper shows that fairwashed explanation models can generalize beyond the suing group, meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model, and proposes an approach to quantify the risk of fairwashing by computing the range of unfairness of high-fidelity explainers.
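
A rough sketch of the quantification idea under fully hypothetical models and data: among simple surrogate explainers that reach a given fidelity to the black box, report how widely their apparent unfairness (here, a demographic parity gap) varies. The explainer family and fidelity threshold are illustrative assumptions, not the paper's definitions.

```python
# Toy fairwashing-risk sketch: some high-fidelity surrogates look fair while
# others do not, so the range of their unfairness indicates fairwashing risk.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))
sensitive = (rng.random(500) < 0.5).astype(int)
# Hypothetical black box that secretly penalises the sensitive attribute.
black_box = (X[:, 0] - 1.0 * sensitive > 0).astype(int)

def parity_gap(pred, s):
    # Demographic parity gap as a simple unfairness measure.
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

gaps = []
for a in (0.0, 0.5, 1.0):                    # weight put on the sensitive attribute
    for t in np.linspace(-1, 1, 21):
        surrogate = (X[:, 0] - a * sensitive > t).astype(int)
        fidelity = (surrogate == black_box).mean()
        if fidelity >= 0.8:                  # keep only high-fidelity explainers
            gaps.append(parity_gap(surrogate, sensitive))

print("unfairness range over high-fidelity explainers:", min(gaps), max(gaps))
```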

Algorithmic audits of algorithms, and the law

TLDR
This paper focuses on external audits that are conducted by interacting with the user side of the target algorithm, which is hence treated as a black box, and articulates two canonical audit forms in relation to the law.

Machine learning partners in criminal networks

TLDR
Structural properties of political corruption, police intelligence, and money laundering networks can be used to recover missing criminal partnerships, distinguish among different types of criminal and legal associations, as well as predict the total amount of money exchanged among criminal agents, with outstanding accuracy.

Explainable Natural Language Processing

  • Anders Søgaard, Synthesis Lectures on Human Language Technologies, 2021

References

(showing 1–10 of 48 references)

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
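
A minimal LIME-style sketch, not the reference implementation: the instance is perturbed, the perturbations are weighted by proximity, and a weighted linear surrogate is fitted whose coefficients serve as the local explanation. The black-box model, sampling scheme, and kernel width below are placeholders.

```python
# LIME-style local surrogate: perturb the instance, weight samples by
# proximity, fit a weighted linear model, and read off its coefficients.
import numpy as np

def black_box(X):
    # Placeholder classifier returning a probability-like score.
    return 1 / (1 + np.exp(-(2 * X[:, 0] - 3 * X[:, 1] + X[:, 0] * X[:, 1])))

def lime_explain(x, n_samples=2000, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))   # local perturbations
    y = black_box(Z)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)     # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])                # add an intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]   # per-feature weights of the local linear surrogate

print(lime_explain(np.array([1.0, 0.5])))
```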

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

TLDR
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI, reviews existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.

Model Reconstruction from Model Explanations

TLDR
It is shown through theory and experiment that gradient-based explanations of a model quickly reveal the model itself, which highlights the power of gradients rather than labels as a learning primitive.
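
A deliberately simplistic sketch of the intuition, assuming the easiest possible target, a linear scorer: the gradient returned as an explanation at any query point is the weight vector itself, so one explanation plus one prediction reconstructs the model. The "remote" model below is hypothetical.

```python
# Model reconstruction from gradient-based explanations, linear case:
# the saliency-style gradient reveals the weights; one prediction fixes the bias.
import numpy as np

secret_w, secret_b = np.array([0.7, -1.2, 3.0]), 0.4   # hidden remote model

def remote_predict(x):
    return secret_w @ x + secret_b

def remote_gradient_explanation(x):
    # Gradient of the score w.r.t. the input, served as an explanation.
    return secret_w.copy()

x0 = np.zeros(3)
stolen_w = remote_gradient_explanation(x0)       # gradient reveals the weights
stolen_b = remote_predict(x0) - stolen_w @ x0    # one prediction fixes the bias
print(stolen_w, stolen_b)
```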

Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

TLDR
The transparency–privacy trade-off is explored, and it is proved that a number of useful transparency reports can be made differentially private with very little added noise.
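
A minimal sketch, with a placeholder model and an illustrative sensitivity bound rather than the paper's exact definitions, of releasing an input-influence score under differential privacy via the Laplace mechanism.

```python
# Toy input-influence score (average outcome change when a feature is
# randomised) released with Laplace noise calibrated to an assumed sensitivity.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))
model = lambda X: (X[:, 0] + 2 * X[:, 2] > 0).astype(float)   # placeholder classifier

def input_influence(model, X, feature, rng):
    X_rand = X.copy()
    X_rand[:, feature] = rng.permutation(X_rand[:, feature])   # break the feature
    return np.abs(model(X) - model(X_rand)).mean()

def private_release(value, sensitivity, epsilon, rng):
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    return value + rng.laplace(scale=sensitivity / epsilon)

raw = input_influence(model, X, feature=2, rng=rng)
noisy = private_release(raw, sensitivity=1.0 / len(X), epsilon=0.5, rng=rng)
print(raw, noisy)
```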

Explanation in Artificial Intelligence: Insights from the Social Sciences

Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?

TLDR
It is argued that algorithmic decisions should preferably become more understandable; to that effect, the machine learning models employed should either be interpreted ex post or be interpretable by design ex ante.

A Unified Approach to Interpreting Model Predictions

TLDR
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
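
A minimal sketch of the exact Shapley computation that SHAP approximates, feasible here only because the toy model has three features: each feature's marginal contribution is averaged over all coalitions of the remaining features. The model and baseline are placeholders.

```python
# Exact Shapley values by coalition enumeration for a tiny toy model.
from itertools import combinations
from math import factorial
import numpy as np

def value(model, x, baseline, subset):
    # Model evaluated with features outside `subset` replaced by the baseline.
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return model(z)

def shapley_values(model, x, baseline):
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(model, x, baseline, S + (i,))
                                    - value(model, x, baseline, S))
    return phi

model = lambda z: 3 * z[0] + z[1] * z[2]              # toy model
x, baseline = np.array([1.0, 2.0, 1.0]), np.zeros(3)
print(shapley_values(model, x, baseline))              # sums to f(x) - f(baseline)
```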

A Survey of Methods for Explaining Black Box Models

TLDR
A classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system is provided, to help researchers find the proposals most useful for their own work.

Explainable Recommendation: A Survey and New Perspectives

TLDR
This survey highlights the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W (what, when, who, where, and why), and provides a two-dimensional taxonomy to classify existing explainable recommendation research.

Logics and practices of transparency and opacity in real-world applications of public sector machine learning

TLDR
This short paper distils insights about transparency on the ground from interviews with 27 actors, largely public servants and relevant contractors, across five OECD countries, and offers guidance for those hoping to develop socio-technical approaches to transparency that are useful to practitioners.