Corpus ID: 244954506

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods

@article{Fel2021WhatIC,
  title={What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods},
  author={Thomas Fel and Julien Colin and Rémi Cadène and Thomas Serre},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.04417}
}
A multitude of explainability methods has been described to try to help users better understand how modern AI systems make decisions. However, most performance metrics developed to evaluate these methods have remained largely theoretical – without much consideration for the human end-user. In particular, it is not yet clear (1) how useful current explainability methods are in real-world scenarios; and (2) whether current performance metrics accurately reflect the usefulness of explanation… 

A Human-Centric Assessment Framework for AI

Inspired by the Turing test, this work introduces a human-centric assessment framework in which a leading domain expert decides whether to accept or reject the solutions of an AI system and of another domain expert, and judges whether the AI system's explanations are human-understandable.

HIVE: Evaluating the Human Interpretability of Visual Explanations

This work introduces HIVE (Human Interpretability of Visual Explanations), a novel human evaluation framework that assesses the utility of explanations to human users in AI-assisted decision-making scenarios and enables falsifiable hypothesis testing, cross-method comparison, and human-centered evaluation of visual interpretability methods.

CRAFT: Concept Recursive Activation FacTorization for Explainability

This work introduces three new ingredients to the automatic concept extraction literature: a recursive strategy to detect and decompose concepts across layers, a novel method for a more faithful estimation of concept importance using Sobol indices, and the use of implicit differentiation to unlock Concept Attribution Maps.
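For intuition about the concept-extraction step this line of work builds on, the sketch below factorizes a matrix of non-negative deep activations with NMF. It is a generic, hedged illustration (the function name and the scikit-learn dependency are my assumptions), not the paper's recursive pipeline or its Sobol-based importance estimation.

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_concepts(activations, n_concepts=10):
    """Generic sketch of NMF-based concept extraction.

    `activations` is an (N, D) matrix of non-negative deep features
    (e.g. post-ReLU activations of image crops).
    """
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
    U = nmf.fit_transform(activations)   # (N, n_concepts): concept presence per crop
    W = nmf.components_                  # (n_concepts, D): concept directions in feature space
    return U, W
```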

OCTET: Object-aware Counterfactual Explanations

This work encodes the query image into a latent space that is structured in a way that eases object-level manipulations, inspired by recent generative modeling works, and shows that the method can be adapted beyond classification, e.g., to explain semantic segmentation models.

Harmonizing the object recognition strategies of deep neural networks with humans

The neural harmonizer is presented: a general-purpose training routine that both aligns DNN and human visual strategies and improves categorization accuracy; code and data are released at https://serre-lab.github.io/Harmonization to help build more human-like DNNs.

Constructing Natural Language Explanations via Saliency Map Verbalization

The results suggest that saliency map verbalization makes explanations more understandable and less cognitively challenging to humans than conventional heatmap visualization.

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy

This work proposes two novel architectures of self-interpretable image classifiers that first explain and then predict by harnessing the visual correspondences between a query image and exemplars, and shows that it is possible to achieve complementary human-AI team accuracy higher than that of either the AI alone or the human alone on ImageNet and CUB image classification tasks.

The Role of Human Knowledge in Explainable AI

This article aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches.

Xplique: A Deep Learning Explainability Toolbox

Xplique is a software library for explainability that includes representative explainability methods as well as associated evaluation metrics, and interfaces with one of the most popular learning libraries, TensorFlow, as well as other libraries including PyTorch, scikit-learn, and Theano.
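A minimal usage sketch of such a toolbox is shown below. The class names, the `explain` signature, and the `Deletion` metric interface are written from memory of the Xplique documentation and should be treated as assumptions that may differ across versions.

```python
import tensorflow as tf
from xplique.attributions import Saliency   # assumed module and class names
from xplique.metrics import Deletion

model = tf.keras.applications.MobileNetV2()            # any Keras classifier
inputs = tf.random.uniform((4, 224, 224, 3))           # toy batch of images
targets = tf.one_hot([1, 7, 42, 420], depth=1000)      # one-hot class targets

# Produce one attribution heatmap per image.
explainer = Saliency(model)
explanations = explainer.explain(inputs, targets)

# Score the explanations with a fidelity metric from the same toolbox
# (call convention assumed; check the installed version's docs).
fidelity = Deletion(model, inputs, targets)
print(fidelity(explanations))
```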

Human Interpretation of Saliency-based Explanation Over Text

It is found that people often misinterpret the explanations: superficial and unrelated factors influence the explainees' importance assignment despite the explanation communicating importance directly, and some of this distortion can be attenuated.

References

Showing 1–10 of 101 references

How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels

An investigation into whether showing machine-generated visual interpretations helps users understand the incorrectly predicted labels produced by image classifiers demonstrates that displaying the visual interpretations did not increase, but rather decreased, the average guessing accuracy by roughly 10%.

Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?

Human subject tests are carried out that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experimental factors.

RISE: Randomized Input Sampling for Explanation of Black-box Models

The problem of Explainable AI for deep neural networks that take images as input and output a class probability is addressed, and an approach called RISE is proposed that generates an importance map indicating how salient each pixel is for the model's prediction.
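To make the mechanism concrete, here is a hedged NumPy sketch of the RISE idea: random binary masks whose model scores are averaged into a pixel importance map. The function name and the `model_fn` interface are assumptions, and the actual method uses smooth, randomly shifted bilinear upsampling of the masks rather than the nearest-neighbour tiling used here.

```python
import numpy as np

def rise_importance(model_fn, image, class_idx, n_masks=2000, grid=7, p=0.5, seed=0):
    """RISE-style importance map for one (H, W, 3) image.

    `model_fn(batch)` is assumed to return class probabilities of shape (N, C).
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    # Low-resolution binary grids, tiled up to image size (the paper uses
    # smooth bilinear upsampling with random shifts; tiling keeps this
    # sketch dependency-free).
    grids = (rng.random((n_masks, grid, grid)) < p).astype(np.float32)
    masks = np.repeat(np.repeat(grids, -(-H // grid), axis=1),
                      -(-W // grid), axis=2)[:, :H, :W]
    scores = model_fn(image[None] * masks[..., None])          # (n_masks, C)
    # Monte-Carlo estimate: pixels that are often visible when the class
    # score is high receive high importance.
    saliency = (scores[:, class_idx, None, None] * masks).sum(0) / (n_masks * p)
    return saliency
```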

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
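The sketch below illustrates the LIME idea with a weighted ridge surrogate fit over superpixel on/off perturbations. It is not the official `lime` package; the helper name, the grey baseline value, and the simplified proximity kernel are assumptions made for brevity.

```python
import numpy as np

def lime_like_explanation(model_fn, segments, image, class_idx,
                          n_samples=1000, kernel_width=0.25, ridge=1.0, seed=0):
    """Local surrogate over superpixels for one (H, W, 3) image in [0, 1].

    `segments` is an (H, W) integer superpixel map; `model_fn(batch)` is
    assumed to return class probabilities of shape (N, C).
    """
    rng = np.random.default_rng(seed)
    n_feats = int(segments.max()) + 1
    # Binary interpretable features: which superpixels are kept "on".
    Z = rng.integers(0, 2, size=(n_samples, n_feats)).astype(np.float32)
    Z[0] = 1.0                                    # include the unperturbed image
    # Perturbed images: grey out the "off" superpixels.
    batch = np.stack([np.where(Z[i, segments][..., None] > 0, image, 0.5)
                      for i in range(n_samples)])
    y = model_fn(batch)[:, class_idx]
    # Proximity kernel: samples closer to the original image get more weight
    # (simplified distance: fraction of superpixels turned off).
    dist = 1.0 - Z.mean(axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted ridge regression; coefficients = importance of each superpixel.
    A = Z * w[:, None]
    coef = np.linalg.solve(Z.T @ A + ridge * np.eye(n_feats), A.T @ y)
    return coef
```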

ImageNet: A large-scale hierarchical image database

A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis

A novel attribution method is proposed which is grounded in Sensitivity Analysis and uses Sobol indices, and it is shown that the method leads to favorable scores on standard benchmarks for vision (and language) models while drastically reducing the computing time compared to other black-box methods.
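For intuition, a hedged sketch of a total-order Sobol estimate (Jansen estimator) over perturbation cells follows. The `score_fn` interface, the plain Monte-Carlo sampling, and the flat cell abstraction are simplifications; the paper works with quasi-Monte-Carlo sequences and image masks.

```python
import numpy as np

def total_sobol_attribution(score_fn, n_cells, n_designs=64, seed=0):
    """Total-effect Sobol indices via the Jansen estimator.

    `score_fn(masks)` is assumed to map (N, n_cells) mask values in [0, 1]
    to the model score obtained when the image is perturbed accordingly
    (e.g. multiplying each grid cell of the image by its mask value).
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_designs, n_cells))
    B = rng.random((n_designs, n_cells))
    fA = score_fn(A)
    total_var = fA.var() + 1e-12
    indices = np.empty(n_cells)
    for i in range(n_cells):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # resample only cell i
        # Jansen estimator: variance of the score explained by cell i,
        # including its interactions with the other cells.
        indices[i] = 0.5 * np.mean((fA - score_fn(ABi)) ** 2) / total_var
    return indices
```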

Understanding Deep Networks via Extremal Perturbations and Smooth Masks

Some of the shortcomings of existing approaches to perturbation analysis are discussed and the concept of extremal perturbations are introduced, which are theoretically grounded and interpretable and allow us to remove all tunable weighing factors from the optimization problem.

Comparing Automatic and Human Evaluation of Local Explanations for Text Classification

A variety of local explanation approaches are evaluated using automatic measures based on word deletion, showing that an evaluation using a crowdsourcing experiment correlates moderately with these automatic measures and that a variety of other factors also impact the human judgements.

Comparing individual means in the analysis of variance.

The practitioner of the analysis of variance often wants to draw as many conclusions as are reasonable about the relation of the true means for individual "treatments," and a statement by the F-test…
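In a human study, this kind of analysis typically pairs an omnibus F-test with post-hoc pairwise comparisons of the individual means. The snippet below is an illustrative sketch on synthetic accuracy scores, assuming SciPy's `f_oneway` and `tukey_hsd` are available; it is not tied to any particular experiment in the paper.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)
scores_a = rng.normal(0.70, 0.05, 30)   # e.g. user accuracy with explanation method A
scores_b = rng.normal(0.72, 0.05, 30)   # method B
scores_c = rng.normal(0.60, 0.05, 30)   # method C

print(f_oneway(scores_a, scores_b, scores_c))    # is any mean different at all?
print(tukey_hsd(scores_a, scores_b, scores_c))   # which pairs of means differ?
```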

Interpretable Explanations of Black Boxes by Meaningful Perturbation

A general framework for learning different kinds of explanations for any black box algorithm is proposed, and the framework is specialised to find the part of an image most responsible for a classifier decision.
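A hedged PyTorch sketch of the mask-optimisation idea is given below: learn a small, smooth deletion mask whose removal of evidence maximally drops the class score. The loss weights, the zero baseline, and the function name are assumptions; the paper also considers blur and noise perturbations.

```python
import torch

def meaningful_perturbation_mask(model, image, class_idx,
                                 steps=300, lr=0.1, l1=1e-2, tv=1e-1):
    """Optimise a deletion mask for one (1, C, H, W) image.

    `model(x)` is assumed to return class logits of shape (1, num_classes).
    """
    mask = torch.full(image.shape[-2:], 0.5, requires_grad=True)
    baseline = torch.zeros_like(image)           # blur or noise in the paper
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        m = mask.clamp(0, 1)
        perturbed = image * (1 - m) + baseline * m
        score = model(perturbed).softmax(-1)[0, class_idx]
        # Total-variation term keeps the mask smooth; L1 keeps it small.
        tv_term = ((m[1:, :] - m[:-1, :]).abs().mean()
                   + (m[:, 1:] - m[:, :-1]).abs().mean())
        loss = score + l1 * m.abs().mean() + tv * tv_term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mask.detach().clamp(0, 1)
```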
...