Corpus ID: 240288839

One Explanation is Not Enough: Structured Attention Graphs for Image Classification

@inproceedings{Shitole2021OneEI,
  title={One Explanation is Not Enough: Structured Attention Graphs for Image Classification},
  author={Vivswan Shitole and Li Fuxin and Minsuk Kahng and Prasad Tadepalli and Alan Fern},
  booktitle={Neural Information Processing Systems},
  year={2021}
}
Saliency maps are popular tools for explaining the decisions of convolutional neural networks (CNNs) for image classification. Typically, for each image of interest, a single saliency map is produced, which assigns weights to pixels based on their importance to the classification. We argue that a single saliency map provides an incomplete understanding since there are often many other maps that can explain a classification equally well. In this paper, we propose to utilize a beam search… 
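
The abstract is cut off above, but based on the title and the stated goal of finding many equally valid explanations, a sketch of such a beam search over image regions is given below. It searches for several small patch subsets that each keep the classifier confident on their own; the patch grid, beam width, stopping threshold, and the `confidence_fn` interface are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of beam search over image patches: find several small
# patch subsets that each preserve the classifier's confidence on their own.
# Grid size, beam width, and threshold are illustrative assumptions.
import itertools
import numpy as np

def mask_to_patches(image, patches, grid):
    """Zero out everything except the selected grid patches."""
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    masked = np.zeros_like(image)
    for r, c in patches:
        masked[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = \
            image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
    return masked

def beam_search_explanations(image, confidence_fn, grid=7,
                             beam_width=5, threshold=0.9, max_size=10):
    """Return several small patch sets whose masked images keep confidence high."""
    all_patches = list(itertools.product(range(grid), range(grid)))
    beams = [frozenset()]          # each beam is a set of kept patches
    complete = []                  # patch sets that already exceed the threshold
    for _ in range(max_size):
        candidates = set()
        for beam in beams:
            for p in all_patches:
                if p not in beam:
                    candidates.add(beam | {p})
        scored = [(confidence_fn(mask_to_patches(image, s, grid)), s)
                  for s in candidates]
        scored.sort(key=lambda x: -x[0])
        complete += [s for conf, s in scored if conf >= threshold]
        if complete:
            break
        beams = [s for _, s in scored[:beam_width]]
    return complete[:beam_width]
```

In practice `confidence_fn` would wrap a CNN forward pass, for example returning the softmax probability of the predicted class for the masked image.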

Care for the Mind Amid Chronic Diseases: An Interpretable AI Approach Using IoT

An algorithmic solution for impactful social good: collaborative care of chronic diseases and depression in health sensing, using a novel interpretable deep learning model to predict depression from sensor data.

A Survey of Computer Vision Technologies In Urban and Controlled-environment Agriculture

This paper aims to familiarize CV researchers with agricultural applications and agricultural practitioners with the solutions offered by CV; it identifies major CV applications in controlled-environment agriculture (CEA) and surveys the state of the art in 68 technical papers using deep learning methods.

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

A study of a real-world AI application via interviews with 20 end-users of Merlin, a bird-identification app, finds that people express a need for practically useful information that can improve their collaboration with the AI system, and that they intend to use XAI explanations for calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI system, and giving constructive feedback to developers.

Machine learning for analysis of real nuclear plant data in the frequency domain (Annals of Nuclear Energy)

A domain adaptation methodology is subsequently developed to extend the simulated setting to real plant measurements; it uses self-supervised or unsupervised learning to align the simulated data with the actual plant data and detect perturbations, whilst classifying their type and estimating their location.

HIVE: Evaluating the Human Interpretability of Visual Explanations

HIVE (Human Interpretability of Visual Explanations) is introduced: a novel human evaluation framework that assesses the utility of explanations to human users in AI-assisted decision-making scenarios and enables falsifiable hypothesis testing, cross-method comparison, and human-centered evaluation of visual interpretability methods.

References

Showing 1-10 of 38 references

Visualizing Deep Networks by Optimizing with Integrated Gradients

I-GOS is proposed, which optimizes for a heatmap such that the classification scores on the masked image maximally decrease; its key idea is to compute descent directions based on integrated gradients instead of the normal gradient, which avoids local optima and speeds up convergence.
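
A rough sketch of one mask-update step in the spirit of this method is shown below: the mask is moved along an integrated-gradients direction so that the class score of the perturbed image decreases. The step size, number of path steps, mask resolution, and the area/smoothness regularizers used by the actual method are omitted; `model`, `baseline`, and `target` are placeholders.

```python
# A simplified mask-update step: use integrated gradients along the path
# from an empty mask to the current mask as the descent direction, so that
# the class score on the perturbed image decreases. Not the released I-GOS
# implementation; regularization terms are omitted.
import torch

def integrated_gradient_step(model, image, baseline, mask, target,
                             steps=20, lr=0.1):
    """One gradient step on the deletion mask (mask=1 replaces a pixel by baseline)."""
    grad_sum = torch.zeros_like(mask)
    for alpha in torch.linspace(0.0, 1.0, steps):
        m = (alpha * mask).detach().requires_grad_(True)
        perturbed = image * (1.0 - m) + baseline * m   # blend toward the baseline
        score = model(perturbed)[0, target]            # assumes batched (N, classes) output
        score.backward()
        grad_sum += m.grad
    direction = grad_sum / steps                       # integrated-gradients direction
    # Move the mask so the class score drops as fast as possible, keep it in [0, 1].
    return (mask - lr * direction).clamp(0.0, 1.0)
```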

Anchors: High-Precision Model-Agnostic Explanations

We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions…
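
A minimal sketch of the core quantity behind anchors, a rule's precision, is given below for tabular data: sample perturbations that keep the anchored features fixed and measure how often the model's prediction is unchanged. The resampling scheme and the names `model_predict`, `anchor_idx`, and `data` are simplifying assumptions rather than the paper's search procedure.

```python
# Estimate an anchor's precision: among perturbations that satisfy the rule
# (anchored features held fixed), how often does the prediction stay the same?
import numpy as np

def anchor_precision(model_predict, instance, anchor_idx, data,
                     n_samples=500, rng=None):
    """Fraction of rule-satisfying perturbations with the same prediction."""
    rng = rng or np.random.default_rng(0)
    original = model_predict(instance.reshape(1, -1))[0]
    # Perturb by resampling non-anchored features from rows of the dataset.
    rows = data[rng.integers(0, len(data), size=n_samples)].copy()
    rows[:, anchor_idx] = instance[anchor_idx]        # hold anchored features fixed
    return float(np.mean(model_predict(rows) == original))
```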

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
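
A minimal sketch of the design idea, assuming a PyTorch-style module, is shown below: stacks of 3x3 convolutions followed by max pooling, repeated until the network reaches 16-19 weight layers. The channel widths and number of blocks are illustrative, not the exact VGG-16/VGG-19 configurations.

```python
# VGG-style building block: several 3x3 convolutions followed by max pooling.
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convolutions cover the receptive field of a single 5x5
# convolution with fewer parameters and an extra nonlinearity.
features = nn.Sequential(
    vgg_block(3, 64, 2),
    vgg_block(64, 128, 2),
    vgg_block(128, 256, 3),
)
```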

ImageNet: A large-scale hierarchical image database

A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
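
The Grad-CAM computation itself is compact; a sketch is given below, assuming the convolutional feature maps have been captured (e.g., with a forward hook) and kept in the autograd graph. It weights each feature map by its spatially averaged gradient, sums over channels, and applies a ReLU; normalization and upsampling details differ from the released implementation.

```python
# Grad-CAM heatmap: channel weights are the spatially pooled gradients of the
# class score with respect to the feature maps; the weighted sum is rectified.
import torch
import torch.nn.functional as F

def grad_cam(activations, class_score):
    """activations: (1, K, H, W) feature maps still attached to the graph."""
    grads = torch.autograd.grad(class_score, activations, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)       # alpha_k: pooled gradients
    cam = F.relu((weights * activations).sum(dim=1))      # weighted sum over channels
    cam = cam / (cam.max() + 1e-8)                        # normalize to [0, 1]
    return cam                                            # (1, H, W) heatmap
```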

Minimal Sufficient Explanations for Factored Markov Decision Processes

A technique to explain policies for factored MDPs by populating a set of domain-independent templates, and a mechanism to determine a minimal set of templates that, viewed together, completely justify the policy.

A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems

A framework with step-by-step design guidelines paired with evaluation methods is developed to close the iterative design and evaluation cycles in multidisciplinary XAI teams, and ready-to-use summary tables of evaluation methods and recommendations for different goals in XAI research are provided.

FACE: Feasible and Actionable Counterfactual Explanations

A new line of Counterfactual Explanations research is proposed, aimed at providing actionable and feasible paths to transform a selected instance into one that meets a certain goal, based on shortest-path distances defined via density-weighted metrics.
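
A simplified sketch of the underlying search is shown below: training points are connected into a radius graph, edge costs are inflated when either endpoint lies in a low-density region, and Dijkstra's algorithm returns the cheapest path from the query to any point the model assigns the desired outcome. The density estimate, radius, and weighting scheme are illustrative stand-ins for the paper's density-weighted metrics.

```python
# Shortest feasible path to a counterfactual: Dijkstra over a radius graph
# whose edge costs penalize steps through low-density regions.
import heapq
import numpy as np

def face_path(X, query_idx, target_mask, density, radius=1.0):
    """X: (n, d) points; target_mask: bool array of points with the desired outcome;
    density: per-point density estimates. Returns a list of indices or None."""
    n = len(X)
    dist = np.full(n, np.inf)
    prev = np.full(n, -1)
    dist[query_idx] = 0.0
    heap = [(0.0, query_idx)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        if target_mask[u] and u != query_idx:          # reached a counterfactual
            path = [u]
            while prev[path[-1]] != -1:
                path.append(prev[path[-1]])
            return path[::-1]
        gaps = np.linalg.norm(X - X[u], axis=1)
        for v in np.where((gaps <= radius) & (gaps > 0))[0]:
            # Penalize edges whose endpoints lie in low-density regions.
            w = gaps[v] / max(min(density[u], density[v]), 1e-8)
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return None
```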

Compiling Neural Networks into Tractable Boolean Circuits

This work shows how to reduce a neural network over binary inputs and step activation functions into a Boolean circuit, then compile this Boolean circuit into a tractable one (a core problem in the domain of knowledge compilation).
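
The first half of that reduction is straightforward to illustrate: with binary inputs and a step activation, each neuron computes a Boolean function whose truth table can be enumerated directly, as in the sketch below. Compiling the resulting circuit into a tractable form (such as an OBDD or SDD) is the knowledge-compilation step and is not shown.

```python
# A neuron with binary inputs and a step activation defines a Boolean
# function; enumerate its truth table over all input assignments.
from itertools import product

def neuron_truth_table(weights, bias):
    """Boolean function computed by step(w . x + b) over binary inputs."""
    table = {}
    for x in product([0, 1], repeat=len(weights)):
        pre_activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        table[x] = int(pre_activation >= 0)            # step activation
    return table

# Example: a neuron implementing "at least two of three inputs are on".
print(neuron_truth_table([1, 1, 1], -2))
```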

Nonparametric Statistics in Human–Computer Interaction

This chapter organizes and illustrates multiple nonparametric procedures, contrasting them with their parametric counterparts, and gives guidance on when to use nonparametric analyses and how to interpret and report their results.
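
To illustrate the parametric/nonparametric contrast the chapter draws, the snippet below compares a paired t-test with its rank-based counterpart, the Wilcoxon signed-rank test, on made-up task-time data; the numbers are purely illustrative.

```python
# Paired t-test assumes roughly normal differences; the Wilcoxon signed-rank
# test only uses ranks, so it is robust to outliers like the two slow trials.
from scipy import stats

times_ui_a = [12.1, 10.4, 35.0, 11.8, 13.2, 9.9, 14.5, 40.2]   # seconds
times_ui_b = [10.3,  9.8, 12.0, 11.0, 12.5, 9.1, 13.0, 12.2]

t_stat, t_p = stats.ttest_rel(times_ui_a, times_ui_b)          # parametric
w_stat, w_p = stats.wilcoxon(times_ui_a, times_ui_b)           # nonparametric

print(f"paired t-test p={t_p:.3f}, Wilcoxon p={w_p:.3f}")
```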