Corpus ID: 242756892

Characterizing Human Explanation Strategies to Inform the Design of Explainable AI for Building Damage Assessment

Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer
Explainable AI (XAI) is a promising means of supporting human-AI collaboration in high-stakes visual detection tasks, such as detecting damage from satellite imagery, where fully automated approaches are unlikely to be perfectly safe and reliable. However, most existing XAI techniques are not informed by an understanding of humans' task-specific needs for explanations. We therefore took a first step toward understanding what forms of XAI humans require in damage detection tasks. We…



A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Develops a framework of step-by-step design guidelines paired with evaluation methods to close the iterative design-and-evaluation cycle in multidisciplinary XAI teams, and provides ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
CrowdLearn: A Crowd-AI Hybrid System for Deep Learning-based Damage Assessment Applications
Proposes CrowdLearn, a crowd-AI hybrid system that leverages a crowdsourcing platform to troubleshoot, tune, and eventually improve black-box AI algorithms by combining crowd intelligence with machine intelligence for deep learning-based damage assessment applications.
Explanation in Artificial Intelligence: Insights from the Social Sciences
Creating xBD: A Dataset for Assessing Building Damage from Satellite Imagery
xBD provides pre- and post-event multi-band satellite imagery from a variety of disaster events, along with building polygons, classification labels for damage types, ordinal labels of damage level, and corresponding satellite metadata; it is the largest building damage assessment dataset to date.
PulseSatellite: A tool using human-AI feedback loops for satellite image analysis in humanitarian contexts
Presents PulseSatellite, a collaborative satellite image analysis tool that leverages neural network models which can be retrained on the fly and adapted to specific humanitarian contexts and geographies.
Grounded Theory: An Exploration of Process and Procedure
Offers an in-depth discussion of the positions of Glaser and Strauss on the different phases of data analysis, drawing on Glaser's work and on Strauss's and Strauss and Corbin's (1990) work, and specifically addresses coding procedures, verification, and the issue of forcing versus emergence.
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 2020.
xView2 first place, 2019. URL: https://github.com/DIUx-xView/xView2_first_place