Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making

@article{Zytek2021SibylUA,
  title={Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making},
  author={Alexandra Zytek and Dongyu Liu and Rhema Vaithianathan and Kalyan Veeramachaneni},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2021},
  volume={PP},
  pages={1-1}
}
Machine learning (ML) is being applied to a diverse and ever-growing set of domains. In many cases, domain experts – who often have no expertise in ML or data science – are asked to use ML predictions to make high-stakes decisions. Multiple ML usability challenges can appear as a result, such as lack of user trust in the model, inability to reconcile human-ML disagreement, and ethical concerns about oversimplification of complex problems to a single algorithm output. In this paper, we investigate…

The Need for Interpretable Features: Motivation and Taxonomy

This work argues that the term “interpretable feature” is neither specific nor detailed enough to capture the full extent to which features impact the usefulness of ML explanations, and that a formal taxonomy of the feature properties that may be required by domain experts taking real-world actions is needed.

MTV: Visual Analytics for Detecting, Investigating, and Annotating Anomalies in Multivariate Time Series

This paper investigates current practices for detecting and investigating anomalies in time series data in industrial contexts, identifies the corresponding needs, and introduces MTV, a visual analytics system to support this workflow.

Machine Learning in Transaction Monitoring: The Prospect of xAI

This study finds that xAI requirements depend on the liable party in the TM process, which changes depending on whether TM is augmented or automated, and suggests a use-case-specific approach to xAI to adequately foster the adoption of ML in TM.

Visualization Guidelines for Model Performance Communication Between Data Scientists and Subject Matter Experts

A set of communication guidelines and recommended visualizations for communicating model performance is derived from interviews with both data scientists and subject matter experts at the same organization.

How Cognitive Biases Affect XAI-assisted Decision-making: A Systematic Review

A heuristic map is presented that matches human cognitive biases with explainability techniques from the XAI literature, structured around XAI-aided decision-making, in order to chart directions for future XAI systems that better align with people's cognitive processes.

Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support

AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts. As public sector agencies begin to adopt ADS, it is critical that we…

“Why Do I Care What’s Similar?” Probing Challenges in AI-Assisted Child Welfare Decision-Making through Worker-AI Interface Design Concepts

Findings from design interviews with 12 social workers who use an algorithmic decision support tool (ADS) to assist their day-to-day child maltreatment screening decisions suggest how ADS may be better designed to support the roles of human decision-makers in social decision-making contexts.

Towards a Learner-Centered Explainable AI: Lessons from the learning sciences

Drawing upon approaches and theories from the learning sciences, a framework for the learner-centered design and evaluation of XAI systems is proposed.

Devising a Usability Development Life Cycle (UDLC) Model for Enhancing Usability and User Experience in Interactive Applications

This study proposes a systematic and comprehensive maturity model, the Usability Development Life Cycle (UDLC) model, and demonstrates enhanced usability and improved user satisfaction by applying it to a website and a mobile application with weak usability and poor user experience.

Urban-regional disparities in mental health signals in Australia during the COVID-19 pandemic: a study via Twitter data and machine learning models

This study establishes a novel empirical framework using machine learning techniques to measure the urban-regional disparity of the public’s mental health signals in Australia during the pandemic.

References

Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models

This work investigated why and how professional data scientists interpret models and how interface affordances can support data scientists in answering questions about model interpretability, showing that interpretability is not a monolithic concept.

RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records

This study designs a visual analytics solution to increase the interpretability and interactivity of RNNs through a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers, and demonstrates how substantial changes were made to the state-of-the-art RNN model RETAIN in order to make use of temporal information and increase interactivity.

explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning

We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods…

Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models

Prospector, an interactive visual analytics system, provides interactive partial dependence diagnostics and support for localized inspection, allowing data scientists to understand how and why specific data points are predicted as they are.
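
As a purely illustrative sketch of the partial dependence diagnostics this line of work builds on (not code from Prospector), the snippet below uses scikit-learn; the dataset, model, and feature names are arbitrary assumptions.

```python
# Illustrative sketch only: static partial dependence with scikit-learn,
# i.e., the kind of diagnostic that Prospector exposes interactively.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Fit a black-box regressor on a public dataset (arbitrary choice).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Show how the averaged prediction changes as each chosen feature is varied
# while the remaining features keep their observed values.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"])
```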

A Unified Approach to Interpreting Model Predictions

SHAP (SHapley Additive exPlanations), a unified framework for interpreting predictions, is presented; it unifies six existing methods and introduces new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
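
As a hedged illustration of the Shapley-additive attributions this reference introduces (not code from the paper), the sketch below uses the open-source `shap` package with an arbitrary XGBoost classifier and public dataset.

```python
# Illustrative sketch only: per-prediction feature attributions with the
# open-source `shap` package (model and dataset are arbitrary assumptions).
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Train a simple tree-based classifier on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100, eval_metric="logloss").fit(X, y)

# Shapley additive explanations: each feature receives a contribution that,
# together with the base value, sums to the model's output for an instance.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Visualize the attributions for the first instance.
shap.plots.waterfall(shap_values[0])
```

For models where a tree-specific explainer does not apply, the paper's model-agnostic Kernel SHAP variant serves the same role.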

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

C. Rudin. Nature Machine Intelligence, 2019.
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models

ActiVis, an interactive visualization system for interpreting large-scale deep learning models and results, is developed, deployed, and iteratively improved; it supports exploring complex deep neural network models at both the instance and subset level.

Human-Centred Machine Learning

A human-centered understanding of machine learning in human context can lead not only to more usable machine learning tools, but to new ways of framing learning computationally.

Towards A Rigorous Science of Interpretable Machine Learning

This position paper defines interpretability, describes when interpretability is needed (and when it is not), suggests a taxonomy for rigorous evaluation, and exposes open questions toward a more rigorous science of interpretable machine learning.

Designing Theory-Driven User-Centric Explainable AI

This paper proposes a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across philosophy and psychology, identifies pathways along which human cognitive patterns drive needs for building XAI, and shows how XAI can mitigate common cognitive biases.