explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning

@article{Spinner2020explAInerAV,
  title={explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning},
  author={Thilo Spinner and Udo Schlegel and Hanna Sch{\"a}fer and Mennatallah El-Assady},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2020},
  volume={26},
  pages={1064-1074}
}
We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models. [...] To operationalize the framework, we present explAIner, a visual analytics system for interactive and explainable machine learning that instantiates all phases of the suggested pipeline within the commonly used TensorBoard environment. We performed …
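
As a rough illustration of the understand/diagnose/refine loop described in the abstract (and not the explAIner system itself), the sketch below computes a post-hoc attribution with Captum's Integrated Gradients and logs it to TensorBoard for inspection; the toy PyTorch model, the choice of Captum as the XAI method, and the log directory are assumptions made purely for this example.

# Minimal sketch, not the explAIner implementation: attach one XAI method
# (Integrated Gradients via Captum) to a toy model and surface the result
# in the TensorBoard environment mentioned in the abstract.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients          # assumed XAI library
from torch.utils.tensorboard import SummaryWriter

# Toy classifier standing in for the model under inspection (assumption).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

inputs = torch.randn(4, 8)                           # a small batch of instances
ig = IntegratedGradients(model)
attributions = ig.attribute(inputs, target=0)        # per-feature relevance scores

# "Diagnose" step: log the attributions so they can be inspected in
# TensorBoard alongside the usual training summaries.
writer = SummaryWriter(log_dir="runs/xai_sketch")    # hypothetical log directory
writer.add_histogram("ig_attributions/class_0", attributions.detach(), global_step=0)
writer.close()

Launching `tensorboard --logdir runs` then makes the logged attributions browsable next to the model's other summaries.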
ExplainExplore: Visual Exploration of Machine Learning Explanations
TLDR
This work introduces EXPLAINEXPLORE: an interactive explanation system to explore explanations that fit the subjective preference of data scientists, and leverages the domain knowledge of the data scientist to find optimal parameter settings and instance perturbations.
A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes
TLDR
This paper presents a visual analytics framework for the multi-level exploration of transfer learning processes when training deep neural networks, and establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task.
Interpretable Visualizations of Deep Neural Networks for Domain Generation Algorithm Detection
TLDR
This work presents a visual analytics system that provides designers of deep learning models for the classification of domain generation algorithms with understandable interpretations of their model; it clusters the activations of the model’s nodes and leverages decision trees to explain these clusters.
ModelSpeX: Model Specification Using Explainable Artificial Intelligence Methods
TLDR
ModelSpeX is proposed, a visual analytics workflow to interactively extract human-centered rule-sets and generate model specifications from black-box models (e.g., neural networks), which enables users to reason about the underlying problem, to extract decision rule sets, and to evaluate the suitability of the model for a particular task.
Marcelle: Composing Interactive Machine Learning Workflows and Interfaces
TLDR
An architectural model for toolkits dedicated to the design of human interactions with machine learning is presented, built upon a modular collection of interactive components that can be composed to build interactive machine learning workflows, using reactive pipelines and composable user interfaces.
A Survey of Visual Analytics Techniques for Machine Learning
TLDR
This work systematically reviews 259 papers published in the last ten years, together with representative works before 2010, to better identify which research topics are promising and how to apply relevant techniques in visual analytics.
DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models
TLDR
DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets, supports exploratory analysis of model decisions by combining the strengths of counterfactual explanations at instance- and subgroup-levels.
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
TLDR
This paper eschews prior expertise- and role-based categorizations of interpretability stakeholders in favor of a more granular framework that decouples stakeholders’ knowledge from their interpretability needs, and distills a hierarchical typology of stakeholder needs.
Notions of explainability and evaluation approaches for explainable artificial intelligence
TLDR
This systematic review contributes to the body of knowledge by clustering all the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability and the evaluation approaches for XAI methods.
XplaiNLI: Explainable Natural Language Inference through Visual Analytics
TLDR
XplaiNLI is proposed, an eXplainable, interactive visualization interface that computes NLI with different methods and provides explanations for the decisions made by the different approaches.

References

Showing 1-10 of 92 references
A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations
TLDR
A visual analytics workflow to help data scientists and domain experts explore, diagnose, and understand the decisions made by a binary classifier that leverages “instance-level explanations”, measures of local feature relevance that explain single instances, and uses them to build a set of visual representations that guide the users in their investigation.
Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models
TLDR
Manifold is presented, a framework that utilizes visual analysis techniques to support interpretation, debugging, and comparison of machine learning models in a more transparent and interactive manner and is designed as a generic framework.
Going beyond Visualization. Verbalization as Complementary Medium to Explain Machine Learning Models
In this position paper, we argue that a combination of visualization and verbalization techniques is beneficial for creating broad and versatile insights into the structure and decision-making …
Towards better analysis of machine learning models: A visual analytics perspective
TLDR
This paper presents a comprehensive analysis and interpretation of interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, with a focus on big data analytics.
Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models
TLDR
The design and implementation of an interactive visual analytics system, Prospector, that provides interactive partial dependence diagnostics and support for localized inspection allows data scientists to understand how and why specific datapoints are predicted as they are.
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
TLDR
Overall, users find the TensorFlow Graph Visualizer useful for understanding, debugging, and sharing the structures of their models.
Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
TLDR
A survey of the role of visual analytics in deep learning research is presented, which highlights its short yet impactful history and thoroughly summarizes the state-of-the-art using a human-centered interrogative framework, focusing on the Five W's and How.
Visual Analytics for Topic Model Optimization based on User-Steerable Speculative Execution
TLDR
An explainable, mixed-initiative topic modeling framework that integrates speculative execution into the algorithmic decision-making process and visualizes the model space of the novel incremental hierarchical topic modeling algorithm, unveiling its inner workings.
RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records
TLDR
This study designs a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers, and demonstrates how it made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity.
RuleMatrix: Visualizing and Understanding Classifiers with Rules
TLDR
RuleMatrix, a matrix-based visualization of rules, is designed to help users navigate and verify the rules and the black-box model; it is evaluated via two use cases and a usability study.