HypoML: Visual Analysis for Hypothesis-based Evaluation of Machine Learning Models

@article{Wang2021HypoMLVA,
  title={HypoML: Visual Analysis for Hypothesis-based Evaluation of Machine Learning Models},
  author={Qianwen Wang and William Alexander and John Pegg and Huamin Qu and Min Chen},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2021},
  volume={27},
  pages={1417-1426}
}
In this paper, we present a visual analytics tool for enabling hypothesis-based evaluation of machine learning (ML) models. We describe a novel ML-testing framework that combines the traditional statistical hypothesis testing (commonly used in empirical research) with logical reasoning about the conclusions of multiple hypotheses. The framework defines a controlled configuration for testing a number of hypotheses as to whether and how some extra information about a “concept” or “feature” may… 
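
The statistical half of this framework can be illustrated with a standard paired significance test. Below is a minimal Python sketch (not the authors' implementation) that uses McNemar's test to ask whether adding a "concept" feature changes a classifier's per-example correctness; the outcome arrays are hypothetical.

    import numpy as np
    from scipy.stats import chi2

    def mcnemar_test(correct_a, correct_b):
        # Paired test on per-example correctness of two models.
        # b: examples model A got right and model B got wrong; c: the reverse.
        correct_a = np.asarray(correct_a, dtype=bool)
        correct_b = np.asarray(correct_b, dtype=bool)
        b = int(np.sum(correct_a & ~correct_b))
        c = int(np.sum(~correct_a & correct_b))
        if b + c == 0:
            return 0.0, 1.0  # models agree everywhere; no evidence of a difference
        stat = (abs(b - c) - 1) ** 2 / (b + c)  # continuity-corrected chi-square
        return stat, chi2.sf(stat, df=1)

    # Hypothetical correctness vectors: baseline model vs. model given the concept.
    base = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1], dtype=bool)
    with_concept = np.array([1, 1, 1, 0, 1, 1, 1, 0, 1, 1], dtype=bool)
    stat, p = mcnemar_test(base, with_concept)
    print(f"chi2={stat:.3f}, p={p:.3f}")  # reject "no effect" if p is below alpha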

Citations

A Grammar for Hypothesis-Driven Visual Analysis

A novel grammar to express hypothesis-based analytic questions for visual analysis is proposed and can reformulate abstract classes of visual analysis goals, such as analytic and data-related tasks, in a way that is suitable for analysis and automation.

Trinary Tools for Continuously Valued Binary Classifiers

Why? Why not? When? Visual Explanations of Agent Behaviour in Reinforcement Learning

PolicyExplainer, a visual analytics interface that lets the user directly query an autonomous agent, is introduced; its visual approach is found to promote trust in and understanding of agent decisions better than a state-of-the-art text-based explanation approach.

VSumVis: Interactive Visual Understanding and Diagnosis of Video Summarization Model

VSumVis, an interactive visual analytics system for understanding and diagnosing video summarization models, is presented.

References

Showing 1-10 of 45 references

ColorNet: Investigating the importance of color spaces for image classification

The model takes an RGB image as input, simultaneously converts it into 7 different color spaces, and uses these as inputs to individual DenseNets, reducing computation overhead and the number of hyperparameters required.
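
As a rough sketch of the input stage described above (assuming OpenCV; the paper's full set of seven spaces is abbreviated here to four conversions):

    import cv2
    import numpy as np

    rgb = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image

    # Each color-space tensor would feed its own small DenseNet branch.
    branches = {
        "rgb": rgb,
        "hsv": cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV),
        "lab": cv2.cvtColor(rgb, cv2.COLOR_RGB2Lab),
        "yuv": cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV),
        "ycrcb": cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb),
    }
    for name, img in branches.items():
        print(name, img.shape, img.dtype)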

Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models

The design and implementation of an interactive visual analytics system, Prospector, is described; its interactive partial-dependence diagnostics and support for localized inspection allow data scientists to understand how and why specific datapoints are predicted as they are.
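
Partial dependence, the diagnostic Prospector builds on, can be approximated in a few lines; a minimal sketch under the usual definition (the model and data below are stand-ins, not Prospector's API):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    def partial_dependence(model, X, feature_idx, grid):
        # Sweep one feature over a grid and average the model's predictions,
        # marginalizing over the observed values of all other features.
        values = []
        for v in grid:
            X_mod = X.copy()
            X_mod[:, feature_idx] = v
            values.append(model.predict_proba(X_mod)[:, 1].mean())
        return np.array(values)

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = GradientBoostingClassifier().fit(X, y)
    grid = np.linspace(X[:, 2].min(), X[:, 2].max(), 20)
    print(partial_dependence(model, X, feature_idx=2, grid=grid).round(3))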

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

YOLOv4: Optimal Speed and Accuracy of Object Detection

This work uses new features (WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, DropBlock regularization, and CIoU loss) and combines some of them to achieve state-of-the-art results: 43.5% AP on the MS COCO dataset at a real-time speed of ~65 FPS on a Tesla V100.
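
Among the listed components, the Mish activation has a compact closed form, mish(x) = x * tanh(softplus(x)); a quick NumPy sketch using the numerically stable softplus:

    import numpy as np

    def mish(x):
        # mish(x) = x * tanh(softplus(x)); logaddexp(0, x) = ln(1 + e^x), overflow-safe.
        return x * np.tanh(np.logaddexp(0.0, x))

    x = np.linspace(-4.0, 4.0, 9)
    print(np.round(mish(x), 3))  # smooth, non-monotonic for x < 0, unbounded above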

The What-If Tool: Interactive Probing of Machine Learning Models

The What-If Tool is an open-source application that allows practitioners to probe, visualize, and analyze ML systems with minimal coding, and lets practitioners measure systems according to multiple ML fairness metrics.
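
One of the fairness metrics such a tool reports, demographic parity, reduces to a rate comparison; a minimal sketch (the arrays are hypothetical, and this is not the What-If Tool's own API):

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        # Absolute gap in positive-prediction rates between the two groups.
        y_pred = np.asarray(y_pred, dtype=bool)
        group = np.asarray(group, dtype=bool)
        return abs(y_pred[group].mean() - y_pred[~group].mean())

    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model's binary predictions
    group = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # protected-attribute indicator
    print(demographic_parity_difference(y_pred, group))  # 0.5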

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations

Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural…

Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models

This design probe investigated why and how professional data scientists interpret models, and how interface affordances can support them in answering questions about model interpretability, showing that interpretability is not a monolithic concept.

Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods

The approach is based on graphical perception—the visual decoding of information encoded on graphs—and it includes both theory and experimentation to test the theory, providing a guideline for graph construction.

VIS4ML: An Ontology for Visual Analytics Assisted Machine Learning

This paper reinterprets the traditional VA pipeline to encompass model-development workflows and introduces necessary definitions, rules, syntaxes, and visual notations for formulating VIS4ML and makes use of semantic web technologies for implementing it in the Web Ontology Language (OWL).