PreCall: A Visual Interface for Threshold Optimization in ML Model Selection

@article{Kinkeldey2019PreCallAV,
  title={PreCall: A Visual Interface for Threshold Optimization in ML Model Selection},
  author={Christoph Kinkeldey and Claudia M{\"u}ller-Birn and Tom G{\"u}lenman and Jesse Josua Benjamin and Aaron L Halfaker},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.05131}
}
Machine learning systems are ubiquitous in many kinds of digital applications and have a huge impact on our everyday lives. However, a lack of explainability and interpretability in such systems hinders meaningful participation by people, especially those without a technical background. Interactive visual interfaces (e.g., providing means for manipulating parameters in the user interface) can help tackle this challenge. In this position paper we present PreCall, an interactive visual interface…
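At its core, PreCall supports choosing a classification threshold by showing how precision and recall change as that threshold moves. The following sketch is illustrative only (not the authors' implementation); the labels, scores, and the use of scikit-learn are assumptions made for the example, but it computes the same trade-off with a standard threshold sweep:

```python
# Minimal sketch: precision/recall as a function of the classification threshold,
# the quantities a threshold-optimization interface like PreCall exposes.
from sklearn.metrics import precision_recall_curve

# y_true: made-up ground-truth labels (e.g., "damaging edit" or not)
# y_score: made-up probabilities as returned by a classifier
y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.10, 0.35, 0.40, 0.80, 0.55, 0.90, 0.65, 0.20]

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```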

Citations

An alternative confusion matrix implementation for PreCall

This work examines the literature on visualizing the performance of machine learning classifiers for users with limited machine learning experience, and reviews the ORES API and its relevant endpoints and parameters.
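For illustration, the classifier probabilities that PreCall thresholds come from the public ORES scoring service. The sketch below assumes the v3 REST endpoint layout and uses a made-up revision ID; field names in the response may differ across deployments:

```python
# Hedged sketch of querying the ORES scoring API for a single revision.
import requests

# Score one English Wikipedia revision (ID is hypothetical) with the "damaging" model.
url = "https://ores.wikimedia.org/v3/scores/enwiki/123456789/damaging"
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())  # nested JSON containing the prediction and class probabilities
```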

Algorithmic Governance of Wikipedia (in German)

With the growing importance of algorithmic systems in societal contexts, the debate about how they should be designed is intensifying. The Wikipedia community already has a variety of…

References

A Review of User Interface Design for Interactive Machine Learning

A structural and behavioural model of a generalised IML system is proposed, and solution principles for building effective interfaces for IML are identified, along with strands of user interface research key to unlocking more efficient and productive non-expert interactive machine learning applications.

ORES: Facilitating remediation of Wikipedia's socio-technical problems

The theoretical mechanisms of social change that ORES enables are discussed, and case studies in participatory machine learning around ORES from the three years since its deployment are detailed.

ModelTracker: Redesigning Performance Analysis Tools for Machine Learning

ModelTracker is presented, an interactive visualization that subsumes information contained in numerous traditional summary statistics and graphs while displaying example-level performance and enabling direct error examination and debugging.

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences

From a light scan of the literature, it is demonstrated that there is considerable scope to infuse more results from the social and behavioural sciences into explainable AI, and some key results from these fields that are relevant to explainable AI are presented.

"Meaningful Information" and the Right to Explanation

There is no single, neat statutory provision labeled the “right to explanation” in Europe’s new General Data Protection Regulation (GDPR). But nor is such a right illusory. Responding to two…

Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

It is argued that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. It is feared that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy".

Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

It is argued that a right to an explanation in the EU General Data Protection Regulation is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain.

Interactive optimization for steering machine classification

ManiMatrix is presented, a system that provides controls and visualizations that enable system builders to refine the behavior of classification systems in an intuitive manner. Results show that users are able to quickly and effectively modify the decision boundaries of classifiers to tailor their behavior to the problems at hand.
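In the same spirit of steering a classifier through confusion-matrix preferences, a minimal sketch (illustrative only, not ManiMatrix itself) might search for the smallest threshold whose confusion matrix satisfies a precision constraint:

```python
# Illustrative sketch: pick the lowest threshold that meets a precision target.
import numpy as np

def threshold_for_precision(y_true, y_score, min_precision):
    """Return the smallest threshold whose precision is at least min_precision."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    for t in np.sort(np.unique(y_score)):
        pred = y_score >= t
        tp = np.sum(pred & (y_true == 1))  # true positives at this threshold
        fp = np.sum(pred & (y_true == 0))  # false positives at this threshold
        if tp + fp > 0 and tp / (tp + fp) >= min_precision:
            return t
    return None  # no threshold meets the constraint

# Made-up labels and scores for demonstration; prints 0.6.
print(threshold_for_precision([0, 1, 1, 0, 1], [0.2, 0.6, 0.9, 0.5, 0.7], 0.9))
```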

Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

SSRN Scholarly Paper ID 2972855, 2017