Towards Better User Studies in Computer Graphics and Vision

Zoya Bylinskii, Laura Mariah Herman, Aaron Hertzmann, Stefanie Hutka, Yile Zhang
Online crowdsourcing platforms make it easy to evaluate algorithm outputs with surveys that ask questions like “which image is better, A or B?” The proliferation of these “user studies” in vision and graphics research papers has led to an increase of hastily conducted studies that are sloppy and uninformative at best, and potentially harmful and misleading at worst. We argue that more attention needs to be paid to both the design and reporting of user studies in computer vision and…
1 Citation


HIVE: Evaluating the Human Interpretability of Visual Explanations

HIVE (Human Interpretability of Visual Explanations) is introduced: a novel human evaluation framework that assesses the utility of explanations to human users in AI-assisted decision-making scenarios, and enables falsifiable hypothesis testing, cross-method comparison, and human-centered evaluation of visual interpretability methods.



Usability evaluation considered harmful (some of the time)

Current practice in Human-Computer Interaction, as encouraged by educational institutes, academic review processes, and institutions with usability groups, advocates usability evaluation as a critical…

In the eye of the beholder: A viewer-defined conception of online visual creativity

Despite substantial interest in developing theoretical models and technology for creativity enhancement, existing creativity research across various fields lacks a user-centered definition of…

A Primitive for Manual Hatching

In art, hatching means drawing patterns of roughly parallel lines. Even with skill and time, an artist can find these patterns difficult to create and edit. Our new artistic primitive, the hatching…

The effect of shape and illumination on material perception: model and applications

…the effects of illumination and geometry in material perception across such a large collection of varied appearances. We connect our findings to those in the literature, discussing how previous…

Interviewing Users: How to Uncover Compelling Insights

The Benchmark Lottery

The notion of a “benchmark lottery,” describing the overall fragility of the ML benchmarking process, is proposed, and it is argued that this may lead to biased progress in the community.

The Role of AI Attribution Knowledge in the Evaluation of Artwork

Artwork is increasingly being created by machines through algorithms with little or no input from humans. Yet very little is known about people’s attitudes toward, and evaluations of, artwork generated by…

Countering Racial Bias in Computer Graphics Research

A variety of improvements to quantitative measures and qualitative practices are proposed, and novel, open research problems are posed, to broaden research horizons to encompass all of humanity.

“This is a Problem, Don’t You Agree?” Framing and Bias in Human Evaluation for Natural Language Generation

Despite recent efforts reviewing current human evaluation practices for natural language generation (NLG) research, the lack of reported question wording and the potential for framing effects or…