Understanding Public Evaluation: Quantifying Experimenter Intervention

Julie Rico Williamson and John Williamson. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.
Public evaluations are popular because some research questions can only be answered by turning "to the wild." Different approaches place experimenters in different roles during deployment, which has implications for the kinds of data that can be collected and the potential bias introduced by the experimenter. This paper expands our understanding of how experimenter roles impact public evaluations and provides an empirical basis to consider different evaluation approaches. We completed an… 


Research in the Wild via Performance: Challenges, Ethics and Opportunities
This half-day workshop seeks to bring together designers who use live performance in research in the wild.
Designing Socially Acceptable Hand-to-Face Input
An elicitation study conducted in a busy public space, in which pairs of users were asked to generate unobtrusive, socially acceptable hand-to-face input actions, is described, along with five design strategies: miniaturizing, obfuscating, screening, camouflaging, and re-purposing.
Design Strategies for Overcoming Failures on Public Interactive Displays
The types of failures that can arise on public interactive displays (PIDs) and how redundancy measures can be designed to ensure that a PID remains relevant even when a failure occurs are investigated.
Counterfactual Thinking: What Theories Do in Design
ABSTRACT This essay addresses a foundational topic in applied sciences with interest in design: how do theories inform design? Previous work has attributed theory-use to abduction and deduction.
Does the Public Still Look at Public Displays?
This work identifies how user engagement with public displays has changed over the past 10 years and examines how the pervasiveness of smartphones and other connected devices has modified whether users notice public displays and how they interact with them.
Evaluation Framework for Public Interactive Installations
The aim of this work is to outline the most commonly used methods and metrics by which these types of installations are evaluated and to create a framework that allows researchers in this blossoming field to more easily prepare and specify an evaluation for the artifact in question.
Tangible Objects for Reminiscing in Dementia Care
This work conducts contextual inquiries over a week to learn how 80 people with varying stages of dementia reminisce throughout the day and presents resulting needs and three tangible prototypes designed to facilitate reminiscence.
Face-the-Waste - Learning about Food Waste through a Serious Game
A serious game called Face-the-Waste is presented that is meant to increase users' food literacy and educate them about the impact and development of food waste; the work demonstrated that such provocations can add a new layer to the design of serious games.
Outside Where? A Survey of Climates and Built Environments in Studies of HCI outdoors
We found significant gaps in the climates and built environments used as settings for studies of HCI outdoors. The experience of using a computer outdoors varies widely depending on location-specific…
Multimodal and Multicultural Field Agents: Considerations for “outside-the-lab” Studies
  • M. Rehm, in Multimodal Agents for Ageing and Multicultural Societies, 2021


Deep cover HCI
Controlling In-the-Wild Evaluation Studies of Public Displays
It is proposed that a controlled in-the-wild study offers a viable alternative when evaluating more complex interaction methods in public space, thereby potentially reducing the practical effort required to involve participants in in-the-wild studies.
What do lab-based user studies tell us about in-the-wild behavior?: insights from a study of museum interactives
This work analyzes and compares data from lab-based user studies of prototype museum installations and the subsequent deployment of these systems in a museum, finding that social behavior patterns in the museum differed in several aspects between the settings.
How to evaluate public displays
An overview of study types, paradigms, and methods for evaluation both in the lab and in the real world is provided and a set of guidelines for researchers and practitioners alike to be applied when evaluating public displays are provided.
Interaction design gone wild: striving for wild theory
Interaction designers, with less technical expertise and modest resources, can conjure, create, and deploy a diversity of prototypes in all manner of places in the everyday world. The outcome of…
The Anonymous Audience Analyzer: Visualizing Audience Behavior in Public Space
This work presents a tool that allows scenes in front of a display to be reconstructed from Kinect data and visualized in a virtual environment so that the privacy of the audience can be preserved while allowing display owners to run in-depth investigations of their display installations.
Into the wild: challenges and opportunities for field trial methods
Using a trial of trials, the practices of investigators and participants were examined, documenting demand characteristics, the interdependence of how trials are run and the results they produce, and how trial results can depend on the insights of a subset of trial participants.
Deep Cover HCI: A Case for Covert Research in HCI
It is argued that covert research can be completed rigorously and ethically to expand knowledge of ubiquitous technologies; the authors show there is clear value in this approach, reflect on the ethical issues of such investigations, and describe ethical guidelines for completing Deep Cover HCI research.
Being in the thick of in-the-wild studies: the challenges and insights of researcher participation
The value of researcher participation in contributing to the way a researcher understands participant responses: aiding rapport, promoting empathy and stimulating the researcher to reflect on their own assumptions are demonstrated.
Exiting the Cleanroom: On Ecological Validity and Ubiquitous Computing
It is found that developers have difficulty creating prototypes that are both robust enough for realistic use and able to handle ambiguity and error and that they struggle to gather useful data from evaluations because critical events occur infrequently.