Undo and erase events as indicators of usability problems

@inproceedings{Akers2009UndoAE,
  title={Undo and erase events as indicators of usability problems},
  author={David Akers and Matthew S. Simpson and Robin Jeffries and Terry Winograd},
  booktitle={Proceedings of the SIGCHI Conference on Human Factors in Computing Systems},
  year={2009}
}
One approach to reducing the costs of usability testing is to facilitate the automatic detection of critical incidents: serious breakdowns in interaction that stand out during software use. This research evaluates the use of undo and erase events as indicators of critical incidents in Google SketchUp (a 3D-modeling application), measuring an indicator's usefulness by the numbers and types of usability problems discovered. We compared problems identified using undo and erase events to problems… 
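As a rough illustration of the detection idea, the sketch below scans a session event log and flags each undo or erase event together with the events that preceded it. This is a minimal sketch, not the paper's instrumentation: the Event structure, the tool names, and the ten-second context window are all assumptions made for the example.

# Hypothetical sketch: flag undo/erase events in an instrumented event log
# as candidate critical incidents, keeping a window of preceding events
# as context for later review.
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since session start
    kind: str          # e.g. "undo", "erase", "draw", "orbit" (invented labels)

def candidate_incidents(events, window=10.0):
    """Yield (trigger, context) pairs: each undo/erase event plus the
    events within `window` seconds before it."""
    for i, ev in enumerate(events):
        if ev.kind in ("undo", "erase"):
            context = [e for e in events[:i]
                       if ev.timestamp - e.timestamp <= window]
            yield ev, context

# Example: two candidate incidents in a short session.
log = [Event(1.0, "draw"), Event(3.5, "orbit"), Event(4.0, "undo"),
       Event(9.2, "draw"), Event(9.9, "erase")]
for trigger, ctx in candidate_incidents(log):
    print(f"{trigger.kind} at t={trigger.timestamp}: {len(ctx)} context events")

Flagged windows could then be paired with, for example, screen capture for retrospective review by evaluators.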

Citations

Backtracking Events as Indicators of Usability Problems in Creation-Oriented Applications
TLDR
The results from three experiments demonstrate that backtracking events can be effective indicators of usability problems in creation-oriented applications, and can yield a cost-effective alternative to traditional laboratory usability testing.
Evaluating the Usability of ERP Systems: What Can Critical Incidents Tell Us?
TLDR
A laboratory-based empirical usability evaluation of a popular ERP system was conducted using both user-reported and expert-observed critical incidents, arguing that this approach yields a more detailed and representative view of ERP usability problems than that provided by expert evaluations alone, while being less dependent on users’ memories than interview-based studies.
FeedLack detects missing feedback in web applications
TLDR
This paper presents FeedLack, a tool that checks web applications for one class of usability problems, missing feedback, by enumerating control-flow paths originating from user input and identifying paths that lack output-affecting code.
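In toy form, the stated analysis might look like the following. The graph shape, node names, and helper function are invented for illustration; FeedLack itself analyzes JavaScript, and this sketch is not its implementation.

# Toy illustration: enumerate acyclic control-flow paths from an input
# handler and flag any path containing no output-affecting node.
def paths_lacking_feedback(cfg, entry, affects_output):
    """cfg: node -> list of successor nodes; entry: handler entry node;
    affects_output: set of nodes that modify visible output.
    Returns paths from entry that never touch an output-affecting node."""
    silent = []

    def walk(node, path):
        path = path + [node]
        succs = [s for s in cfg.get(node, []) if s not in path]  # avoid cycles
        if not succs:  # path ends here
            if not any(n in affects_output for n in path):
                silent.append(path)
        for s in succs:
            walk(s, path)

    walk(entry, [])
    return silent

# Hypothetical handler: one branch updates the DOM, the other silently returns.
cfg = {"onclick": ["validate"], "validate": ["update_dom", "bail"],
       "update_dom": [], "bail": []}
print(paths_lacking_feedback(cfg, "onclick", {"update_dom"}))
# -> [['onclick', 'validate', 'bail']]

Each silent path is a candidate missing-feedback problem: a way the user's input can be consumed without any visible response.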
ERP Usability Issues from the User and Expert Perspectives
TLDR
This study investigates how negative "critical incidents" encountered by users can improve knowledge and understanding of ERP usability problems and concludes that augmenting user-system interactions with expert observations yields a deeper understanding of the types of usability issues that must be addressed.
An Investigation of Metrics for the In Situ Detection of Software Expertise
TLDR
The results show significant correlations between metrics calculated from in situ usage logs and task-based expertise assessments from a laboratory study; the implications of these results, and how future software applications might measure and leverage knowledge of their users' expertise, are discussed.
Characterizing the usability of interactive applications through query log analysis
TLDR
This paper introduces CUTS, an automated process for harvesting, ordering, labeling, filtering, and grouping search queries related to a given product; the resulting data can be assembled in minutes, is timely, has a high degree of ecological validity, and is arguably less prone to self-selection bias than data gathered via traditional usability methods.
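A minimal sketch of just the filtering-and-grouping step, under invented data: the function name, normalization scheme, and query strings below are assumptions, and CUTS's actual pipeline also harvests, orders, and labels queries.

# Hypothetical grouping step: keep queries mentioning a product, strip the
# product name, and count the remaining normalized intents.
from collections import Counter

def group_queries(queries, product):
    groups = Counter()
    for q in queries:
        q = q.lower().strip()
        if product in q:
            intent = q.replace(product, "").strip()
            groups[intent] += 1
    return groups

queries = ["GIMP how to crop", "gimp crop image", "firefox crashes",
           "gimp how to crop"]
print(group_queries(queries, "gimp"))
# Counter({'how to crop': 2, 'crop image': 1})

Frequent intents such as "how to crop" would then serve as signals of tasks users struggle to accomplish in the interface.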
Identifying emergent behaviours from longitudinal web use
TLDR
This work employs a remote capture tool that provides longitudinal, low-level interaction data; it is easily deployable into any Web site, enabling in-the-wild deployments, and is completely unobtrusive.
Experiences with usability testing: Effects of thinking aloud and moderator presence
TLDR
A significant effect of moderator presence is found in users' subjective ratings: users with a moderator next to them rate the system significantly higher than participants performing alone.
Evaluating the collaborative critique method
We introduce a new usability walkthrough method called Collaborative Critique (CC), inspired by the human-computer collaboration paradigm of system-user interaction. This method applies a…
Looking Back: Retrospective Study Methods for HCI
TLDR
In this chapter the think-aloud protocol and retrospective cued recall methods are illustrated in the domain of “searching on the Internet,” but these methods are broadly applicable.

References

Showing 1-10 of 32 references
Remote evaluation for post-deployment usability improvement
TLDR
A cost-effective remote usability evaluation method is presented, based on real users self-reporting critical incidents encountered in real tasks performed in their normal working environments; results show that users with only brief training can identify, report, and rate the severity level of their own critical incidents.
Damaged Merchandise? A Review of Experiments That Compare Usability Evaluation Methods
TLDR
In this review, the designs of 5 experiments that compared usability evaluation methods (UEMs) are examined, showing that small problems in the way these experiments were designed and conducted call into serious question what the field thought it knew regarding the efficacy of various UEMs.
The user action framework: a reliable foundation for usability engineering support tools
TLDR
High reliability, in terms of agreement among users on what the User Action Framework means and how it is used, is essential to its role as a common foundation for usability engineering support tools; this reliability is described and supported with strongly positive results from a summative reliability study.
An Evaluation of Critical Incidents for Software Documentation Design
TLDR
The development and validation of critical incidents as an effective tool for the incorporation of end-user feedback into the simultaneous design and evaluation of both online and hardcopy documentation is investigated.
Contemporaneous versus Retrospective User-Reported Critical Incidents in Usability Evaluation
TLDR
Retrospective reporting enables controlled comparisons of user-reported and expert-reported methods, since session recordings can be shown to multiple reviewers, and allows incidents to be collected without disrupting traditional usability measures such as time to complete a task.
Comparative usability evaluation: critical incidents and critical threads
TLDR
This paper discusses how the earlier claims analysis was used to orient and simplify the authors' current evaluation efforts, and extends this work to the comparative usability analysis of a related artifact.
A mathematical model of the finding of usability problems
For 11 studies, we find that the detection of usability problems as a function of the number of users tested or heuristic evaluators employed is well modeled as a Poisson process. The model can be used…
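The model referenced here is commonly written in the following form; this is a reconstruction of the well-known finding-rate formula, with N the total number of problems and \lambda the mean per-user detection probability, and the paper's own notation may differ:

% Expected number of distinct problems found after testing n users,
% assuming each problem is detected independently with probability \lambda:
\[
  \mathrm{Found}(n) \;=\; N\left(1 - (1 - \lambda)^{n}\right)
\]

With the often-quoted value of \lambda around 0.31, five users would be expected to uncover about 1 - 0.69^5, roughly 84% of the problems, which is the usual justification for small-sample usability tests.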
The evaluator effect in usability tests
TLDR
In this study, four evaluators analyzed four videotaped usability test sessions; a substantial evaluator effect was found, calling the reliability of usability tests into question.
Usability testing: what have we overlooked?
TLDR
Evidence is provided suggesting that the focus of usability testing be shifted from recruiting more participants to broadening task coverage, as no significant correlation was found between the number of test users and the percentage of problems, or of new problems, found.
Remote evaluation: the network as an extension of the usability laboratory
TLDR
Traditional user interface evaluation is usually conducted in a laboratory where users are observed directly by evaluators; this work considers methods for remote usability evaluation, wherein the evaluator is separated in space and/or time from the user.