Corpus ID: 22026778

Usability Problem Description and the Evaluator Effect in Usability Testing

@inproceedings{Capra2006UsabilityPD,
  title={Usability Problem Description and the Evaluator Effect in Usability Testing},
  author={Miranda Galadriel Capra},
  year={2006}
}
Previous usability evaluation method (UEM) comparison studies have noted an evaluator effect on problem detection in heuristic evaluation, with evaluators differing in problems found and problem severity judgments. There have been few studies of the evaluator effect in usability testing (UT), task-based testing with end-users. UEM comparison studies focus on counting usability problems detected, but we also need to assess the content of usability problem descriptions (UPDs) to more fully…

Citations
Comparing Usability Problem Identification and Description by Practitioners and Students
There was no difference in the number of problems reported by students and practitioners, but their ratings for following several of the guidelines did differ, providing a more complete assessment of usability reports.
Assessing the reliability, validity and acceptance of a classification scheme of usability problems (CUP)
CUP reliability results indicated that the expertise and experience of raters are critical factors for assessing reliability consistently, especially for the more complex attributes, and that training and context are needed when applying classification schemes.
The playthrough evaluation framework : reliable usability evaluation for video games
This thesis presents the playthrough evaluation framework, a novel framework for the reliable usability evaluation of first-person shooter console video games. The framework includes playthrough…
Barefoot usability evaluations
Two in-depth empirical studies of supporting software development practitioners by training them to become barefoot usability evaluators show that the practitioners, after 30 hours of training, obtained considerable ability in identifying usability problems, and that this approach revealed a high level of downstream utility.
Reporting Usability Defects: A Systematic Literature Review
The results of this systematic literature review show that usability defect reporting processes suffer from a number of limitations, including mixed data, inconsistent terms and values for usability defect data, and insufficient attributes for classifying usability defects.
Developer Driven and User Driven Usability Evaluations
A comprehensive literature study of research in this area, analyzing 129 papers in terms of research focus, empirical basis, types of training participants, and training costs, shows a need for further empirical research on the long-term effects of training, training costs, and training in user-based evaluation methods.
Consolidating usability problems with novice evaluators
Collaborative merging is shown to deflate the absolute number of usability problems (UPs) while excessively inflating the frequency of certain UP types as well as their severity ratings.
Supporting novice usability practitioners with usability engineering tools
This work introduces a tool feature, usability problem instance records, to better support novice usability practitioners, and describes a study of this feature whose results suggest it improves two aspects of novice practitioners' effectiveness: reliability and quality.
Analysis in practical usability evaluation: a survey study
This work surveyed 155 usability practitioners on the analysis phase of their latest usability evaluation and provides six recommendations for future research to better support analysis.
How Usability Defects Defer from Non-Usability Defects?: A Case Study on Open Source Projects
Usability defects are found to be resolved more slowly than non-usability defects, even when the non-usability defect reports contain less information; these promising results may be valuable for improving software development practice.

References

Showing 1-10 of 125 references
The Evaluator Effect: A Chilling Fact About Usability Evaluation Methods
It is notable that a substantial evaluator effect persists even for evaluators who apply the strict procedure of cognitive walkthrough (CW) or who observe users thinking aloud, and it is highly questionable to treat a think-aloud (TA) test with one evaluator as an authoritative statement about what problems an interface contains.
Cooperative usability testing: complementing usability tests with user-supported interpretation sessions
In Cooperative Usability Testing (CUT), test users and evaluators pool their expertise to understand the usability problems of the application under evaluation; the interpretation sessions are found to contribute important usability information compared to TA alone.
Managing the 'Evaluator Effect' in User Testing
Through detailed analysis of the data, it was possible to identify various causes of the evaluator effect, ranging from inaccuracies in logging and misheard verbal utterances to differences in interpreting user intentions.
A Comparison of Three Usability Evaluation Methods: Heuristic, Think-Aloud, and Performance Testing
The three testing methodologies were roughly equivalent in their ability to detect a core set of usability problems on a per-evaluator basis, but the heuristic and think-aloud evaluations were generally more sensitive, uncovering a broader array of problems in the user interface.
Understanding Usability Issues Addressed by Three User-System Interface Evaluation Techniques
Results showed that the cognitive walkthrough method identifies issues almost exclusively within the action specification stage, while guidelines covered more stages; all the techniques could be improved in assessing semantic distance and in addressing all stages on the evaluation side of the HCI activity cycle.
Criteria For Evaluating Usability Evaluation Methods
This article highlights specific challenges that researchers and practitioners face in comparing UEMs and provides a point of departure for further discussion and refinement of the principles and techniques used in UEM evaluation and comparison.
The Usability Problem Taxonomy: A Framework for Classification and Analysis
The Usability Problem Taxonomy (UPT) is presented, a taxonomic model in which usability problems detected in graphical user interfaces with textual components are classified from both an artifact and a task perspective.
A Practical Guide to Usability Testing
From the Publisher: In A Practical Guide to Usability Testing, the authors begin by defining usability, advocating and explaining the methods of usability engineering, and reviewing many techniques…
Towards the design of effective formative test reports
This paper defines the elements in these reports and presents early guidelines for making design decisions for a formative report, based on the business context, the relationship between author and audience, the questions the evaluation is trying to answer, and the techniques used in the evaluation.
Perspective-based Usability Inspection: An Empirical Validation of Efficacy
Inspection is a fundamental means of achieving software usability. Past research showed that current usability inspection techniques were rather ineffective. We developed perspective-based…