Comparative usability evaluation

Rolf Molich, Meghan R. Ede, Klaus Kaasgaard and Barbara Karyukin. Behaviour & Information Technology, pp. 65-74.
This paper reports on a study assessing the consistency of usability testing across organisations. Even the tasks used by most or all teams produced very different results: around 70% of the findings for each of these tasks were unique. Our main conclusion is that the simple assumption that we are all doing the same thing and getting the same results in a usability test is plainly wrong.

Comparing Usability Problem Identification and Description by Practitioners and Students

There was no difference in the number of problems reported by students and practitioners, but their reports were rated differently on several of the guidelines, which provides a more complete assessment of usability reports.

Usability Problem Reports for Comparative Studies: Consistency and Inspectability

It was found that consistency of single analyst teams varied considerably and that a method like SlimDEVAN can help in making the analysis process and findings more inspectable.

Heuristic evaluation: Comparing ways of finding and reporting usability problems

Component-Specific Usability Testing

A meta-analysis is carried out on the results of six experiments to support the claim that component-specific usability measures are on average statistically more powerful than overall usability measures when comparing different versions of a part of a system.

Are We Testing Utility? Analysis of Usability Problem Types

Usability problems and related redesign recommendations are the main outcome of usability tests, although the impact of both on the design process has been questioned. Early usability testing with a think-aloud protocol and an open task structure measures utility and usability equally well.

Making a difference: a survey of the usability profession in Sweden

The results indicate, among other things, that management support and project management support are essential for usability workers, and that they face problems such as usability and user involvement having low priority in projects.

On the performance of novice evaluators in usability evaluations

The paper suggests that when novice evaluators have to be employed for usability evaluations, and it is important to find most usability problems, parallel usability evaluations can provide overall valid and thorough results.

Making usability recommendations useful and usable

The study finds that only 14 of the 84 studied comments addressing six usability problems contained recommendations that were both useful and usable.

Describing usability problems: are we sending the right message?

An examination of the 647 individual usability comments from the fourth Comparative Usability Evaluation (CUE-4) provides evidence that authors sometimes miss the mark when describing usability problems and solutions.

An Assessment of the Usability Quality Attribute in Open Source Software

It seems, however, that the lack of a usability team in OSS (Open Source Software) products has made the OSS products less easy to use for inexperienced users.

The evaluator effect in usability tests

In this study, four evaluators analyzed four videotaped usability test sessions and detected markedly different sets of problems, showing that the evaluator effect threatens the reliability of usability tests.

The Evaluator Effect in Usability Studies: Problem Detection and Severity Judgments

Both detection of usability problems and selection of the most severe problems are subject to considerable individual variability.

The Evaluator Effect: A Chilling Fact About Usability Evaluation Methods

It is certainly notable that a substantial evaluator effect persists even for evaluators who apply the strict procedure of cognitive walkthrough (CW) or observe users thinking aloud, and it is highly questionable to treat a think-aloud (TA) test with one evaluator as an authoritative statement about which problems an interface contains.
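Hertzum and Jacobsen quantify the evaluator effect with an any-two agreement measure: the average, over all pairs of evaluators, of the ratio of shared problems to the union of their problems. A minimal sketch of that measure (the example problem sets are invented for illustration):

```python
from itertools import combinations

def any_two_agreement(problem_sets):
    """Average over all evaluator pairs of |Pi & Pj| / |Pi | Pj|,
    where Pi is the set of problems reported by evaluator i."""
    pairs = list(combinations(problem_sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Hypothetical example: three evaluators, partially overlapping findings.
evals = [{"p1", "p2", "p3"}, {"p2", "p3", "p4"}, {"p1", "p5"}]
```

An agreement of 1.0 would mean every evaluator reported exactly the same problems; the low values reported in the evaluator-effect studies above correspond to scores well below that.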

Damaged Merchandise? A Review of Experiments That Compare Usability Evaluation Methods

In this review, the designs of five experiments that compared usability evaluation methods (UEMs) are examined, showing that problems in the way these experiments were designed and conducted call into serious question what the field thought it knew about the efficacy of various UEMs.

A Practical Guide to Usability Testing

A Practical Guide to Usability Testing covers the full range of testing options, from quick studies with a few subjects to more formal tests with carefully designed controls, and includes forms you can use or modify to conduct a usability test, as well as layouts of existing labs to help you build your own.

A mathematical model of the finding of usability problems

For 11 studies, we find that the detection of usability problems, as a function of the number of users tested or heuristic evaluators employed, is well modeled as a Poisson process. The model can be used to plan the number of users or evaluators needed and to estimate the number of problems that remain undetected.
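Under this Poisson-process assumption, the expected proportion of problems found after n independent sessions takes the familiar geometric form 1 - (1 - L)^n, where L is the average probability that a single session detects a given problem. A minimal sketch (the L values below are illustrative, not from the paper):

```python
def proportion_found(lam, n):
    """Expected proportion of usability problems found after n
    independent sessions, each detecting a given problem with
    probability lam (the model's average detection rate)."""
    return 1 - (1 - lam) ** n

# With lam around 0.31 (a commonly cited figure), five users
# find roughly 84% of problems; with a lower rate such as 0.10,
# the same five users find only about 41%.
high = proportion_found(0.31, 5)
low = proportion_found(0.10, 5)
```

The sensitivity of the curve to L is exactly why the "five users" rule of thumb is contested, as the next entry notes.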

SUS: A 'Quick and Dirty' Usability Scale

This chapter describes the System Usability Scale (SUS), a reliable, low-cost usability scale that can be used for global assessments of system usability.
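Brooke's standard scoring rule converts ten 1-5 Likert responses to a 0-100 score: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Compute a SUS score from ten Likert responses (1-5 each).
    Odd-numbered items are positively worded (response - 1);
    even-numbered items are negatively worded (5 - response).
    The 0-40 sum is scaled to 0-100 by multiplying by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Example: agreeing moderately with the positive items (4) and
# disagreeing moderately with the negative ones (2) yields 75.0.
score = sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])
```

The alternating wording is deliberate: it forces respondents to read each item rather than tick the same column throughout.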

Testing web sites: five users is nowhere near enough

The study's findings differ sharply from the rules of thumb derived from earlier work by Virzi and Nielsen that are commonly viewed as "industry standards."

Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests

The Handbook of Usability Testing gives practical, step-by-step guidelines in plain English for designing and administering reliable tests that ensure people find a product easy and desirable to use.

Thinking aloud: reconciling theory and practice

Thinking-aloud protocols may be the most widely used method in usability testing, but the descriptions of this practice in the usability literature and the work habits of practitioners do not conform to the theory on which the method is based.