A mathematical model of the finding of usability problems

  • J. Nielsen, T. Landauer
  • Published 1 May 1993
  • Computer Science
  • Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems
For 11 studies, we find that the detection of usability problems as a function of the number of users tested or heuristic evaluators employed is well modeled as a Poisson process. […] For a “medium” example, we estimate that 16 evaluations would be worth their cost, with the maximum benefit/cost ratio at four.
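The discovery model above is commonly written as: the expected proportion of problems found after i independent evaluations is 1 − (1 − L)^i, where L is the probability that a single evaluation detects any given problem. A minimal sketch of that curve, using L = 0.31 purely as an illustrative value (real values vary by study):

```python
# Minimal sketch of the problem-discovery curve implied by the model:
# the expected proportion of problems found after i independent
# evaluations is 1 - (1 - L)**i. L = 0.31 is an illustrative
# per-evaluation detection probability, not a value from any one study.

def proportion_found(i, L=0.31):
    """Expected proportion of usability problems found after i evaluations."""
    return 1 - (1 - L) ** i

if __name__ == "__main__":
    for i in (1, 3, 5, 10):
        print(f"{i:2d} evaluations -> {proportion_found(i):.0%} of problems found")
```

With these assumed numbers, five evaluations already find roughly 84% of the problems, which is why the marginal return of each additional evaluation falls off so quickly.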

Figures and Tables from this paper

Estimating sample size for usability testing
This study analyzed data collected from user testing of a web application to verify the rule of thumb commonly known as the “magic number 5”, and showed that the 5-user rule significantly underestimates the sample size required to achieve reasonable levels of problem detection.
Number of Subjects in Web Usability Testing
This study offers an alternative approach employing the Sequential Probability Ratio Test (SPRT) to determine product effectiveness, and provides evidence of its usefulness in situations where determining effectiveness is the goal, at a substantial reduction in the number of users.
Effect of Level of Problem Description on Problem Discovery Rates: Two Case Studies
This analysis investigated the effect of changing the level of description of usability problems on the estimate of the problem discovery rate (p), and described a method for using p to estimate the number of problems remaining available for discovery given the constraints associated with a particular participant population, application, and set of tasks.
Determining the Effectiveness of the Usability Problem Inspector: A Theory-Based Model and Tool for Finding Usability Problems
This work developed the user action framework and a corresponding evaluation tool, the usability problem inspector (UPI), to help organize usability concepts and issues into a knowledge base, and showed that the UPI scored higher than heuristic evaluation in terms of thoroughness, validity, and effectiveness, and was consistent with cognitive walkthrough on these same measures.
A Tale of Two Studies
Two very different approaches to designing an evaluation study for the same piece of software are described, along with the differing results each produced and the authors' comments on both.
Analysis of combinatorial user effect in international usability tests
Five aspects of user effect are explored, including optimality of sample size, evaluator effect, effect of heterogeneous subgroups, performance of task variants, and efficiency of problem discovery.
Extreme Discount Usability Engineering
This paper explores the circumstances under which extremely discounted usability engineering techniques produce results that are worthwhile to researchers and practitioners. We present a method for …
Quantifying the Usability Through a Variant of the Traditional Heuristic Evaluation Process
This case study presents the results of applying a variant of the traditional heuristic evaluation process to determine the degree of usability of two web applications, and serves as a framework for specialists in this field who are interested in retrieving quantitative data in addition to the traditional usability issues identified.
A Survey of Empirical Usability Evaluation Methods (GSLIS Independent Study)
This paper aims to give the reader an overview of the major empirical usability evaluation methods (UEMs) used in practice, and to lower the cost and time required for formal laboratory testing.
Heterogeneity in the usability evaluation process
A simple statistical test for existence of heterogeneity in the process is contributed and the compound beta-binomial model is proposed to incorporate sources of heterogeneity and compared to the binomial model.


Refining the Test Phase of Usability Evaluation: How Many Subjects Is Enough?
Three experiments are reported in this paper that relate the proportion of usability problems identified in an evaluation to the number of subjects participating in that study, finding that 80% of the usability problems are detected with four or five subjects.
Finding usability problems through heuristic evaluation
Usability specialists were better than non-specialists at performing heuristic evaluation, and “double experts” with specific expertise in the kind of interface being evaluated performed even better.
User interface evaluation in the real world: a comparison of four techniques
A user interface for a software product was evaluated prior to its release by four groups, each applying a different technique: heuristic evaluation, software guidelines, cognitive walkthroughs, and usability testing.
Streamlining the Design Process: Running Fewer Subjects
An experiment conducted to determine the minimum number of subjects required for a usability test finds that four to five subjects detect 80% of the usability problems, and that additional subjects are increasingly unlikely to reveal new information.
Estimating the number of subjects needed for a thinking aloud test
  • J. Nielsen
  • Computer Science
    Int. J. Hum. Comput. Stud.
  • 1994
Two studies of using the thinking aloud method for user interface testing showed that experimenters who were not usability specialists could use the method. However, they found only 28-30% …
Iterative user-interface design
A method for developing user interfaces by refining them iteratively over several versions, which not only eliminates problems of this nature, but also allows designers to take advantage of any insights into user needs that emerge from the tests.
The usability engineering life cycle
It is shown that the most basic elements in the usability engineering model are empirical user testing and prototyping, combined with iterative design.
When Should One Stop Testing Software
We derive an optimal rule for stopping the testing of a module of software prior to release, based on the trade-off between the cost of continued testing and the expected losses due to any …
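The stopping trade-off this paper describes can be made concrete with a small sketch: keep evaluating while the expected value of the next evaluation's newly discovered problems exceeds its cost. Every parameter below (L, N, VALUE, COST) is hypothetical, chosen only to illustrate the marginal-benefit calculation; none is taken from the paper.

```python
# Hedged sketch of a marginal-benefit stopping rule: continue testing
# while the expected NEW problems from the next evaluation are worth
# more than that evaluation costs. All numbers here are hypothetical.

L = 0.31        # assumed per-evaluation problem-detection probability
N = 40          # assumed total number of problems in the interface
VALUE = 1000.0  # assumed value of fixing each discovered problem
COST = 4000.0   # assumed cost of one additional evaluation

def expected_found(i):
    """Expected number of problems discovered after i evaluations."""
    return N * (1 - (1 - L) ** i)

evaluations = 0
while (expected_found(evaluations + 1) - expected_found(evaluations)) * VALUE > COST:
    evaluations += 1
print(f"stop after {evaluations} evaluations")
```

With these particular numbers testing stops after four evaluations; raising VALUE or lowering COST pushes the stopping point later, which is exactly the cost/benefit framing of the stopping-rule literature.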
Usability inspection methods
  • J. Nielsen
  • Computer Science
    CHI 95 Conference Companion
  • 1995
Usability inspection is the generic name for a set of cost-effective ways of evaluating user interfaces to find usability problems. They are fairly informal methods and easy to use.
Cost/benefit analysis for incorporating human factors in the software lifecycle
Methodologies for improvement of the interface design, an overview of the human factors element, and cost/benefit aspects are explored.