Determining the Effectiveness of the Usability Problem Inspector: A Theory-Based Model and Tool for Finding Usability Problems

@article{Andre2003DeterminingTE,
  title={Determining the Effectiveness of the Usability Problem Inspector: A Theory-Based Model and Tool for Finding Usability Problems},
  author={Terence S. Andre},
  journal={Human Factors: The Journal of Human Factors and Ergonomics Society},
  year={2003},
  volume={45},
  pages={455--482}
}
Despite the increased focus on usability and on the processes and methods used to increase usability, a substantial amount of software is unusable and poorly designed. Much of this is attributable to the lack of cost-effective usability evaluation tools that provide an interaction-based framework for identifying problems. We developed the user action framework and a corresponding evaluation tool, the usability problem inspector (UPI), to help organize usability concepts and issues into a…
Validating the User-Centered Hybrid Assessment Tool (User-CHAT): A comparative usability evaluation
The results demonstrated that the User-CHAT attained higher effectiveness scores than the heuristic evaluation and cognitive walkthrough, suggesting that it helped evaluators identify many usability problems that actually impact users (i.e., higher thoroughness) while attenuating time and effort on issues that were not important.
Usability Information Management: Prototype for Result Exploration Based on an Empirical Analysis of Use Cases
The results of interviews with usability practitioners are presented to describe existing use cases for this type of information, and an information structure for usability information has been created.
Integration of Usability Evaluation Studies via a Novel Meta-Analytic Approach: What are Significant Attributes for Effective Evaluation?
The overall discovery rates, which are the ratios of the number of unique usability problems detected by all experiment participants to the number of usability problems that exist in the evaluated…
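The "overall discovery rate" described in this entry can be sketched as a simple ratio. The following is a minimal illustration, not the paper's own implementation; the function name and problem labels are hypothetical:

```python
# Hypothetical sketch of an overall discovery rate: the share of the
# known usability problems in a system that were detected by at least
# one experiment participant. All names here are illustrative.

def discovery_rate(found_per_participant, total_known_problems):
    """Fraction of known problems detected by at least one participant."""
    # Union the per-participant sets to count each problem only once.
    unique_found = set().union(*found_per_participant)
    return len(unique_found) / total_known_problems

# Two participants together find problems P1, P2, P3 out of 5 known ones.
print(discovery_rate([{"P1", "P3"}, {"P2", "P3"}], 5))  # 0.6
```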
A Usability Problem Inspection Tool: Development and Formative Evaluation
This research involves the development and formative evaluation of the Usability Problem Inspection tool, a cost-effective, structured, flexible usability inspection tool that uses the User Action Framework as an underlying knowledge base.
Assessing the reliability, validity and acceptance of a classification scheme of usability problems (CUP)
CUP reliability results indicated that the expertise and experience of raters are critical factors for assessing reliability consistently, especially for the more complex attributes, and that training and context are needed for applying classification schemes.
The Axiomatic Usability Evaluation Method
This chapter introduces a new usability evaluation method, the axiomatic evaluation method, which is developed based on the axiomatic design theory, a formalized design methodology that can be used…
An Infrastructure to Support Usability Problem Data Analysis
An infrastructure for usability problem data analysis is developed to address the need for better returns on usability engineering investments; it consists of four main components: a framework, a process, tools, and semantic analysis technology.
Supporting novice usability practitioners with usability engineering tools
This work introduces a tool feature, usability problem instance records, to better support novice usability practitioners and describes the results of a study of this feature, which suggest that it helps to improve two aspects of novice practitioners' effectiveness: reliability and quality.
Analysis in practical usability evaluation: a survey study
This work surveys 155 usability practitioners on the analysis in their latest usability evaluation and provides six recommendations for future research to better support analysis.
Usability assessment methods
Usability assessment methods evolved from traditional human factors/ergonomics techniques beginning in the early 1980s. Following a brief historical introduction, we describe the four major…

References

Showing 1-10 of 156 references
The user action framework: a reliable foundation for usability engineering support tools
Describes how high reliability, in terms of agreement among users on what the User Action Framework means and how it is used, is essential for its role as a common foundation for the tools, supported by strongly positive results of a summative reliability study.
Product usability and process improvement based on usability problem classification
A new taxonomic model (the Usability Problem Taxonomy) is presented which contributes to both product and process improvement and can be used to generate higher quality problem descriptions and to group those problem descriptions prior to prioritization and correction.
Understanding Usability Issues Addressed by Three User-System Interface Evaluation Techniques
Results showed that the cognitive walkthrough method identifies issues almost exclusively within the action specification stage, while guidelines covered more stages; all the techniques could be improved in assessing semantic distance and addressing all stages on the evaluation side of the HCI activity cycle.
A Pair of Techniques for Effective Interface Evaluation: Cognitive Walkthroughs and Think-Aloud Evaluations
This study shows that cognitive walkthroughs find the most severe usability problems along with desirable new features and functionality, while think-aloud evaluations find all types of usability problems.
The Usability Problem Taxonomy: A Framework for Classification and Analysis
The Usability Problem Taxonomy (UPT) is presented, a taxonomic model in which usability problems detected in graphical user interfaces with textual components are classified from both an artifact and a task perspective.
A Comparison of Three Usability Evaluation Methods: Heuristic, Think-Aloud, and Performance Testing
The three testing methodologies were roughly equivalent in their ability to detect a core set of usability problems on a per-evaluator basis, but the heuristic and think-aloud evaluations were generally more sensitive, uncovering a broader array of problems in the user interface.
Criteria For Evaluating Usability Evaluation Methods
This article highlights specific challenges that researchers and practitioners face in comparing UEMs and provides a point of departure for further discussion and refinement of the principles and techniques used to approach UEM evaluation and comparison.
The Effectiveness of Usability Evaluation Methods: Determining the Appropriate Criteria
It is found that studies do not always provide the appropriate descriptive statistics to make solid conclusions, especially in terms of validity, and some possible ways to address criterion deficiency and criterion contamination are provided.
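The effectiveness criteria discussed in the two entries above are often framed in terms of thoroughness and validity. A minimal sketch, assuming the commonly used definitions (thoroughness as real problems found over real problems present, validity as real problems found over total problems reported, and effectiveness as their product); the function names and numbers are illustrative, not taken from the papers:

```python
# Hypothetical sketch of common UEM comparison criteria, assuming the
# usual ratio definitions. All names and values are illustrative.

def thoroughness(real_found, real_present):
    """Share of the real usability problems that the method found."""
    return real_found / real_present

def validity(real_found, total_reported):
    """Share of the method's reported problems that are real."""
    return real_found / total_reported

def effectiveness(real_found, real_present, total_reported):
    """Combined score: thoroughness multiplied by validity."""
    return (thoroughness(real_found, real_present)
            * validity(real_found, total_reported))

# A method reports 30 problems, 12 of which are among the 20 real ones.
print(round(effectiveness(12, 20, 30), 3))  # 0.6 * 0.4 = 0.24
```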
A Cost-Effective Evaluation Method for Use by Designers
This paper argues that quantitative experimental methods may not be practical at early stages of design, but a behavioural record used in conjunction with think-aloud protocols can provide a designer with the information needed to evaluate an early prototype in a cost-effective manner.
Heuristic Walkthroughs: Finding the Problems Without the Noise
  • A. Sears
  • Computer Science
  • Int. J. Hum. Comput. Interact.
  • 1997
A new technique is described that combines the benefits of heuristic evaluations, cognitive walkthroughs, and usability walkthroughs to provide more structure than heuristic evaluation but less than cognitive walkthroughs.