A Rigorous View of Mode Confusion

@inproceedings{Bredereke2002ARV,
  title={A Rigorous View of Mode Confusion},
  author={Jan Bredereke and Axel Lankenau},
  booktitle={SAFECOMP},
  year={2002}
}
Mode confusion is recognised as a significant safety concern, and not only in aviation psychology. In our modelling approach, we extend the commonly used distinction between the machine and the user's mental model of it by explicitly separating these and their safety-relevant abstractions. Furthermore, we show that distinguishing three different interfaces during the design phase reduces the potential for mode confusion. A result is a new classification of mode confusions by cause, leading to a…
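The core idea above, comparing the machine's behaviour against the user's coarser mental model, can be illustrated with a minimal sketch. Everything here is invented for illustration (the mode names, the toy autopilot transitions, and the divergence check are assumptions, not the paper's formalism): both models are run on the same event trace, and mode confusion shows up as the first point where the machine's actual mode diverges from the mode the mental model predicts.

```python
# Hypothetical sketch: mode confusion as divergence between the machine's
# actual mode and the mode predicted by the user's simplified mental model.
# Both models are maps (mode, event) -> next mode; unknown events leave the
# mode unchanged. All names below are invented for illustration.

MACHINE = {
    ("MANUAL", "engage"): "AUTO",
    ("AUTO", "disengage"): "MANUAL",
    ("AUTO", "alt_capture"): "ALT_HOLD",   # behaviour the user may not know
    ("ALT_HOLD", "disengage"): "MANUAL",
}

MENTAL = {  # the user's coarser abstraction: unaware of ALT_HOLD
    ("MANUAL", "engage"): "AUTO",
    ("AUTO", "disengage"): "MANUAL",
}

def find_confusion(machine, mental, start, events):
    """Run both models on the same event trace; report the first divergence."""
    m = u = start
    for i, ev in enumerate(events):
        m = machine.get((m, ev), m)   # actual machine mode
        u = mental.get((u, ev), u)    # mode the user believes the machine is in
        if m != u:
            return i, ev, m, u        # mode confusion: belief and reality split
    return None                       # trace is confusion-free

print(find_confusion(MACHINE, MENTAL, "MANUAL",
                     ["engage", "alt_capture", "disengage"]))
# → (1, 'alt_capture', 'ALT_HOLD', 'AUTO')
```

The uncommanded `alt_capture` transition is exactly the kind of "automation surprise" the abstraction mismatch produces: the machine is in `ALT_HOLD` while the user still believes it is in `AUTO`.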
An approach to formal verification of human–computer interaction
TLDR
This paper focuses on system malfunctions due to human actions, and demonstrates that the verification methodology can detect a variety of realistic, potentially erroneous actions, which emerge from the combination of a poorly designed device and cognitively plausible human behaviour.
Modelling the User
TLDR
This research surveys work on the formal modelling of user behaviour (generic user modelling) as a form of usability evaluation that looks for design flaws leading to systematic human error, and combines a user model with a device model to obtain a model of the system as a whole.
Automatic Detection of Potential Automation Surprises for ADEPT Models
TLDR
The proposed analysis method is based on a conformance relation between the model of the actual system and a mental model of it, that is, its behavior as perceived by the operator, and can automatically generate a so-called minimal full-control mental model for a given system.
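The "minimal full-control mental model" mentioned above can be sketched as state-machine minimisation: machine states that the operator cannot tell apart through any observation or event sequence are merged into one mental-model state. The sketch below is a generic partition-refinement routine under invented names and a toy automaton; it is not the ADEPT algorithm itself, only an illustration of the underlying idea.

```python
# Hypothetical sketch: derive a minimal mental model by merging machine states
# that are indistinguishable to the operator. States sharing an observable
# output start in the same block; blocks are split while members disagree on
# the blocks of their successors (classic partition refinement).

def minimise(states, events, delta, obs):
    """delta: (state, event) -> state; obs: state -> observable output.
    Returns a map state -> block id; equal ids mean 'merged in the mental
    model'."""
    # Initial partition: states with the same observable output share a block.
    labels, block = {}, {}
    for s in states:
        labels.setdefault(obs[s], len(labels))
        block[s] = labels[obs[s]]
    changed = True
    while changed:
        changed = False
        # Signature of a state: the block of its successor under each event.
        sig = {s: tuple(block[delta[(s, e)]] for e in events) for s in states}
        keys, new_block = {}, {}
        for s in states:
            k = (block[s], sig[s])
            keys.setdefault(k, len(keys))
            new_block[s] = keys[k]
        if new_block != block:
            block, changed = new_block, True
    return block

# Toy example (invented): A and B look alike and behave alike, so they merge;
# D looks like them but behaves differently, so it stays separate.
states = ["A", "B", "C", "D"]
events = ["x"]
delta = {("A", "x"): "C", ("B", "x"): "C", ("C", "x"): "C", ("D", "x"): "D"}
obs = {"A": "AUTO", "B": "AUTO", "C": "MANUAL", "D": "AUTO"}
print(minimise(states, events, delta, obs))
```

A mental model built on these merged blocks is "full control" in the sense that it never lumps together states the operator would need to distinguish to predict the machine's next mode.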
On Preventing Telephony Feature Interactions which are Shared-Control Mode Confusions
TLDR
It is demonstrated that many undesired telephony feature interactions are also shared-control mode confusions, and several measures for preventing mode confusion can be applied to this kind of feature interaction.
Formal Analysis of Multiple Coordinated HMI Systems
TLDR
This chapter discusses two novel frameworks developed at NASA for the design and analysis of human–machine interaction problems and captures the complexity of modern HMI systems by taking a multi-agent approach to modeling and analyzing multiple human agents interacting with each other as well as with automation.
A formal framework for design and analysis of human-machine interaction
TLDR
This paper presents a methodology and an associated framework for using these and other formal-method-based algorithms to support the design of HMI systems; the framework can be used for modelling HMI systems and analysing models against HMI vulnerabilities.
A Safe and Robust Approach to Shared-Control via Dialogue
TLDR
A dialogue centric cognitive control architecture is presented, which utilizes both agent-oriented programming and formal methods and describes a formally modeled dialogue manager that sits at the heart of the control system.
State Event Models for the Formal Analysis of Human-Machine Interactions
TLDR
This paper describes how the models that the HMI analysis framework handles are extended to allow adequate representation of ADEPT models, and provides a property-preserving reduction from these extended models to LTSs, to enable application of the LTS-based formal analysis algorithms.
Justifying Usability Design Rules Based on a Formal Cognitive Model
TLDR
This work examines how a formal cognitive architecture, encapsulating results from the cognitive sciences, can be used to justify design rules both semi-formally and formally.
...

References

Showing 1–10 of 50 references
An automated method to detect potential mode confusions
  • J. Rushby, J. Crow, E. Palmer
  • Computer Science
    Gateway to the New Millennium. 18th Digital Avionics Systems Conference. Proceedings (Cat. No.99CH37033)
  • 1999
TLDR
This work is interested in whether a design is prone to mode confusions, and for this purpose it is more useful to compare the design against a generic mental model than against that of an individual.
Modes in Human-Machine Systems: Constructs, Representation, and Classification
TLDR
This article surveys and discusses human interaction with automated control systems that employ modes, and uses the Statecharts language to describe the three types of modes commonly found in modern control systems: interface, functional, and supervisory modes.
Beyond Mode Error: Supporting Strategic Knowledge Structures to Enhance Cockpit Safety
TLDR
It is shown that errors which occur within such complexity cannot easily be described in terms of individual tasks and their component actions; it is asserted that the knowledge gap identified arises from a failure in such a strategy, and a new design solution is developed based on this re-classification.
A formal methods approach to the analysis of mode confusion
TLDR
This paper will explore how formal models and analyses can be used to help eliminate mode confusion from flight deck designs and at the same time increase confidence in the safety of the implementation.
Identifying mode confusion potential in software design
TLDR
The goal of the research is to create and evaluate a methodology for integrated design of complex systems, including design of the automation and the human tasks, that minimizes human error through appropriate system and operator task design.
Unmasking Mode Errors: A New Application of Task Knowledge Principles to the Knowledge Gaps in Cockpit Design
TLDR
It is shown that this is a fundamental misclassification that reveals a deeper problem, identified as a task knowledge gap [Johnson 1992] between operator and system, and a new design solution is suggested based on this re-classification.
Analyzing Software Specifications for Mode Confusion Potential
TLDR
This paper describes an approach to detecting error-prone automation features early in the development process while significant changes can still be made to the conceptual design of the system.
Analyzing Mode Confusion via Model Checking
TLDR
It is investigated whether state-exploration techniques, e.g. model checking, are better able to achieve this task than theorem proving, and the verification tools Murφ, SMV, and Spin are compared for the specific application.
Deriving human-error tolerance requirements from tasks
TLDR
A task notation based on CSP is described that helps elicit human-error tolerance requirements expressed as functional properties of the system; it is used to analyse an engine fire recovery procedure in order to derive such requirements.
Models and Mechanized Methods that Integrate Human Factors into Automation Design
TLDR
This paper uses a simple example to demonstrate how automated finite-state techniques can be used to explore autopilot design options, and suggests additional applications for this technique, including the validation of empirically-derived, minimal mental models of autopilot behavior.
...