Towards accurate and practical predictive models of active-vision-based visual search

@inproceedings{Kieras2014TowardsAA,
  title={Towards accurate and practical predictive models of active-vision-based visual search},
  author={David E. Kieras and Anthony J. Hornof},
  booktitle={Proceedings of the SIGCHI Conference on Human Factors in Computing Systems},
  year={2014}
}
  • D. Kieras, A. Hornof
  • Published 26 April 2014
  • Computer Science
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Being able to predict the performance of interface designs using models of human cognition and performance is a long-standing goal of HCI research. This paper presents recent advances in cognitive modeling that permit increasingly realistic and accurate predictions for visual human-computer interaction tasks such as icon search, by incorporating an "active vision" approach that emphasizes eye movements to visual features based on the availability of features in relationship to the point of…
Approximate optimal control model for visual search tasks
TLDR
A deep reinforcement learning algorithm is used to solve the Partially Observable Markov Decision Process (POMDP), and the results show that visual search strategies can be explained as an approximately optimal adaptation to the information-processing constraints, utility, and ecology of the task.
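The POMDP framing described above can be illustrated with a toy sketch: a greedy Bayesian observer (a much simpler stand-in for the paper's deep-RL solver) fixates the most probable target location and updates its belief from noisy foveal observations. All names, parameters, and the termination rule here are illustrative assumptions, not the paper's model.

```python
import random

def simulate_search(n_items=8, p_correct=0.9, seed=0, max_fixations=50):
    """Toy POMDP visual search: a belief over target locations is updated
    by Bayes' rule after each noisy foveal observation, and the next
    fixation greedily goes to the most probable location."""
    rng = random.Random(seed)
    target = rng.randrange(n_items)
    belief = [1.0 / n_items] * n_items
    for t in range(1, max_fixations + 1):
        fix = max(range(n_items), key=lambda i: belief[i])  # greedy fixation
        # Noisy observation: reports the truth with probability p_correct.
        is_target = (fix == target)
        obs = is_target if rng.random() < p_correct else not is_target
        # Bayesian update: likelihood depends on whether location i was fixated.
        for i in range(n_items):
            like = p_correct if ((i == fix) == obs) else 1.0 - p_correct
            belief[i] *= like
        z = sum(belief)
        belief = [b / z for b in belief]
        if obs and belief[fix] > 0.99:  # confident enough: respond
            return fix, target, t
    return max(range(n_items), key=lambda i: belief[i]), target, max_fixations
```

Raising the confidence threshold trades speed for accuracy, which is the tradeoff the cited models optimize; here it is simply a fixed constant.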
An Adaptive Model of Gaze-based Selection
TLDR
A computational model of the control of eye movements in gaze-based selection is presented as an optimal sequential planning problem bounded by the limits of the human visual and motor systems, and reinforcement learning is used to approximate optimal solutions.
University of Birmingham An adaptive model of gaze-based selection
Gaze-based selection has received significant academic attention over a number of years. While advances have been made, it is possible that further progress could be made if there were a deeper
The adaptation of visual search to utility, ecology and design
Adaptive feature guidance: Modelling visual search with graphical layouts
Visual Search Without Selective Attention: A Cognitive Architecture Account
  • D. Kieras
  • Psychology, Biology
    Top. Cogn. Sci.
  • 2019
TLDR
A cognitive architectural model is presented that shows the interaction between visual mechanisms and task strategy and represents a more comprehensive and fruitful approach to visual search than the dominant theory.
Human Visual Search as a Deep Reinforcement Learning Solution to a POMDP
TLDR
A new computational model is reported that shows how strategies for visual search are an emergent consequence of perceptual/motor constraints and approximately optimal adaptation; the model solves a Partially Observable Markov Decision Process (POMDP) using deep Q-learning to acquire strategies that optimise the tradeoff between speed and accuracy.
Modelling Learning of New Keyboard Layouts
TLDR
A model was designed to predict how users learn to locate keys on a keyboard: initially relying on visual short-term memory but then transitioning to recall-based search. This allows predicting search times and visual search patterns for completely and partially new layouts.
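The transition from visual search to direct recall described above can be caricatured as a mixture model in which the probability of recall grows with the number of prior exposures to a key. The learning-curve form and all time constants below are invented for illustration and are not the paper's fitted parameters.

```python
import math

def locate_time(exposures, search_time=1.2, recall_time=0.4, rate=0.5):
    """Expected time to locate a key: the probability of direct recall
    grows with prior encounters (exponential learning curve); otherwise
    the model falls back on slower visual search."""
    p_recall = 1.0 - math.exp(-rate * exposures)
    return p_recall * recall_time + (1.0 - p_recall) * search_time
```

Expected locate time starts at the pure search time for a novel key and asymptotes toward the recall time with practice, giving the declining search-time curve such a model predicts.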
Cognitive architecture enables comprehensive predictive models of visual search
Abstract With a simple demonstration model, Hulleman & Olivers (H&O) effectively argue that theories of visual search need an overhaul. We point to related literature in which visual search is
A Cognitive Model of How People Make Decisions Through Interaction with Visual Displays
TLDR
A cognitive model of how people make decisions through interaction based on the assumption that interaction for decision making is an example of a Partially Observable Markov Decision Process (POMDP), in which observations are made by limited perceptual systems that model human foveated vision and decisions are made by strategies that are adapted to the task.

References

Showing 1-10 of 42 references
A Computational Model of “Active Vision” for Visual Search in Human–Computer Interaction
TLDR
A detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as “active vision” is described, built using the Executive Process-Interactive Control cognitive architecture.
Modeling the Visual Search of Displays: A Revised ACT-R Model of Icon Search Based on Eye-Tracking Data
TLDR
An eye-tracking study of the task showed that participants rarely refixated icons that they had previously examined and used an efficient search strategy of examining the distractor icons nearest to their current point of gaze; these findings were integrated into an ACT-R model of the task using EMMA and a "nearest" strategy.
A Study of Visual Search Using Eye Movement Recordings: Validation Studies.
TLDR
It was concluded that, for accurate prediction of search times in complex situations, as represented by information to be located in maps, more information is required about two aspects of the search process: the effect of target visibility and the peripheral discriminability of alphanumerics.
Predictive human performance modeling made easy
TLDR
This paper describes a development system in which designers generate predictive cognitive models of user behavior simply by demonstrating tasks on HTML mock-ups of new interfaces, enabling more rapid development of predictive models, with more accurate results, than previously published models of these tasks.
Modeling visual search of displays of many objects: The role of differential acuity and fixation memory
TLDR
A cognitive architecture model is presented that accounts for the effects using differential visual acuity and fixation memory provided by a persistent visual store that provides an approximate upper bound on the duration of fixation memory, and some approximate acuity functions for modeling visual search.
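The differential-acuity idea above can be sketched as an availability test: an object is perceptually available when its size exceeds a threshold that grows with eccentricity. The quadratic form and the coefficients below are illustrative assumptions, not the model's fitted acuity functions.

```python
def is_available(eccentricity_deg, size_deg, a=0.1, b=0.035):
    """Hedged sketch of an acuity function: an object's visual property is
    available when its size (degrees of visual angle) exceeds a threshold
    that grows quadratically with eccentricity from the point of gaze.
    Coefficients a and b are placeholders, not fitted values."""
    threshold = a + b * eccentricity_deg ** 2
    return size_deg >= threshold
```

Under such a function, small objects are identifiable only near fixation, which is what forces the model to search with eye movements rather than identify the whole display at once.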
The persistent visual store as the locus of fixation memory in visual search tasks
  • D. Kieras
  • Psychology
    Cognitive Systems Research
  • 2011
Visual Availability and Fixation Memory in Modeling Visual Search using the EPIC Architecture
A set of eye movement data from a visual search task using realistically complex and numerous stimuli was modeled with the EPIC architecture, which provides direct support for oculomotor constraints
Automating Human-Performance Modeling at the Millisecond Level
TLDR
The tool described here provides an engine for CPM-GOMS that may facilitate computational modeling of human performance at the millisecond level, taking advantage of reusable behavior templates and their efficacy for generating zero-parameter a priori predictions of complex human behavior.
Cognitive processes in eye guidance
1. Visual extraction processes and regressive saccades in reading
2. Sources of information for the programming of short- and long-range regressions during reading
3. Word skipping: implications for
Visual Search has Memory
TLDR
Monitoring subjects' eye movements during a visual search task suggested that visual search does have memory, and provided no evidence that fixations were guided by amnesic covert scans that scouted the environment for new items.