Predicting human visuomotor behaviour in a driving task

@article{Johnson2014PredictingHV,
  title={Predicting human visuomotor behaviour in a driving task},
  author={Leif M. Johnson and Brian T. Sullivan and Mary M. Hayhoe and Dana H. Ballard},
  journal={Philosophical Transactions of the Royal Society B: Biological Sciences},
  year={2014},
  volume={369}
}
The sequential deployment of gaze to regions of interest is an integral part of human visual function. Owing to its central importance, decades of research have focused on predicting gaze locations, but there has been relatively little formal attempt to predict the temporal aspects of gaze deployment in natural multi-tasking situations. We approach this problem by decomposing complex visual behaviour into individual task modules that require independent sources of visual information for control… 
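
The decomposition the abstract describes can be made concrete with a small arbitration loop. Below is a minimal Python sketch, in the spirit of the Sprague and Ballard line of modelling this paper builds on: each task module maintains its own state estimate, uncertainty grows while a module is not fixated, and gaze goes to the module whose reward-weighted uncertainty would be most costly to leave unresolved. Module names and parameter values are illustrative assumptions, not the paper's implementation.

```python
class TaskModule:
    """One subtask (e.g. lane keeping) with its own source of visual information."""
    def __init__(self, name, reward_weight, noise_growth):
        self.name = name
        self.reward_weight = reward_weight  # task priority (assumed value)
        self.noise_growth = noise_growth    # variance gained per unfixated step
        self.variance = 1.0                 # current state-estimate uncertainty

    def expected_cost(self):
        # Expected loss from acting on a stale estimate: reward-weighted variance.
        return self.reward_weight * self.variance

def deploy_gaze(modules, steps=10):
    """Sequentially assign gaze to the module with the highest expected cost."""
    schedule = []
    for _ in range(steps):
        target = max(modules, key=lambda m: m.expected_cost())
        schedule.append(target.name)
        for m in modules:
            if m is target:
                m.variance = 0.1            # fixation resolves most uncertainty
            else:
                m.variance += m.noise_growth
    return schedule

modules = [TaskModule("lane_keeping", 1.0, 0.5),
           TaskModule("speed_control", 0.4, 0.3),
           TaskModule("car_following", 0.7, 0.4)]
print(deploy_gaze(modules))
```

In this toy run the high-priority lane-keeping module is sampled most often, the kind of qualitative temporal pattern the paper sets out to predict quantitatively.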

Citations

Modeling Task Control of Eye Movements

TLDR
Advances in eye tracking during natural vision, together with the development of probabilistic modeling techniques, have provided insight into how the cognitive agenda might be included in the specification of fixations, and specific examples can reveal general principles in gaze control.

Classification of Human Gaze in Spatial Guidance and Control

TLDR
With the gaze models, the HMM method accurately identifies gaze patterns in gaze trajectories and enables three types of applications, including surgical skill analysis in surgery tasks.
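
As a rough illustration of HMM-based segmentation of gaze trajectories, the sketch below fits a Gaussian HMM to a gaze-speed feature so that low- and high-velocity regimes (fixation-like vs. saccade-like) fall into separate hidden states. It assumes the hmmlearn package; the feature choice and two-state structure are simplifications, not the cited paper's exact method.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency: pip install hmmlearn

# Synthetic gaze trace: alternating slow (fixation-like) and fast segments.
rng = np.random.default_rng(0)
gaze = np.concatenate([np.cumsum(rng.normal(0, s, size=(50, 2)), axis=0)
                       for s in (0.1, 5.0, 0.1, 5.0)])

# Feature: per-sample gaze speed (magnitude of the velocity vector).
speed = np.linalg.norm(np.diff(gaze, axis=0), axis=1).reshape(-1, 1)

# Two hidden states separate the low- and high-speed regimes.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(speed)
states = model.predict(speed)  # Viterbi decoding of the state sequence
print(states[:60])
```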

Improving Autonomous Driving Agents using Bio-Inspired Visual Attention

TLDR
It is found that a dual-branch architecture which processes both raw and attention-masked images substantially outperforms all other models in terms of average prediction error, validating the hypothesis that a visual attention model learned from human data can bolster the performance of machine learning agents in complex settings.
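
A minimal PyTorch sketch of a dual-branch design of the kind this summary describes: one branch sees the raw frame, the other an attention-masked copy, and their features are fused before the control head. Layer sizes and fusion-by-concatenation are assumptions for illustration, not the cited architecture.

```python
import torch
import torch.nn as nn

def conv_branch():
    # Small feature extractor for one image stream.
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class DualBranchDriver(nn.Module):
    def __init__(self, n_actions=2):  # e.g. steering and acceleration
        super().__init__()
        self.raw_branch = conv_branch()     # processes the raw frame
        self.masked_branch = conv_branch()  # processes the attention-masked frame
        self.head = nn.Linear(32 + 32, n_actions)

    def forward(self, frame, attention_map):
        masked = frame * attention_map      # attention map assumed in [0, 1]
        features = torch.cat([self.raw_branch(frame),
                              self.masked_branch(masked)], dim=1)
        return self.head(features)

model = DualBranchDriver()
frame = torch.rand(1, 3, 96, 96)
attn = torch.rand(1, 1, 96, 96)  # stand-in for a learned attention map
print(model(frame, attn).shape)  # torch.Size([1, 2])
```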

Effects of an Active Visuomotor Steering Task on Covert Attention

In complex dynamic tasks such as driving, it is essential to be aware of potentially important targets in peripheral vision. While eye tracking methods in various driving tasks have provided much…

Gaze Strategies in Driving–An Ecological Approach

Human performance in natural environments is deeply impressive, and still much beyond current AI. Experimental techniques, such as eye tracking, may be useful to understand the cognitive basis of…

AGIL: Learning Attention from Human for Visuomotor Tasks

TLDR
This work proposes the AGIL (Attention Guided Imitation Learning) framework, a deep neural network that predicts human gaze positions and visual attention with high accuracy and significantly improves action prediction accuracy and task performance.
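
The two-stage idea (predict a human gaze map, then let it gate what the imitation policy sees) can be illustrated by the masking step alone. The Gaussian foveal mask below is a common simplification; AGIL's actual networks and mask construction are described in the cited paper, and the function here is an assumed stand-in.

```python
import numpy as np

def gaze_mask(frame, gaze_xy, sigma=10.0):
    """Weight a frame by a Gaussian 'foveal' mask centred on predicted gaze.

    frame:   H x W x C image array
    gaze_xy: (x, y) gaze position in pixels, e.g. from a gaze-prediction network
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.exp(-((xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2)
                  / (2 * sigma ** 2))
    return frame * mask[..., None]  # masked frame fed to the action network

frame = np.random.rand(84, 84, 3)
masked = gaze_mask(frame, gaze_xy=(40, 30))
print(masked.shape)  # (84, 84, 3)
```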

What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality?

Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two…

Systematic Observation of an Expert Driver's Gaze Strategy—An On-Road Case Study

TLDR
This paper presents and qualitatively analyzes an expert driver's gaze behavior in natural driving on a real road, with no specific experimental task or instruction, and discusses the laws in terms of unresolved issues in driver modeling and open challenges for experimental and theoretical development.

Experimental Framework for Investigating First Person Guidance and Perception

TLDR
Analysis of human trajectory and gaze data while performing simulated first-person motion guidance tasks suggests that this hierarchical structure extends to the combined action-perception process.
...

References

Showing 1-10 of 29 references

The role of uncertainty and reward on eye movements in a virtual driving task.

TLDR
Qualitative support is lent for the primary variables controlling gaze allocation proposed in the Sprague and Ballard model by showing that drivers monitored the speedometer more closely when it had a high level of uncertainty, but only when it was also associated with high task priority or implicit reward.
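
The pattern fits a simple multiplicative rule: monitoring priority is roughly reward weight times estimate uncertainty, so high noise alone is not enough. A toy calculation with made-up numbers:

```python
# Illustrative numbers only: priority ~ task reward x estimate uncertainty.
conditions = {
    "noisy speedometer, high reward":  (1.0, 0.9),
    "noisy speedometer, low reward":   (0.2, 0.9),
    "stable speedometer, high reward": (1.0, 0.1),
}
for name, (reward, uncertainty) in conditions.items():
    print(f"{name}: priority = {reward * uncertainty:.2f}")
```

Only the first condition yields a high priority, matching the reported dependence on both uncertainty and reward.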

Gaze Allocation Analysis for a Visually Guided Manipulation Task

TLDR
A model is defined that poses the problem of where to look as one of maximising task performance by reducing task-relevant uncertainty; it is implemented and tested on a simulated humanoid robot which has to move objects from a table into containers.

Eye Movements for Reward Maximization

TLDR
A new model of human eye movements that directly ties eye movements to the ongoing demands of behavior is introduced, and simulations show that its gaze-scheduling protocol is superior to a simple round-robin mechanism.

Computational modelling of visual attention

TLDR
Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment, providing a framework for a computational and neurobiological understanding of visual attention.
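
The bottom-up, image-based models reviewed here typically build a saliency map from centre-surround contrast. A minimal sketch, assuming SciPy and using a single intensity channel as a stand-in for the full multi-channel Itti-Koch pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # assumed dependency

def simple_saliency(image):
    """Centre-surround saliency on the intensity channel only."""
    intensity = image.mean(axis=2)                  # collapse RGB to intensity
    center = gaussian_filter(intensity, sigma=2)    # fine spatial scale
    surround = gaussian_filter(intensity, sigma=8)  # coarse spatial scale
    saliency = np.abs(center - surround)            # centre-surround contrast
    return saliency / (saliency.max() + 1e-9)       # normalise to [0, 1]

img = np.random.rand(64, 64, 3)
sal = simple_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))    # most salient pixel
```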

Trade-offs between gaze and working memory use.

Jason A. Droll and Mary M. Hayhoe. Journal of Experimental Psychology: Human Perception and Performance, 2007.
TLDR
The results reveal that attentional selection, fixations, and use of working memory reflect a dynamic optimization with respect to a set of constraints, such as task predictability and memory load, and that change blindness depends critically on the local task context.

A modular reinforcement learning model for human visuomotor behavior in a driving task

TLDR
A task-scheduling framework is presented for studying human eye movements in a realistic 3D driving simulation, using a reinforcement learning algorithm with "task modules" that make learning tractable and provide a cost metric for behaviors.
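
In the modular formulation this reference describes, each task module learns a value function over its own small state space, and a global action is chosen by combining module values, for example by summing Q-values as in Sprague and Ballard's modular Sarsa. The sketch below shows that combination step with hypothetical module names and tiny state spaces.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3  # e.g. steer left / straight / steer right

# Each module keeps a Q-table over its own discretised states (assumed sizes).
q_tables = {
    "lane_keeping":  rng.normal(size=(5, n_actions)),
    "car_following": rng.normal(size=(4, n_actions)),
}

def select_action(module_states):
    # Modular arbitration: global action = argmax of summed per-module Q-values.
    total_q = sum(q_tables[name][state] for name, state in module_states.items())
    return int(np.argmax(total_q))

def sarsa_update(name, s, a, r, s2, a2, alpha=0.1, gamma=0.95):
    # Each module updates its own table from its own reward signal.
    q = q_tables[name]
    q[s, a] += alpha * (r + gamma * q[s2, a2] - q[s, a])

action = select_action({"lane_keeping": 2, "car_following": 1})
sarsa_update("lane_keeping", s=2, a=action, r=1.0, s2=3, a2=0)
print("chosen action:", action)
```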

The prominence of behavioural biases in eye guidance

When attempting to understand where people look during scene perception, researchers typically focus on the relative contributions of low- and high-level cues. Computational models of the…

Where to look next? Eye movements reduce local uncertainty.

TLDR
A rigorous analysis of sequential fixation placement reveals that observers may be using a local rule: fixate only the most informative locations, that is, reduce local uncertainty.
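
The local rule takes only a few lines to state: keep a map of positional uncertainty, fixate its current maximum, and let each fixation shrink uncertainty in a neighbourhood around it. The map, neighbourhood radius, and reduction factor below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
uncertainty = rng.random((20, 20))  # toy map of local uncertainty

def next_fixation(umap, radius=3, reduction=0.2):
    """Fixate the most uncertain location, then locally reduce uncertainty."""
    y, x = np.unravel_index(umap.argmax(), umap.shape)
    ys, xs = np.ogrid[:umap.shape[0], :umap.shape[1]]
    nearby = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
    umap[nearby] *= reduction       # fixation resolves nearby uncertainty
    return int(y), int(x)

print([next_fixation(uncertainty) for _ in range(5)])  # a 5-fixation scan path
```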

Optimal eye movement strategies in visual search

TLDR
This work derives the ideal Bayesian observer for search tasks in which a target is embedded at an unknown location within a random background that has the spectral characteristics of natural scenes, and finds that humans achieve nearly optimal search performance even though they integrate information poorly across fixations.
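
A schematic of the ideal-observer computation: maintain a posterior over candidate target locations, update it after each fixation with evidence whose reliability falls with eccentricity, and fixate where the posterior is highest. The linear fall-off of detectability and the MAP fixation rule are simplifications of the cited model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_locs = 25
target = rng.integers(n_locs)                 # unknown true target location
log_post = np.full(n_locs, -np.log(n_locs))   # uniform prior over locations
positions = np.arange(n_locs)

fixation = n_locs // 2
for _ in range(6):
    # Detectability d' falls off with eccentricity from the current fixation.
    dprime = np.clip(3.0 - 0.3 * np.abs(positions - fixation), 0.2, None)
    # Noisy evidence: unit-variance noise, plus d' signal at the true target.
    obs = rng.normal(0.0, 1.0, n_locs)
    obs[target] += dprime[target]
    # Bayes update: log-likelihood ratio of 'target at i' vs. noise-only.
    log_post += dprime * obs - 0.5 * dprime ** 2
    log_post -= np.logaddexp.reduce(log_post)  # renormalise the posterior
    fixation = int(np.argmax(log_post))        # fixate the MAP location

print("true target:", target, "| final belief:", fixation)
```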

The Roles of Vision and Eye Movements in the Control of Activities of Daily Living

TLDR
Although the actions of tea-making are ‘automated’ and proceed with little conscious involvement, the eyes closely monitor every step of the process, suggesting that this type of unconscious attention must be a common phenomenon in everyday life.