Corpus ID: 51891132

What am I searching for?

@article{Zhang2018WhatAI,
  title={What am I searching for?},
  author={Mengmi Zhang and Jiashi Feng and Joo Hwee Lim and Qi Zhao and Gabriel Kreiman},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.11926}
}
Can we infer intentions and goals from a person's actions? As an example of this family of problems, we consider here whether it is possible to decipher what a person is searching for by decoding their eye movement behavior. We conducted two human psychophysics experiments on object arrays and natural images where we monitored subjects' eye movements while they were looking for a target object. Using as input the pattern of "error" fixations on non-target objects before the target was found, we… 
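
Although the abstract is cut off above, the decoding setup it describes admits a minimal sketch: error fixations tend to land on objects that resemble the target, so a decoder can rank candidate targets by their similarity to the fixated distractors. The feature representation, similarity measure, and function names below are illustrative assumptions, not the authors' actual model.

import numpy as np

def decode_target(fixated_features, candidate_features):
    # fixated_features:   (n_fixations, d) features of mistakenly fixated objects
    # candidate_features: (n_candidates, d) features of possible search targets
    # Returns candidate indices ranked by cosine similarity to the mean
    # fixated-object feature vector (most target-like candidate first).
    query = fixated_features.mean(axis=0)
    query = query / (np.linalg.norm(query) + 1e-8)
    norms = np.linalg.norm(candidate_features, axis=1, keepdims=True) + 1e-8
    scores = (candidate_features / norms) @ query
    return np.argsort(-scores)

# Toy check: fixations drawn near candidate 2 should rank it first.
rng = np.random.default_rng(0)
cands = rng.standard_normal((5, 128))
fixes = cands[2] + 0.3 * rng.standard_normal((7, 128))
print(decode_target(fixes, cands))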
2 Citations


Tracking the Mind’s Eye: Primate Gaze Behavior during Virtual Visuomotor Navigation Reflects Belief Dynamics

It is shown that gaze dynamics play a key role in action selection during challenging visuomotor behaviours, and may serve as a window into the subject's dynamically evolving internal beliefs.

Eye Gaze Map as an Efficient State Encoder for Underwater Task Automation

An image-based framework automates routine underwater tasks via imitation learning, using the operator's gaze to extract task-relevant information from the raw image input through an encoding network.
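
A common way such frameworks feed gaze to an encoding network is to render fixations as a heatmap channel alongside the image. The sketch below is a generic version of that idea, assuming Gaussian-blurred fixation points; it is not the paper's architecture.

import numpy as np

def gaze_map(fixations, height, width, sigma=10.0):
    # Render (x, y) fixation points as a [0, 1] Gaussian heatmap that can
    # be stacked with RGB frames as an extra input channel.
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for x, y in fixations:
        heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heat / (heat.max() + 1e-8)

heat = gaze_map([(40, 60), (100, 30)], height=120, width=160)
print(heat.shape)  # (120, 160)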

References

Showing 1-10 of 39 references

Eye can read your mind: decoding gaze fixations to reveal categorical search targets.

It is concluded that information about a person's search goal exists in fixation behavior, and that this information can be behaviorally decoded to reveal the search target, essentially reading a person's mind by analyzing their fixations.

There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task.

This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition; it can localize targets in cluttered images and predicts single-trial behavior in a search task.
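
For concreteness, a toy version of feature-based gain plus divisive normalization is sketched below; the map shapes, weights, and constant sigma are placeholders rather than the model's actual parameters.

import numpy as np

def priority_map(feature_maps, target_gain, sigma=0.1):
    # feature_maps: (n_features, H, W) responses of shape-selective units
    # target_gain:  (n_features,) target-specific modulation per feature
    # Each location's modulated response is divided by the pooled local
    # activity (divisive inhibition), yielding a search priority map.
    modulated = (target_gain[:, None, None] * feature_maps).sum(axis=0)
    pooled = feature_maps.sum(axis=0) + sigma
    return modulated / pooled

rng = np.random.default_rng(1)
maps = rng.random((8, 32, 32))
gain = np.zeros(8)
gain[3] = 1.0  # "search for feature 3"
prio = priority_map(maps, gain)
print(np.unravel_index(prio.argmax(), prio.shape))  # predicted fixation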

Decoding What People See from Where They Look: Predicting Visual Stimuli from Scanpaths

A new metric for quantifying the importance of saliency map features, based on discriminability between images, is proposed, as well as a new method for comparing present saliency map efficacy metrics.
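
The paper's own metric is not reproduced here, but a standard baseline for saliency-map efficacy is an AUC-style score: how well saliency values at fixated locations separate from values elsewhere. A minimal version, ignoring ties, might look like this.

import numpy as np

def fixation_auc(saliency, fixations, n_negatives=1000, seed=0):
    # Mann-Whitney estimate of the AUC: probability that the saliency at a
    # random fixated location exceeds that at a random non-fixated one.
    rng = np.random.default_rng(seed)
    pos = np.array([saliency[y, x] for x, y in fixations])
    neg = saliency[rng.integers(0, saliency.shape[0], n_negatives),
                   rng.integers(0, saliency.shape[1], n_negatives)]
    return (pos[:, None] > neg[None, :]).mean()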

A computational model for task inference in visual search.

A probabilistic framework is developed to infer the ongoing task in visual search by revealing what the subject is looking for during the search process; a single-state and a multi-state HMM are suggested as the cognitive process models of attention for easy and difficult tasks, respectively.
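
Task inference with per-task HMMs reduces to scoring the observed fixation sequence under each model and picking the best. A bare-bones forward-algorithm scorer in log space (generic, not the paper's specific state definitions) could look like this.

import numpy as np

def forward_loglik(log_emis, log_trans, log_init):
    # log_emis: (T, K) log p(obs_t | state_k); log_trans: (K, K); log_init: (K,)
    # Returns log p(observations) under the HMM via the forward algorithm.
    alpha = log_init + log_emis[0]
    for t in range(1, len(log_emis)):
        m = alpha.max()  # logsumexp trick for numerical stability
        alpha = np.log(np.exp(alpha - m) @ np.exp(log_trans)) + m + log_emis[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# Task inference: evaluate the same fixation sequence under the
# single-state (easy-task) and multi-state (hard-task) HMMs and
# choose whichever assigns the higher log-likelihood.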

Similar Neural Representations of the Target for Saccades and Perception during Search

This work uses classification image analysis, a form of reverse correlation, to estimate the behavioral receptive fields of the visual mechanisms responsible for saccadic and perceptual responses during the same visual search task. The results suggest that similar neural mechanisms underlie both perception and oculomotor action during search.
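
The core of classification image analysis is simple to state: average the external noise fields conditioned on the observer's response and take the difference. The sketch below shows that basic estimator, not the saccade-contingent variant used in the paper.

import numpy as np

def classification_image(noise_fields, said_yes):
    # noise_fields: (n_trials, H, W) external noise shown on each trial
    # said_yes:     (n_trials,) boolean observer responses
    # The yes-minus-no average noise approximates the observer's template
    # (behavioral receptive field).
    said_yes = np.asarray(said_yes, dtype=bool)
    return noise_fields[said_yes].mean(axis=0) - noise_fields[~said_yes].mean(axis=0)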

Visual search in noise: revealing the influence of structural cues by gaze-contingent classification image analysis.

It is demonstrated that even in very noisy displays, observers do not search randomly, but in many cases they deploy their fixations to regions in the stimulus that resemble some aspect of the target in their local image features.

Neural mechanisms of selective visual attention.

The two basic phenomena that define the problem of visual attention are illustrated in a simple example, including selectivity: the ability to filter out unwanted information.

Defending Yarbus: eye movements reveal observers' task.

Yarbus's idea that human eye-movement patterns are modulated top-down by different task demands is supported by the data and continues to be an inspiration for future computational and experimental eye-movement research.
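
Decoding the observer's task in this literature typically means extracting summary statistics from each trial's scanpath and feeding them to a classifier. The features and nearest-centroid rule below are illustrative stand-ins for whatever classifier a given study uses.

import numpy as np

def trial_features(fixations, durations):
    # Low-dimensional scanpath descriptor: fixation count, mean duration,
    # and spatial dispersion of gaze positions.
    fixations = np.asarray(fixations, dtype=float)
    return np.array([len(fixations),
                     float(np.mean(durations)),
                     float(fixations.std(axis=0).mean())])

def predict_task(features, task_centroids):
    # Assign the trial to the task whose mean feature vector is nearest.
    dists = np.linalg.norm(task_centroids - features, axis=1)
    return int(np.argmin(dists))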

Top-down control of visual attention in object detection

The results validate the proposition that top-down information from visual context modulates the saliency of image regions during the task of object detection.
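
A generic way to realize such top-down modulation is to weight a bottom-up saliency map by a contextual prior over locations and renormalize; the pointwise-product form below is an assumption in the spirit of contextual-guidance models, not the paper's exact formulation.

import numpy as np

def contextual_priority(bottom_up, context_prior):
    # Regions consistent with the scene context (e.g., likely object
    # locations) have their bottom-up saliency amplified.
    combined = bottom_up * context_prior
    return combined / (combined.sum() + 1e-12)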