We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence…
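The mapping described above, from signed distances to a classifier boundary to a per-pixel probability map, can be sketched with a logistic (Platt-style) transform. This is an illustrative sketch, not the paper's implementation; the parameters `a` and `b` are hypothetical and would normally be fit to held-out data.

```python
import numpy as np

def distance_to_probability(decision_values, a=-1.0, b=0.0):
    """Map SVM decision-boundary distances to probabilities via a
    logistic (Platt-style) transform: p = 1 / (1 + exp(a*d + b)).
    With a < 0, larger positive distances give higher probabilities.
    a and b are illustrative, not fitted values."""
    return 1.0 / (1.0 + np.exp(a * np.asarray(decision_values) + b))

# Example: per-pixel decision values -> probability map
distances = np.array([[-2.0, 0.0],
                      [ 1.0, 3.0]])
prob_map = distance_to_probability(distances)
```

In practice the decision values would come from something like scikit-learn's `SVC.decision_function`, evaluated over image patches to produce one value per pixel.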
In this paper, a new algorithm is proposed for solving the path-planning problem of mobile robots. The algorithm is based on Artificial Potential Field (APF) methods, which have been widely used for path-planning problems for more than two decades. While keeping the simplicity of traditional APF methods, our algorithm is built upon new potential…
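The classical APF scheme that this abstract builds on combines a quadratic attractive potential toward the goal with a short-range repulsive potential around each obstacle, and follows the negative gradient. A minimal sketch, with illustrative gains (`k_att`, `k_rep`, `d0`) rather than the paper's values:

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Total force from a classical artificial potential field:
    quadratic attraction to the goal plus repulsion from each
    obstacle inside influence radius d0. Gains are illustrative."""
    force = k_att * (goal - pos)  # -grad of 0.5 * k_att * |pos - goal|^2
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # repulsion grows sharply as the robot nears the obstacle
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return force

def plan(start, goal, obstacles, step=0.05, iters=500, tol=0.1):
    """Follow the force field by gradient descent until near the goal."""
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for _ in range(iters):
        if np.linalg.norm(goal - pos) < tol:
            break
        pos = pos + step * apf_force(pos, goal, obstacles)
        path.append(pos.copy())
    return np.array(path)
```

A usage example: `plan([0.0, 0.0], np.array([5.0, 5.0]), [np.array([2.0, 3.0])])`. The well-known weakness of this traditional formulation, local minima where attraction and repulsion cancel, is one of the issues newer APF variants such as the one in this paper aim to address.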
Models of eye movements during scene viewing attempt to explain distributions of fixations and do not typically address the neural basis of saccade programming. Models of saccade programming are tied more closely to neural mechanisms, but have not been applied to scenes due to limitations on the inputs that they can accept (typically just dots). This work…
Saccades quite systematically undershoot a peripheral visual target by about 10% of its eccentricity while becoming more variable, mainly in amplitude, as the target becomes more peripheral. This undershoot phenomenon has been interpreted as the strategic adjustment of saccadic gain downstream of the superior colliculus (SC), where saccades are programmed…
Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely…
Learned region sparsity has achieved state-of-the-art performance in classification tasks by exploiting and integrating a sparse set of local information into global decisions. The underlying mechanism resembles how people sample information from an image with their eye movements when making similar decisions. In this paper, we incorporate the biologically…
In this paper, we propose a new method for automatically generating textual descriptions of images. Our method consists of two main steps: using saliency maps, it detects the areas of interest in the image, and then creates the description by recognizing the interactions between detected objects within those areas. These interactions are modeled using the…