S. J. Thorpe, D. Fize, and C. Marlot (1996) showed how rapidly observers can detect animals in images of natural scenes, but it is still unclear which image features support this rapid detection. A. B. Torralba and A. Oliva (2003) suggested that a simple image statistic based on the power spectrum allows the absence or presence of objects in natural scenes …
We present a model that predicts saccadic eye movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches …
Motor reaction times in humans are highly variable from one trial to the next, even for simple and automatic tasks, such as shifting one's gaze to a suddenly appearing target. Although classic models of reaction time generation consider this variability to reflect intrinsic noise, some portion of it could also be attributed to ongoing neuronal processes. …
Camera-based eye trackers are the mainstay of today's eye movement research and of countless practical applications of eye tracking. Recently, a significant impact of changes in pupil size on the accuracy of camera-based eye trackers during fixation has been reported [Wyatt 2010]. We compared the pupil-size effect between a scleral search coil-based eye …
Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Recent studies found human response times to be as fast as 120 ms in a dual-presentation (2-AFC) setup (H. Kirchner & S. J. Thorpe, 2005). In most previous experiments, pairs of randomly chosen images were presented, frequently from very …
The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in …
Early, feed-forward visual processing is organized in a retinotopic reference frame. In contrast, visual feature integration on longer time scales can involve object-based or spatiotopic coordinates. For example, in the Ternus-Pikler (T-P) apparent motion display, object identity is mapped across the object motion path. Here, we report evidence from three …
Visual processing is not instantaneous; instead, our conscious perception depends on the integration of sensory input over time. In the case of Continuous Flash Suppression (CFS), masks are flashed to one eye, suppressing awareness of stimuli presented to the other eye. One potential explanation of CFS is that it depends, at least in part, on the …
Humans are capable of rapidly extracting object and scene category information from visual scenes, raising the question of how the visual system achieves this high-speed performance. Recently, several studies have demonstrated oscillatory effects in the behavioral outcome of low-level visual tasks, hinting at a possibly cyclic nature of visual processing. …