Humans select relevant locations in a scene by means of stimulus-driven bottom-up and context-dependent top-down mechanisms. These have mainly been investigated by recording eye movements under 2D natural or 3D artificial stimulation conditions. Here we aim to close this gap and present 2D and 3D versions of natural, pink-noise, and white-noise images…
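The pink- and white-noise stimuli mentioned here differ only in how the amplitude spectrum falls off with spatial frequency. The following minimal sketch is my own illustration, not the study's stimulus code, and all names are placeholders; it generates both kinds of image by shaping the spectrum of random-phase noise.

```python
# Minimal sketch (not from the paper): generating white- and pink-noise
# images by shaping the amplitude spectrum of random-phase noise.
import numpy as np

def noise_image(size=512, exponent=0.0, seed=0):
    """Noise image whose amplitude spectrum falls off as 1/f**exponent.

    exponent = 0 gives white noise; exponent = 1 gives pink (1/f) noise,
    whose spectrum resembles that of natural scenes.
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(size)[:, None]
    fy = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    amplitude = 1.0 / f**exponent
    phase = rng.uniform(0, 2 * np.pi, (size, size))
    img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    # Normalize to [0, 1] for display.
    return (img - img.min()) / (img.max() - img.min())

white = noise_image(exponent=0.0)   # flat spectrum
pink = noise_image(exponent=1.0)    # 1/f spectrum
```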
The role of the central nucleus of the amygdala (CeN) in modulating output of noradrenaline in the forebrain was evaluated by recording extracellular, single-unit activity from the noradrenergic nucleus locus ceruleus (LC) during stimulation of the CeN. Short high-frequency trains (200 Hz) delivered at 800 µA in the CeN evoked phasic responses in 90% of…
In everyday life, our brains decide about the relevance of huge amounts of sensory input. Complicating this further, the input is distributed over different modalities. This raises the question of how different sources of information interact in the control of overt attention during free exploration of the environment under natural conditions.
During viewing of natural scenes, do low-level features guide attention, and if so, does this depend on higher-level features? To answer these questions, we studied how the effects of low-level feature modifications depend on image category. Subjects frequently fixated contrast-modified regions in natural-scene images, while smaller but significant effects were…
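As an illustration of what such a low-level modification might look like in practice, the sketch below scales contrast around the local mean inside a circular region. This is an assumption of mine for clarity, not the authors' stimulus code.

```python
# Illustrative sketch (not the authors' code): raising or lowering local
# luminance contrast inside a circular image region. Assumes a grayscale
# image with values in [0, 1].
import numpy as np

def modify_local_contrast(image, center, radius, gain):
    """Scale contrast around the local mean within a circular region.

    gain > 1 increases contrast, gain < 1 decreases it.
    """
    img = image.astype(float).copy()
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    mask = (yy - center[0])**2 + (xx - center[1])**2 <= radius**2
    local_mean = img[mask].mean()
    img[mask] = local_mean + gain * (img[mask] - local_mean)
    return np.clip(img, 0.0, 1.0)
```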
How is contextual processing, as demonstrated with simplified stimuli, cortically enacted in response to ecologically relevant, complex, and dynamic stimuli? Using voltage-sensitive dye imaging, we captured mesoscopic population dynamics across several square millimeters of cat primary visual cortex. By presenting natural movies locally through either one or…
Disparate sensory streams originating from a common underlying event share similar dynamics, and this plays an important part in multisensory integration. Here we investigate audiovisual binding by presenting continuously changing, temporally congruent and incongruent stimuli. Recorded EEG signals are used to quantify spectrotemporal and waveform locking of…
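One common way to quantify locking between a continuously changing stimulus and the EEG is a band-limited phase-locking value. The sketch below is an illustrative assumption, not the analysis reported in the abstract, and the band limits are placeholders.

```python
# Hedged sketch (not the authors' pipeline): phase-locking value between a
# continuously varying stimulus time course and one EEG channel in a band.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(stimulus, eeg, fs, band=(4.0, 8.0)):
    """Return the phase-locking value (0..1) between two signals in `band` (Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_stim = np.angle(hilbert(filtfilt(b, a, stimulus)))
    phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    # Mean resultant length of the phase difference across time.
    return np.abs(np.mean(np.exp(1j * (phase_stim - phase_eeg))))
```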
Neurons in primary visual cortex have been characterized by their selectivity to orientation, spatiotemporal frequency, and motion direction, among other parameters that are all essential for decomposing complex image structure. However, their concerted functioning during real-world visual dynamics has remained unobserved, since most studies tested these parameters in…
In early visual cortex, different stimulus parameters are represented in overlaid feature maps. Such functioning has been extensively explored using drifting gratings characterized by orientation, spatiotemporal frequency, and direction of motion. Surprisingly, however, the direct cortical visuotopic drift of the gratings' stripy pattern has never been…
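For reference, a drifting grating of the kind described here is fully determined by its orientation, spatial frequency, and temporal (drift) frequency. The following minimal sketch generates one frame; parameter names and values are illustrative, not taken from the study.

```python
# Minimal sketch: one frame of a drifting sinusoidal grating defined by
# orientation, spatial frequency, and temporal (drift) frequency.
import numpy as np

def grating_frame(size, orientation_deg, sf_cyc_per_px, tf_hz, t_s, contrast=1.0):
    """L(x, y, t) = 0.5 + 0.5*c*cos(2*pi*(sf*(x*cos(th) + y*sin(th)) - tf*t))."""
    theta = np.deg2rad(orientation_deg)
    y, x = np.mgrid[:size, :size]
    phase = 2 * np.pi * (sf_cyc_per_px * (x * np.cos(theta) + y * np.sin(theta))
                         - tf_hz * t_s)
    return 0.5 + 0.5 * contrast * np.cos(phase)

# Example: a 45-degree grating, 0.05 cycles/pixel, drifting at 2 Hz, at t = 0.1 s.
frame = grating_frame(size=256, orientation_deg=45, sf_cyc_per_px=0.05,
                      tf_hz=2.0, t_s=0.1)
```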
In natural vision, shifts in spatial attention are associated with shifts of gaze. Computational models of such overt attention typically use the concept of a saliency map: normalized maps of center-surround differences are computed for individual stimulus features and added linearly to obtain the saliency map. Although the predictions of such models…
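A stripped-down sketch of the saliency-map scheme summarized above: per-feature center-surround differences, normalized and summed linearly. The Gaussian scales and the simple range normalization are assumptions for illustration; published models such as Itti and Koch's use multi-scale pyramids and a more elaborate normalization.

```python
# Simplified illustration of a saliency map built from per-feature
# center-surround differences that are normalized and summed linearly.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(feature_map, sigma_center=2.0, sigma_surround=8.0):
    """Difference-of-Gaussians approximation of a center-surround operator."""
    center = gaussian_filter(feature_map, sigma_center)
    surround = gaussian_filter(feature_map, sigma_surround)
    return np.abs(center - surround)

def normalize(conspicuity_map):
    """Scale a map to [0, 1]; simpler than the normalization in full models."""
    m = conspicuity_map - conspicuity_map.min()
    return m / m.max() if m.max() > 0 else m

def saliency_map(feature_maps):
    """Linear sum of normalized center-surround maps, one per stimulus feature."""
    return sum(normalize(center_surround(f)) for f in feature_maps)
```

Feeding in, for example, a luminance map and two color-opponency maps of the same image would yield a single map whose peaks are the model's candidate fixation targets.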