Heiko Neumann

The motion of an extended boundary can be measured locally by neurons only in the direction orthogonal to its orientation (the aperture problem), whereas this ambiguity is resolved at localized image features, such as corners or non-occlusion junctions. The integration of local motion signals sampled along the outline of a moving form reveals the object velocity. We propose a new …
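
As an illustration of the ambiguity described in this abstract, the following sketch (all names are hypothetical, and the least-squares formulation is an assumption, not the model from the paper) recovers a 2D object velocity from two or more aperture-limited normal-velocity measurements via an intersection-of-constraints computation. A single measurement leaves a whole line of candidate velocities, which is why signals sampled along the outline must be integrated.

```python
import numpy as np

def intersect_constraints(normals, normal_speeds):
    """Recover a 2D velocity v from aperture-limited measurements.

    Each local edge measurement only constrains the component of v
    along the edge normal n_i:  n_i . v = s_i  (the aperture problem).
    Stacking two or more non-parallel constraints makes v recoverable
    by least squares (intersection of constraints).
    """
    N = np.asarray(normals, dtype=float)        # shape (k, 2), unit normals
    s = np.asarray(normal_speeds, dtype=float)  # shape (k,)
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# A square translating with velocity (1, 2): its vertical edge yields a
# purely horizontal normal measurement, its horizontal edge a vertical one.
v = intersect_constraints([[1, 0], [0, 1]], [1.0, 2.0])
print(v)  # -> [1. 2.]
```
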
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background, and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not …
A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at the different stages of processing remains largely unknown. Based on empirical findings, we suggest that the top-down feedback …
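
A minimal sketch of one commonly proposed form of such feedback interaction (the specific gain and normalization terms below are illustrative assumptions, not the paper's equations): top-down feedback does not drive activity on its own, but multiplicatively enhances feedforward responses that match the top-down signal, with divisive normalization turning that enhancement into relative suppression of non-matching responses.

```python
import numpy as np

def feedback_modulation(ff, fb, gain=2.0, eps=0.01):
    """Modulatory top-down feedback (illustrative form).

    ff: bottom-up (feedforward) activations, shape (n,)
    fb: top-down feedback/prediction signal, shape (n,)

    Feedback enhances matching feedforward activity via (1 + gain * fb)
    but cannot create activity where ff == 0; divisive normalization
    then suppresses non-matching responses (biased competition).
    """
    enhanced = ff * (1.0 + gain * fb)
    return enhanced / (eps + enhanced.sum())

ff = np.array([0.2, 0.5, 0.3, 0.0])
fb = np.array([0.0, 1.0, 0.0, 1.0])   # top-down expects units 1 and 3
print(feedback_modulation(ff, fb))    # unit 1 wins; unit 3 stays silent
```
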
A neural network model of brightness perception is developed to account for a wide variety of data, including the classical phenomenon of Mach bands, low- and high-contrast missing-fundamental stimuli, luminance staircases, and non-linear contrast effects associated with sinusoidal waveforms. The model builds upon previous work on filling-in models that produce …
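
A minimal 1D sketch of the filling-in idea referenced here (a discretized boundary-gated diffusion; the parameters and helper names are illustrative assumptions): feature activity spreads laterally between neighboring units except across detected boundaries, so each enclosed region equilibrates to a filled-in value.

```python
import numpy as np

def fill_in(signal, boundaries, steps=2000, rate=0.25):
    """Boundary-gated lateral diffusion (discretized filling-in sketch).

    signal:     initial feature/contrast estimates, shape (n,)
    boundaries: permeability between units i and i+1 in [0, 1],
                shape (n-1,); 0 blocks diffusion (a boundary signal).
    """
    v = np.asarray(signal, dtype=float).copy()
    g = np.asarray(boundaries, dtype=float)
    for _ in range(steps):
        flux = g * (v[1:] - v[:-1])  # flow between neighbors, gated
        v[:-1] += rate * flux
        v[1:] -= rate * flux
    return v

# Two regions separated by a boundary between positions 3 and 4:
signal = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0])
perm = np.ones(7); perm[3] = 0.0  # boundary blocks spreading
print(fill_in(signal, perm))  # ~[0.25]*4 then ~[0.5]*4
```
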
We have previously developed a neurodynamical model of motion segregation in cortical visual areas V1 and MT of the dorsal stream. The model explains how motion ambiguities caused by the motion aperture problem can be solved for coherently moving objects of arbitrary size by means of cortical mechanisms. The major bottleneck in the development of a reliable …
The neural mechanisms underlying motion segregation and integration still remain unclear to a large extent. Local motion estimates are often ambiguous in the absence of form features, such as corners or junctions. Furthermore, even in the presence of such features, local motion estimates may be wrong if they were generated near occlusions or from transparent …
In this pilot study, a neural architecture for temporal emotion recognition from image sequences is proposed. The investigation aims at developing key principles within an extendable experimental framework for studying human emotions. Features representing temporal facial variations are extracted within a bounding box around the face, which is segregated …
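
A minimal sketch of extracting such temporal variation features (a hypothetical helper that assumes the face bounding box is already given; frame-difference pooling stands in for whatever features the architecture actually uses): frame-to-frame change inside the box is pooled over a coarse grid, giving one feature vector per frame transition for a downstream classifier.

```python
import numpy as np

def temporal_face_features(frames, box, grid=(4, 4)):
    """Pool frame-to-frame change inside a face bounding box.

    frames: grayscale video, shape (t, h, w)
    box:    (top, left, height, width) of the face region
    Returns one feature vector per frame transition: the mean absolute
    temporal difference in each cell of a grid partition of the box.
    """
    t0, l0, h, w = box
    roi = frames[:, t0:t0 + h, l0:l0 + w].astype(float)
    diff = np.abs(np.diff(roi, axis=0))          # temporal variation
    gy, gx = grid
    cells = diff.reshape(len(diff), gy, h // gy, gx, w // gx)
    return cells.mean(axis=(2, 4)).reshape(len(diff), gy * gx)

video = np.random.rand(10, 64, 64)               # stand-in image sequence
feats = temporal_face_features(video, (8, 8, 48, 48))
print(feats.shape)                               # (9, 16)
```
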
In this work, we present a neural model simulating parts of the motion and form pathways of the visual cortex. It is shown how the visual features motion, disparity, and form, which are represented in a distributed way across areas V1, V2, and MT, mutually interact at several levels. Thus, their information is shared without the need for explicit neural …