To maintain visual stability during self-motion, the brain needs to update its egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how and to what extent the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated …
Many of our daily activities are supported by behavioural goals that guide the selection of actions, allowing us to reach these goals effectively. Goals are considered important for action observation since they allow the observer to copy the goal of the action without needing to use the exact same means. The importance of being able to use …
A computational model of inference during story comprehension is presented, in which story situations are represented distributively as points in a high-dimensional "situation-state space." This state space organizes itself on the basis of a constructed microworld description. From the same description, causal/temporal world knowledge is extracted. The …
We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity …
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting …
The natural world continuously presents us with many opportunities for action, so a process of target selection must precede action execution. While there has been considerable progress in understanding target selection in stationary environments, little is known about target selection when we are in motion. Here we investigated the effect of …
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish retinal image shifts caused by eye movements from those caused by movements of the visual scene. This process appears not to be flawless: during saccades, we often …
T. Trabasso and J. Bartolone (2003) used a computational model of narrative text comprehension to account for empirical findings. We show that the same predictions are obtained without running the model. This is caused by the model's computational setup, which leaves most of the model's input unchanged.