A goal-directed navigation model is proposed based on forward linear look-ahead probe of trajectories in a network of head direction cells, grid cells, place cells and prefrontal cortex (PFC) cells. The model allows selection of new goal-directed trajectories. In a novel environment, the virtual rat incrementally creates a map composed of place cells and …
The current study used fMRI in humans to examine goal-directed navigation in an open field environment. We designed a task that required participants to encode survey-level spatial information and subsequently navigate to a goal location in either first person, third person, or survey perspectives. Critically, no distinguishing landmarks or goal location …
We propose an extended version of our previous goal-directed navigation model based on forward planning of trajectories in a network of head direction cells, persistent spiking cells, grid cells, and place cells. In our original work the animat incrementally creates a place cell map by random exploration of a novel environment. After the exploration phase, …
We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model's flexibility in representing large real world environments using odometry information obtained from challenging …
Recent in vivo data show ensemble activity in medial entorhinal neurons that demonstrates 'look-ahead' activity, decoding spatially to reward locations ahead of a rat deliberating at a choice point while performing a cued, appetitive T-maze task. To model this experiment's look-ahead results, we adapted previous work that produced a model where scans along …
This paper presents a novel place recognition algorithm inspired by the recent discovery of overlapping and multi-scale spatial maps in the rodent brain. We mimic this hierarchical framework by training arrays of Support Vector Machines to recognize places at multiple spatial scales. Place match hypotheses are then cross-validated across all spatial scales, …
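The multi-scale scheme described above can be illustrated with a minimal sketch: one SVM per spatial scale, with a match accepted only when the hypotheses agree across scales. The feature model, scale choices, and agreement rule here are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch of multi-scale place recognition with an array of SVMs.
# Features, scales, and the cross-validation rule are assumptions for
# illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_places, n_features = 5, 16
scales = [1, 2, 4]  # assumed spatial scales (e.g., pooling radii)

# Synthetic place descriptors: one centroid per place.
centroids = rng.normal(0, 1, (n_places, n_features))

def features_at_scale(place_id, scale):
    # Coarser scales yield noisier (more pooled) descriptors in this toy model.
    return centroids[place_id] + rng.normal(0, 0.1 * scale, n_features)

# Train one SVM per spatial scale.
classifiers = {}
for s in scales:
    X = np.array([features_at_scale(p, s)
                  for p in range(n_places) for _ in range(20)])
    y = np.repeat(np.arange(n_places), 20)
    classifiers[s] = SVC(probability=True).fit(X, y)

def recognize(query_by_scale):
    # Average per-scale class probabilities, then cross-validate: accept the
    # top hypothesis only if every scale independently votes for it.
    probs = np.mean([classifiers[s].predict_proba(q.reshape(1, -1))[0]
                     for s, q in query_by_scale.items()], axis=0)
    votes = {s: classifiers[s].predict(q.reshape(1, -1))[0]
             for s, q in query_by_scale.items()}
    best = int(np.argmax(probs))
    return best if all(v == best for v in votes.values()) else None

query = {s: features_at_scale(3, s) for s in scales}
print(recognize(query))
```

Requiring agreement across all scales trades recall for precision, which matches the idea of cross-validating match hypotheses rather than trusting any single spatial resolution.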
Recent computational models suggest that visual input from optic flow provides information about egocentric (navigator-centered) motion and influences firing patterns in spatially tuned cells during navigation. Computationally, self-motion cues can be extracted from optic flow during navigation. Despite the importance of optic flow to navigation, a …