Predicting the Future With a Scale-Invariant Temporal Memory for the Past

@article{goh_predicting,
  title={Predicting the Future With a Scale-Invariant Temporal Memory for the Past},
  author={Wei Zhong Goh and Varun Ursekar and Marc W Howard},
  journal={Neural Computation}
}
Abstract: In recent years, it has become clear that the brain maintains a temporal memory of recent events stretching far into the past. This letter presents a neurally inspired algorithm that uses a scale-invariant temporal representation of the past to predict a scale-invariant future. The result is a scale-invariant estimate of future events as a function of the time at which they are expected to occur. The algorithm is time-local, with credit assigned to the present event by observing how it…
Optimally fuzzy temporal memory

A fuzzy memory system is constructed that optimally sacrifices the temporal accuracy of information in a scale-free fashion in order to represent prediction-relevant information from exponentially long timescales.
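The scale-free trade-off described here can be illustrated with a bank of leaky integrators whose time constants are geometrically spaced, so a fixed number of units covers exponentially long timescales with constant relative temporal accuracy. A minimal sketch under a simple discrete-time update (the function names and constants are illustrative, not the paper's construction):

```python
import numpy as np

def make_time_constants(tau_min=1.0, n_units=8, ratio=2.0):
    """Geometrically spaced time constants: equal resolution per log-time octave."""
    return tau_min * ratio ** np.arange(n_units)

def update_memory(memory, signal, taus):
    """One discrete-time step of a bank of leaky integrators.

    Each unit decays toward zero with its own time constant, so the bank
    trades temporal accuracy for exponentially long coverage.
    """
    decay = np.exp(-1.0 / taus)
    return decay * memory + (1.0 - decay) * signal

taus = make_time_constants()
memory = np.zeros_like(taus)
memory = update_memory(memory, 1.0, taus)   # present a brief input...
for _ in range(20):                          # ...then let 20 empty steps pass
    memory = update_memory(memory, 0.0, taus)
# Fast units have forgotten the input; slower units still carry a trace,
# and the peak across the bank marks roughly how long ago the input occurred.
```

The point of the geometric spacing is that doubling the elapsed time shifts the peak by one unit, not by a fixed number of units, which is what makes the representation scale-free.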

Estimating Scale-Invariant Future in Continuous Time

A computational mechanism, developed based on work in psychology and neuroscience, that efficiently computes an estimate of inputs as a function of future time on a logarithmically compressed scale and can be used to generate a scale-invariant power-law-discounted estimate of expected future reward.
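The power-law discounting mentioned here can be seen in miniature: summing exponential decays over a log-uniform spectrum of rates approximates the integral ∫ e^(−st) ds = 1/t, i.e., a power-law kernel. A hedged numeric sketch (the spectrum and grid are illustrative choices, not the paper's exact mechanism):

```python
import numpy as np

# Log-uniform spectrum of decay rates s. Summing s * e^{-s t} over such a
# grid is proportional to the integral of e^{-s t} ds, which equals 1/t.
rates = np.geomspace(0.001, 100.0, 250)
t = np.geomspace(1.0, 10.0, 20)
kernel = (rates * np.exp(-np.outer(t, rates))).sum(axis=1)

# On log-log axes a power law is a straight line; fit the slope.
slope = np.polyfit(np.log(t), np.log(kernel), 1)[0]   # close to -1 for 1/t
```

The approximation holds over the range of delays covered by the spectrum; outside it, the kernel reverts to the fastest or slowest exponential.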

Neural Mechanism to Simulate a Scale-Invariant Future

It is shown that the phenomenon of phase precession in neurons of the hippocampus and ventral striatum corresponds to the cognitive act of future prediction, and that it results in Weber-Fechner spacing for the representation of both past (memory) and future (prediction) timelines.

A temporal record of the past with a spectrum of time constants in the monkey entorhinal cortex

Taken together, these findings suggest that the primate entorhinal cortex uses a spectrum of time constants to construct a temporal record of the past in support of episodic memory.

Predicting the Future with Multi-scale Successor Representations

An ensemble of SRs with multiple scales is proposed, and it is shown that the derivative of the multi-scale SR can both reconstruct the sequence of expected future states and estimate the distance to a goal, and that it can be computed linearly.
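For a Markov chain with transition matrix T, the SR at discount γ is the standard closed form M = (I − γT)^(−1), so an ensemble is just this matrix at several γ. A small sketch under those textbook definitions (the specific chain is illustrative, not from the paper):

```python
import numpy as np

def successor_representation(T, gamma):
    """SR for transition matrix T at discount gamma: sum over k of gamma^k T^k."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# A 4-state ring: each state moves deterministically to the next one.
T = np.roll(np.eye(4), 1, axis=1)
scales = [0.5, 0.7, 0.9, 0.99]
ensemble = [successor_representation(T, g) for g in scales]
# Larger gamma means a longer predictive horizon: expected occupancy of
# distant states grows, and differences across scales carry timing information.
```

Each SR in the ensemble satisfies the recursion M = I + γTM, which is why the ensemble can be maintained with linear (TD-style) updates.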

Temporal maps and informativeness in associative learning

A reservoir of time constants for memory traces in cortical neurons

A flexible memory system is suggested in which neural subpopulations with distinct sets of long or short memory timescales may be selectively deployed according to task demands.

A Local Temporal Difference Code for Distributional Reinforcement Learning

The Laplace code is introduced: a local temporal difference code for distributional reinforcement learning that is representationally powerful and computationally straightforward, and that recovers the temporal evolution of the immediate reward distribution, indicating all possible rewards at all future times.
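The core idea can be demonstrated in a toy setting: value estimates at several discount factors form a Laplace-like transform of the future reward sequence, V(γ) = Σ_t γ^t r_t, and with enough distinct discounts the transform can be inverted to recover when each reward arrives. The paper's actual decoder differs; this is an idealized inversion with a known, deterministic reward sequence:

```python
import numpy as np

# A deterministic reward sequence over a short horizon.
r = np.array([0.0, 0.0, 1.0, 0.0, 0.5])
horizon = len(r)

# Values at several discount factors: V_gamma = sum_t gamma^t r_t.
gammas = np.linspace(0.1, 0.9, horizon)
A = gammas[:, None] ** np.arange(horizon)   # Vandermonde matrix of gamma^t
V = A @ r

# With as many distinct discounts as time steps, the linear system is
# invertible, recovering which reward arrives at which future time.
r_recovered = np.linalg.solve(A, V)
```

In practice the inversion is ill-conditioned for long horizons and noisy values, which is part of what motivates the local temporal difference scheme of the paper.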

Learning to Predict by the Methods of Temporal Differences

This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior; it proves their convergence and optimality for special cases and establishes their relation to supervised-learning methods.
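The simplest member of this class, tabular TD(0), can be sketched on a tiny deterministic chain (the chain and constants are illustrative):

```python
import numpy as np

# A 3-state chain: 0 -> 1 -> 2 -> terminal, with reward 1 on the final step.
def episode():
    return [(0, 1, 0.0), (1, 2, 0.0), (2, None, 1.0)]  # (state, next, reward)

V = np.zeros(3)
alpha, gamma = 0.1, 1.0
for _ in range(500):
    for s, s_next, reward in episode():
        target = reward + (gamma * V[s_next] if s_next is not None else 0.0)
        V[s] += alpha * (target - V[s])   # TD(0): bootstrap from the next state
# Every state leads inevitably to the terminal reward, so each V[s] -> 1.
```

The update is incremental and time-local: each state's estimate moves toward the reward plus the estimate of its successor, rather than waiting for the final outcome as a supervised method would.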

Induction of Multiscale Temporal Structure

Simulation experiments indicate that, with hidden units operating at different time constants, the slower time-scale units are able to pick up global structure that simply cannot be learned by standard backpropagation.
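The effect of differing time constants can be sketched without any learning at all: a fast leaky unit follows local fluctuations, while a slow unit smooths them away and exposes the global trend. A minimal illustration (the signal and time constants are illustrative, not from the paper):

```python
import numpy as np

def run_leaky_units(signal, taus):
    """Leaky hidden units: each integrates the input with its own time constant."""
    decay = np.exp(-1.0 / np.asarray(taus))
    states = np.zeros(len(taus))
    trace = []
    for x in signal:
        states = decay * states + (1.0 - decay) * x
        trace.append(states.copy())
    return np.array(trace)

# Fast local alternation riding on a slow global drift.
t = np.arange(400)
signal = np.sign(np.sin(t)) * 0.5 + (t / 400.0)
trace = run_leaky_units(signal, taus=[1.0, 100.0])
# Column 0 (fast unit) follows the local flips; column 1 (slow unit)
# averages them out and tracks the global drift.
```

In a trained network the slow units play the same role, giving later layers access to coarse temporal structure that a uniform-time-constant network has no stable way to represent.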