Corpus ID: 231709452

Predicting the future with a scale-invariant temporal memory for the past

@article{Goh2021PredictingTF,
  title={Predicting the future with a scale-invariant temporal memory for the past},
  author={Wei Zhong Goh and Varun Ursekar and Marc W Howard},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.10953}
}
In recent years it has become clear that the brain maintains a temporal memory of recent events stretching far into the past. This paper presents a neurally-inspired algorithm to use a scale-invariant temporal representation of the past to predict a scale-invariant future. The result is a scale-invariant estimate of future events as a function of the time at which they are expected to occur. The algorithm is time-local, with credit assigned to the present event by observing how it affects the…
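
The representational idea at the core of this line of work (developed in prior papers cited below, e.g. "Optimally fuzzy temporal memory") is a bank of leaky integrators whose decay rates are geometrically spaced, so that rescaling time simply shifts activity across the bank. The Python sketch below is our own minimal illustration of that idea, not the authors' code; the unit count, rate range, and input are illustrative assumptions.

    import numpy as np

    # A bank of leaky integrators F with geometrically spaced decay rates s,
    # updated by Euler steps of dF/dt = -s*F + f(t). Geometric spacing of s
    # is what makes the representation scale-invariant.
    n_units = 64
    s = np.geomspace(0.01, 10.0, n_units)  # decay rates, log-spaced
    F = np.zeros(n_units)                  # Laplace-like memory state
    dt = 0.01

    def step(F, f_t):
        """One Euler step of the integrator bank given input f(t)."""
        return F + dt * (-s * F + f_t)

    # Drive the memory with a brief pulse, then let it run in silence.
    for t in np.arange(0.0, 5.0, dt):
        F = step(F, 1.0 if t < 0.1 else 0.0)

    # Slow units (small s, first entries) still carry the pulse; fast units
    # (large s, last entries) have forgotten it, so elapsed time is encoded
    # across the bank on a roughly logarithmic scale.
    print(F[:3], F[-3:])

In the broader framework this builds on, an approximate inverse Laplace transform across the s axis turns such a bank into a logarithmically compressed timeline of the past; the paper's algorithm uses representations of this kind to produce a similarly compressed estimate of the future.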
1 Citation

The learning of prospective and retrospective cognitive maps within neural circuits
TLDR: A significant conceptual reframing of the neurobiological study of associative learning, memory, and decision making is presented, demonstrating that many neural signals and behaviors that seem inflexible and non-cognitive can result from retrospective cognitive maps.

References

Showing 1-10 of 46 references
Optimally fuzzy temporal memory
TLDR: A fuzzy memory system is constructed that optimally sacrifices the temporal accuracy of information in a scale-free fashion in order to represent prediction-relevant information from exponentially long timescales.
Estimating Scale-Invariant Future in Continuous Time
TLDR: A computational mechanism, developed based on work in psychology and neuroscience, efficiently computes an estimate of inputs as a function of future time on a logarithmically compressed scale, and can be used to generate a scale-invariant, power-law-discounted estimate of expected future reward.
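
One way to see where the power-law discounting mentioned in this TLDR can come from: a roughly 1/τ-weighted mixture of exponential decays with log-spaced time constants approximates a 1/t power law over the represented range of timescales. The numerical check below is our own toy, not code from the cited paper.

    import numpy as np

    # A 1/tau-weighted sum of exponentials with log-spaced time constants
    # behaves like a 1/t power law for tau_min << t << tau_max.
    taus = np.geomspace(0.01, 1000.0, 400)  # time constants (assumed range)
    weights = 1.0 / taus
    t = np.linspace(0.5, 50.0, 200)

    mixture = np.array([np.sum(weights * np.exp(-ti / taus)) for ti in t])
    mixture /= mixture[0]
    power = t[0] / t             # a pure 1/t curve with the same normalization

    # The largest log-discrepancy over this range is only a few percent.
    print(np.max(np.abs(np.log(mixture / power))))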
Predicting the Future with Multi-scale Successor Representations
TLDR: An ensemble of SRs with multiple scales is proposed, and it is shown that the derivative of the multi-scale SR can both reconstruct the sequence of expected future states and estimate the distance to goal, and can be computed linearly.
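
As a concrete toy version of the ensemble idea in this TLDR (our own sketch, not the paper's code): successor representations M = (I - gamma*T)^(-1) computed at several discount factors gamma jointly carry timing information, such as the distance to a goal state.

    import numpy as np

    # Ensemble of successor representations (SRs) on a deterministic
    # 6-state chain whose final state is an absorbing goal.
    n = 6
    T = np.zeros((n, n))
    for i in range(n - 1):
        T[i, i + 1] = 1.0    # state i -> i+1
    T[-1, -1] = 1.0          # goal state (index 5) is absorbing

    gammas = [0.5, 0.7, 0.9]
    M = {g: np.linalg.inv(np.eye(n) - g * T) for g in gammas}

    # From state 0 the goal is 5 steps away, so its discounted occupancy is
    # gamma^5 / (1 - gamma), and the distance can be read back from the
    # occupancy at each scale.
    for g in gammas:
        occ = M[g][0, 5]
        dist = np.log(occ * (1 - g)) / np.log(g)
        print(f"gamma={g}: occupancy={occ:.4f}, recovered distance={dist:.1f}")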
Neural Mechanism to Simulate a Scale-Invariant Future
TLDR: It is shown that the phenomenon of phase precession of neurons in the hippocampus and ventral striatum corresponds to the cognitive act of future prediction, and results in Weber-Fechner spacing for the representation of both past (memory) and future (prediction) timelines.
Temporal-Difference Reinforcement Learning with Distributed Representations
TLDR: The distributed representation of belief provides an explanation for the decrease in dopamine at the conditioned stimulus seen in overtrained animals, for the differences between trace and delay conditioning, and for transient bursts of dopamine seen at movement initiation.
A Local Temporal Difference Code for Distributional Reinforcement Learning
TLDR: The Laplace code is introduced: a local temporal difference code for distributional reinforcement learning that is representationally powerful and computationally straightforward, and that recovers the temporal evolution of the immediate reward distribution, indicating all possible rewards at all future times.
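
To illustrate the flavor of such a code (a toy of our own construction, not the paper's method): value functions learned under many discount factors form a Laplace-like transform of the expected reward timeline, V_gamma = sum_t gamma^t * r_t, and inverting that linear map recovers when rewards are expected.

    import numpy as np

    # Values under many discounts are a linear transform of the reward
    # timeline; with as many discounts as timesteps, the map is invertible.
    T = 10                                   # horizon in timesteps (assumed)
    r = np.zeros(T)
    r[3], r[7] = 1.0, 0.5                    # an assumed reward timeline
    gammas = np.linspace(0.05, 0.95, T)

    A = np.stack([g ** np.arange(T) for g in gammas])  # A[i, t] = gamma_i^t
    V = A @ r                                # one value per discount factor

    r_hat = np.linalg.solve(A, V)            # invert the transform
    print(np.round(r_hat, 3))                # recovers the reward timeline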
A temporal record of the past with a spectrum of time constants in the monkey entorhinal cortex
TLDR: Taken together, these findings suggest that the primate entorhinal cortex uses a spectrum of time constants to construct a temporal record of the past in support of episodic memory.
Learning to Predict by the Methods of Temporal Differences
  • R. S. Sutton
  • Machine Learning
  • 1988
TLDR: This article introduces a class of incremental learning procedures specialized for prediction (that is, for using past experience with an incompletely known system to predict its future behavior), proves their convergence and optimality for special cases, and relates them to supervised-learning methods.
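
The method is simple enough to show in a few lines. Below is a minimal TD(0) sketch on the five-state random walk used as an example in that article; the step size, episode count, and random seed are our own choices.

    import numpy as np

    # TD(0): learn state values from the difference between successive
    # predictions rather than waiting for final outcomes.
    n = 5
    V = np.zeros(n + 2)            # states 1..5; V[0] and V[6] are terminal
    alpha, gamma = 0.1, 1.0
    rng = np.random.default_rng(0)

    for _ in range(5000):
        state = (n + 1) // 2       # every walk starts in the middle (state 3)
        while 0 < state < n + 1:
            nxt = state + rng.choice([-1, 1])
            r = 1.0 if nxt == n + 1 else 0.0       # reward only at the right end
            delta = r + gamma * V[nxt] - V[state]  # temporal-difference error
            V[state] += alpha * delta
            state = nxt

    print(np.round(V[1:-1], 2))    # approaches the true values 1/6 ... 5/6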
Stimulus Representation and the Timing of Reward-Prediction Errors in Models of the Dopamine System
TLDR: An improved fit mostly derives from the absence of large negative errors in the new model, suggesting that dopamine alone can encode the full range of TD errors in these situations, including those that arise when rewards are omitted or received early.
A reservoir of time constants for memory traces in cortical neurons
TLDR: A flexible memory system is suggested in which neural subpopulations with distinct sets of long or short memory timescales may be selectively deployed according to task demands.