Continuous Online Sequence Learning with an Unsupervised Neural Network Model

  Yuwei Cui, Chetan Surpur, Subutai Ahmad, and Jeff Hawkins. Neural Computation.
Abstract: The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory has recently been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to…

Sequence learning, prediction, and replay in networks of spiking neurons

A continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters, facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities.

Memorize-Generalize: An online algorithm for learning higher-order sequential structure with cloned Hidden Markov Models

It is demonstrated that CHMMs trained with the memorize-generalize algorithm can model long-range structure in bird songs with only a slight degradation in performance compared to expectation-maximization, while still outperforming other representations.

A comparative study of HTM and other neural network models for online sequence learning with streaming data

A comparative study of Hierarchical Temporal Memory (HTM), a neurally inspired model, and other feedforward and recurrent artificial neural network models on both artificial and real-world sequence prediction tasks shows that HTM and long short-term memory (LSTM) give the best prediction accuracy.

Spatiotemporal Sequence Memory for Prediction using Deep Sparse Coding

This work sought to create an artificial model that mimics early, low-level biological predictive behavior in a computer vision system, using spatiotemporal sequence memories learned from deep sparse coding and implemented in a biologically inspired architecture.

A New Hierarchical Temporal Memory Algorithm Based on Activation Intensity

A self-adaptive nonlinear learning strategy is proposed in which the synaptic connections are dynamically adjusted according to the activation intensity of columns, achieving higher accuracy and less time overhead than conventional HTM and LSTM models.

Hierarchical Temporal Memory Introducing Time Axis in Connection Segments

  • Shinichiro Naito, M. Hagiwara
  • Computer Science
    2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS)
  • 2018
An improved Hierarchical Temporal Memory that can capture long-term dependencies is proposed, and it was confirmed that the proposed model can capture longer-term dependencies than the conventional model in temporal sequence prediction.

Using High-Order Prior Belief Predictions in Hierarchical Temporal Memory for Streaming Anomaly Detection

Experimental results suggest that the framework, when built upon HTM, redefines state-of-the-art performance on a popular streaming anomaly benchmark, and comparative results with and without the framework on several third-party datasets collected from real-world scenarios show a clear performance benefit.

Deviant Learning Algorithm: Learning Sparse Mismatch Representations through Time and Space

This paper proposes a novel bio-mimetic computational intelligence algorithm, the Deviant Learning Algorithm, inspired by key ideas and functional properties from recent brain-cognitive discoveries and theories, and shows through numerical experiments guided by theoretical insights how this invention can achieve competitive predictions even with very small problem-specific data.

Learning higher-order sequential structure with cloned HMMs

The experiments show that CHMMs can beat n-grams, sequence memoizers, and LSTMs on character-level language modeling tasks and can be a viable alternative to these methods in some tasks that require variable order sequence modeling and the handling of uncertainty.

A Critical Review of Recurrent Neural Networks for Sequence Learning

The goal of this survey is to provide a self-contained explication of the state of the art of recurrent neural networks, together with a historical perspective and references to primary research.

Long Short-Term Memory

A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
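The "constant error carousel" mentioned above refers to the additive cell-state update, which lets state (and hence error gradients) pass between time steps unattenuated when the forget gate saturates. A minimal single-unit sketch in plain Python; the weight names are illustrative, not taken from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a toy one-unit LSTM cell; w maps weight names to scalars."""
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate value
    c = f * c_prev + i * g        # additive update: the constant error carousel
    h = o * math.tanh(c)
    return h, c
```

With the forget gate saturated at 1 and the input gate near 0, the cell state survives arbitrarily many steps unchanged, which is the mechanism that bridges long time lags.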

Matching Recall and Storage in Sequence Learning with Spiking Neural Networks

A generic learning rule is derived that is matched to the neural dynamics by minimizing an upper bound on the Kullback–Leibler divergence from the target distribution to the model distribution and is consistent with spike-timing dependent plasticity.

Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification, and Spike Shifting

A model of supervised learning for biologically plausible neurons is presented that enables spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli, even in the presence of various sources of noise, and shows that the learning rule can also be used for decision-making tasks.

A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks

The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance on benchmark problems drawn from the regression, classification and time series prediction areas.

Sequence to Sequence Learning with Neural Networks

This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence, which made the optimization problem easier.
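The source-reversal trick described above is a one-line preprocessing step: reversing each source sequence places its first tokens closest to the start of the target, shortening the dependency the encoder-decoder must bridge. A sketch (the function name is my own):

```python
def reverse_sources(pairs):
    """Reverse token order in each source sequence, leaving targets intact."""
    return [(list(reversed(src)), tgt) for src, tgt in pairs]

# the first source word now sits adjacent to the decoder's first target word
pairs = [(["je", "suis", "ici"], ["i", "am", "here"])]
reversed_pairs = reverse_sources(pairs)
```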

A Mathematical Formalization of Hierarchical Temporal Memory’s Spatial Pooler

This work brings together all aspects of the spatial pooler (SP), a critical learning component in HTM, under a single unifying framework, and empirical evidence verifies that, given proper parameterizations, the SP may be used for feature learning.
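As a rough illustration of the component being formalized: the SP maps a binary input to a sparse set of active columns by counting each column's overlap through its "connected" synapses (permanence at or above a threshold) and then applying k-winners-take-all inhibition. A toy sketch under those assumptions; names, parameters, and the learning-free simplification are mine, not from the paper:

```python
def spatial_pooler_step(x, permanences, threshold=0.5, k=2):
    """x: binary input list; permanences: one list of synapse permanences per column.
    Returns the indices of the k columns with the highest overlap."""
    overlaps = []
    for col in permanences:
        # a synapse is "connected" when its permanence crosses the threshold;
        # overlap = number of connected synapses aligned with active input bits
        overlaps.append(sum(xi for xi, p in zip(x, col) if p >= threshold))
    # global inhibition: only the k columns with the highest overlap become active
    winners = sorted(range(len(overlaps)), key=lambda i: overlaps[i], reverse=True)[:k]
    return sorted(winners)
```

A full SP would add Hebbian permanence updates and boosting; this shows only the overlap-and-inhibition core.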

Learned spatiotemporal sequence recognition and prediction in primary visual cortex

This work discovered that repeated presentations of a visual sequence over a course of days resulted in evoked response potentiation in mouse V1 that was highly specific for stimulus order and timing.

A Learning Algorithm for Continually Running Fully Recurrent Neural Networks

The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.

Sparse Distributed Memory

Pentti Kanerva's Sparse Distributed Memory presents a mathematically elegant theory of human long term memory that resembles the cortex of the cerebellum, and provides an overall perspective on neural systems.
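A minimal sketch of Kanerva's scheme: data are written into counters at the many random "hard locations" whose addresses fall within a Hamming radius of the write address, and reads sum and threshold those counters, yielding content-addressable, noise-tolerant recall. All names and parameters below are illustrative:

```python
import random

class SparseDistributedMemory:
    def __init__(self, n_locations, dim, radius, seed=0):
        rng = random.Random(seed)
        self.dim = dim
        self.radius = radius
        # fixed random hard-location addresses (binary vectors)
        self.addresses = [[rng.randint(0, 1) for _ in range(dim)]
                          for _ in range(n_locations)]
        # one bank of bit counters per hard location
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _active(self, address):
        # hard locations within the Hamming radius of the query address
        return [i for i, a in enumerate(self.addresses)
                if sum(x != y for x, y in zip(a, address)) <= self.radius]

    def write(self, address, data):
        # increment counters for 1-bits, decrement for 0-bits, at all active locations
        for i in self._active(address):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, address):
        # pool counters over active locations and threshold at zero
        sums = [0] * self.dim
        for i in self._active(address):
            for j in range(self.dim):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]
```

Because each pattern is smeared across many locations, a read near (not exactly at) the write address still pools mostly the right counters, which is the source of the model's robustness.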