• Corpus ID: 222134113

A fast memoryless predictive algorithm in a chain of recurrent neural networks

@article{Rubinstein2020AFM,
  title={A fast memoryless predictive algorithm in a chain of recurrent neural networks},
  author={Boris Y. Rubinstein},
  journal={arXiv: Dynamical Systems},
  year={2020}
}
  • B. Rubinstein
  • Published 5 October 2020
  • Computer Science
  • arXiv: Dynamical Systems
In a recent publication (arxiv:2007.08063v2 [cs.LG]) a fast prediction algorithm for a single recurrent network (RN) was suggested. In this manuscript we generalize this approach to a chain of RNs and show that it can be implemented in natural neural systems. When the network is used recursively to predict a sequence of values, the proposed algorithm does not require storing the original input sequence. This increases the robustness of the new approach compared to the standard moving/expanding window…
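The key idea in the abstract is closed-loop (recursive) prediction: once the observed values have been consumed, each predicted value is fed back as the next input, so only the network's hidden state has to be kept between steps. The sketch below illustrates that idea with a plain NumPy vanilla-RNN cell; the weights, sizes, and function names are illustrative placeholders, not the paper's actual chain of RNs or its training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained placeholder weights for a single recurrent network (RN) with a
# scalar input/output and a small hidden state; in the paper these would be
# the weights of a trained network in the chain.
H = 8                                        # hidden size (arbitrary choice)
W_xh = rng.normal(scale=0.3, size=(H, 1))    # input  -> hidden
W_hh = rng.normal(scale=0.3, size=(H, H))    # hidden -> hidden
W_hy = rng.normal(scale=0.3, size=(1, H))    # hidden -> output


def rn_step(x, h):
    """One step of a vanilla RNN: update the hidden state, emit a prediction."""
    h_new = np.tanh(W_xh @ x + W_hh @ h)
    y = W_hy @ h_new
    return y, h_new


def predict_recursively(seed_sequence, n_future):
    """Closed-loop ("memoryless") prediction: after the seed is consumed,
    each prediction is fed back as the next input.  Only the hidden state
    is carried between steps -- the original input sequence is not kept."""
    h = np.zeros((H, 1))
    y = np.zeros((1, 1))
    for value in seed_sequence:              # warm up on the observed data
        y, h = rn_step(np.array([[value]]), h)
    future = []
    for _ in range(n_future):                # free-running, no stored window
        y, h = rn_step(y, h)
        future.append(float(y[0, 0]))
    return future


print(predict_recursively([0.0, 0.5, 0.9, 1.0, 0.9, 0.5], n_future=5))
```

The point of the sketch is the memory profile: the free-running loop touches only `y` and `h`, so the observed sequence can be discarded as soon as it has been consumed.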

Citations

On a novel training algorithm for sequence-to-sequence predictive recurrent networks

A novel memoryless algorithm for seq2seq predictive networks is presented and it is shown that the new algorithm is more robust and makes predictions with higher accuracy than the traditional one.

It's a super deal - train recurrent network on noisy data and get smooth prediction free

An explanation of the observed noise compression in the predictive process is proposed, and the importance of this property of recurrent networks for the evolution of living organisms is discussed in a neuroscience context.
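For contrast, the "traditional" approach mentioned above and in the abstract is the moving/expanding-window scheme, in which the full observed-plus-predicted sequence is stored and re-fed to the model at every step. The sketch below is a generic illustration of that baseline with a toy stand-in for the network (it is not taken from any of the cited papers); its growing storage and per-step cost are what the memoryless algorithm avoids.

```python
import numpy as np


def predict_with_expanding_window(seed_sequence, n_future, model):
    """Baseline recursive prediction with an expanding window: the full
    (observed + predicted) sequence is stored, and at every step the whole
    window is re-fed to the model.  Memory and per-step cost grow with the
    window length."""
    window = list(seed_sequence)                 # the stored sequence
    for _ in range(n_future):
        next_value = model(np.asarray(window))   # model maps a window to one value
        window.append(float(next_value))
    return window[len(seed_sequence):]


# Toy stand-in "model": predict the mean of the last three values.
toy_model = lambda w: w[-3:].mean()
print(predict_with_expanding_window([0.0, 0.5, 0.9, 1.0, 0.9, 0.5], 5, toy_model))
```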

References

A fast noise filtering algorithm for time series prediction using recurrent neural networks

A new approximate algorithm is proposed that significantly speeds up the predictive process without loss of accuracy and is based on the internal dynamics of RNNs.

Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

Advanced recurrent units that implement a gating mechanism, namely the long short-term memory (LSTM) unit and the more recently proposed gated recurrent unit (GRU), are evaluated on sequence modeling tasks, and the GRU is found to be comparable to the LSTM.
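As a reminder of what the gating mechanism evaluated in this reference looks like, here is a minimal NumPy sketch of a single GRU step, following the gate convention of Chung et al. (2014); the parameters are random, untrained placeholders chosen only for illustration.

```python
import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


def gru_cell(x, h, p):
    """One GRU step: an update gate z decides how much of the old state to
    overwrite, a reset gate r decides how much of it feeds the candidate."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)               # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)               # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde                   # interpolate old/new


# Illustrative random parameters (untrained): input size 1, hidden size 4.
rng = np.random.default_rng(1)
p = {k: rng.normal(scale=0.3, size=(4, 1) if k.startswith("W") else (4, 4))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
h = np.zeros((4, 1))
for x in (0.1, 0.4, 0.2):
    h = gru_cell(np.array([[x]]), h, p)
print(h.ravel())
```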

Long Short-Term Memory

A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
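The "constant error carousel" is the additive update of the cell state, which gives error signals a near-linear path through time. Below is a minimal NumPy sketch of one LSTM step; note that it uses the now-standard variant with a forget gate (introduced after the original 1997 paper), and the parameters are random placeholders.

```python
import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


def lstm_cell(x, h, c, p):
    """One LSTM step.  The cell state c is updated additively (c = f*c + i*g);
    this near-linear path is the "constant error carousel" that lets gradients
    flow across long time lags."""
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h)      # input gate
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h)      # forget gate (post-1997 addition)
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h)      # output gate
    g = np.tanh(p["Wg"] @ x + p["Ug"] @ h)      # candidate cell input
    c_new = f * c + i * g                       # constant error carousel
    h_new = o * np.tanh(c_new)
    return h_new, c_new


# Illustrative random parameters (untrained): input size 1, hidden size 4.
rng = np.random.default_rng(2)
p = {k: rng.normal(scale=0.3, size=(4, 1) if k.startswith("W") else (4, 4))
     for k in ("Wi", "Ui", "Wf", "Uf", "Wo", "Uo", "Wg", "Ug")}
h, c = np.zeros((4, 1)), np.zeros((4, 1))
for x in (0.1, 0.4, 0.2):
    h, c = lstm_cell(np.array([[x]]), h, c, p)
print(h.ravel())
```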

Attention is All you Need

A new, simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it is shown to generalize well to other tasks by being applied successfully to English constituency parsing with both large and limited training data.
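The core operation the Transformer uses in place of recurrence is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A minimal single-head NumPy sketch (without masking or the multi-head projections) follows; the shapes and values are purely illustrative.

```python
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, computed row-wise over query positions."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V


rng = np.random.default_rng(3)
Q = rng.normal(size=(5, 8))   # 5 query positions, d_k = 8
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(Q, K, V).shape)   # (5, 8)
```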

Language Models are Few-Shot Learners

GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.

Long Short-Term Memory, Neural Computation

  • 1997