• Corpus ID: 9994916

BackPropagation Through Time

  • Jiang Guo
This report provides a detailed description of, and the necessary derivations for, the BackPropagation Through Time (BPTT) algorithm. BPTT is commonly used to train recurrent neural networks (RNNs). Unlike feed-forward neural networks, an RNN can encode information from further back in the past, which makes it well suited to sequential models. BPTT extends the ordinary BP algorithm to the recurrent neural architecture. 1 Basic Definitions: For a two-layer feed-forward neural network, we…
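The algorithm the abstract describes can be illustrated with a minimal NumPy sketch: a single-layer vanilla RNN with tanh hidden units and a squared-error loss, unrolled over a sequence, with gradients accumulated backward through time. The function and weight names (`bptt`, `W_hh`, `W_xh`) are illustrative, not the report's own notation.

```python
import numpy as np

def bptt(W_hh, W_xh, xs, ys):
    """Forward pass over the whole sequence, then backpropagation
    through time, accumulating gradients for both weight matrices.
    Model: h_t = tanh(W_hh h_{t-1} + W_xh x_t), loss = 0.5 * sum ||h_t - y_t||^2."""
    H = W_hh.shape[0]
    hs = [np.zeros(H)]                      # h_0 = 0
    for x in xs:                            # unroll forward through time
        hs.append(np.tanh(W_hh @ hs[-1] + W_xh @ x))

    dW_hh = np.zeros_like(W_hh)
    dW_xh = np.zeros_like(W_xh)
    dh_next = np.zeros(H)                   # gradient arriving from step t+1
    loss = 0.0
    for t in reversed(range(len(xs))):      # backward through time
        loss += 0.5 * np.sum((hs[t + 1] - ys[t]) ** 2)
        dh = (hs[t + 1] - ys[t]) + dh_next  # local error + recurrent error
        dz = dh * (1.0 - hs[t + 1] ** 2)    # tanh derivative
        dW_hh += np.outer(dz, hs[t])        # hs[t] is the previous hidden state
        dW_xh += np.outer(dz, xs[t])
        dh_next = W_hh.T @ dz               # pass gradient to step t-1
    return loss, dW_hh, dW_xh

# Tiny usage: random weights, a 5-step sequence of 2-d inputs, 3-d targets.
rng = np.random.default_rng(0)
W_hh = 0.1 * rng.standard_normal((3, 3))
W_xh = 0.1 * rng.standard_normal((3, 2))
xs = rng.standard_normal((5, 2))
ys = rng.standard_normal((5, 3))
loss, dW_hh, dW_xh = bptt(W_hh, W_xh, xs, ys)
```

The analytic gradients can be checked against a finite-difference estimate of the loss, which is the standard sanity test for a hand-derived BPTT implementation.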


Non-Autoregressive vs Autoregressive Neural Networks for System Identification
Comparisons with other state-of-the-art black-box system identification methods show that the non-autoregressive GRU implementation is the best-performing neural-network-based system identification method and, in the benchmarks without extrapolation, the best-performing black-box method.
Artificial neural networks for artificial intelligence
Artificial neural networks now have a long history as major techniques in computational intelligence with a wide range of applications for learning from data. There are many methods developed and
Long Short-term Memory RNN
This paper introduces the LSTM cell's architecture, explains how its different components work together to alter the cell's memory and predict the output, and provides the formulas and foundations needed to calculate a forward iteration through an LSTM.
On the use of phone-gram units in recurrent neural networks for language identification
The phonotactic system performs ~13% better than the unigram-based RNNLM system and is calibrated and fused with other scores from an acoustic-based i-vector system and a traditional PPRLM system, showing that they provide complementary information to the LID system.
Evolving and Spiking Connectionist Systems for Brain-Inspired Artificial Intelligence
  • N. Kasabov
  • Computer Science
    Artificial Intelligence in the Age of Neural Networks and Brain Computing
  • 2019
This chapter starts with a brief review of AI methods, from Aristotle's logic to the classical artificial neural networks (ANN) and hybrid systems used for AI today, and concludes that knowing and combining the various AI and ANN methods, while drawing more inspiration from neuroscience to create new methods, is the way forward for future research.
Suitable Recurrent Neural Network for Air Quality Prediction With Backpropagation Through Time
The BPTT algorithm is applied to an Elman RNN, a Jordan RNN, and a hybrid network architecture, comparing their ability to predict time-series data of air-pollutant concentrations and thus determine whether future air quality conditions will be good or bad for health and the environment.
Rapid phase-resolved prediction of nonlinear dispersive waves using machine learning
In this paper, we show that a revised convolutional recurrent neural network (CRNN) can decrease, by orders of magnitude, the time needed for the phase-resolved prediction of waves in a
Context. The problem of the automated synthesis of artificial neural networks for further use in diagnosis, forecasting, and pattern recognition is solved. The object of the study was the process of
Term Extraction as Sequence Labeling Task using Recurrent Neural Networks
Terminology extraction is mostly performed in two steps. In the first step, the text is filtered in order to identify word groups, which fit predefined syntactic patterns. In the second step, these
The implementation of a Deep Recurrent Neural Network Language Model on a Xilinx FPGA
It is found that the DRNN language model can be deployed smoothly on the embedded system, and that the Overlay accelerator with an AXI Stream interface achieves 20 GOPS of processing throughput, a 70.5X and 2.75X speedup compared to the work in Ref. 30 and Ref. 31, respectively.


Recurrent neural network based language model
Results indicate that it is possible to obtain around a 50% reduction in perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model.
Learning task-dependent distributed representations by backpropagation through structure
  • C. Goller, A. Küchler
  • Computer Science
    Proceedings of International Conference on Neural Networks (ICNN'96)
  • 1996
A connectionist architecture together with a novel supervised learning scheme which is capable of solving inductive inference tasks on complex symbolic structures of arbitrary size is presented.