BackPropagation Through Time
@inproceedings{Guo2013BackPropagationTT,
  title  = {BackPropagation Through Time},
  author = {Jiang Guo},
  year   = {2013}
}
This report provides a detailed description of, and the necessary derivations for, the BackPropagation Through Time (BPTT) algorithm. BPTT is often used to train recurrent neural networks (RNNs). In contrast to feed-forward neural networks, an RNN can encode information from further back in the past, which makes it well suited to sequence modeling. BPTT extends the ordinary backpropagation (BP) algorithm to fit the recurrent architecture.

1 Basic Definitions

For a two-layer feed-forward neural network, we…
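Since only the opening of the report is shown above, the following is a minimal sketch of the standard BPTT procedure it describes, written for a vanilla (Elman-style) RNN with tanh hidden units and a squared-error output; the variable names (Wxh, Whh, Why, xs, ts) are illustrative and not taken from the report's own notation.

```python
import numpy as np

def bptt(xs, ts, Wxh, Whh, Why, h0):
    """Forward pass over the sequence, then backpropagation through time.

    xs : list of input vectors x_t   (each shape (n_in,))
    ts : list of target vectors t_t  (each shape (n_out,))
    Wxh: input-to-hidden weights     (n_hid, n_in)
    Whh: hidden-to-hidden weights    (n_hid, n_hid)
    Why: hidden-to-output weights    (n_out, n_hid)
    h0 : initial hidden state        (n_hid,)
    Returns gradients (dWxh, dWhh, dWhy) of the summed squared error.
    """
    # ---- forward pass: store every hidden state and output ----
    hs = {-1: h0}
    ys = {}
    for t, x in enumerate(xs):
        hs[t] = np.tanh(Wxh @ x + Whh @ hs[t - 1])
        ys[t] = Why @ hs[t]

    # ---- backward pass: unroll in reverse and accumulate gradients ----
    dWxh = np.zeros_like(Wxh)
    dWhh = np.zeros_like(Whh)
    dWhy = np.zeros_like(Why)
    dh_next = np.zeros_like(h0)          # error flowing back from step t+1

    for t in reversed(range(len(xs))):
        dy = ys[t] - ts[t]               # d(0.5*||y - t||^2) / dy
        dWhy += np.outer(dy, hs[t])
        dh = Why.T @ dy + dh_next        # output error + recurrent error
        dz = (1.0 - hs[t] ** 2) * dh     # backprop through tanh
        dWxh += np.outer(dz, xs[t])
        dWhh += np.outer(dz, hs[t - 1])
        dh_next = Whh.T @ dz             # pass error to the previous step

    return dWxh, dWhh, dWhy
```

The only difference from ordinary BP is the dh_next term: it carries the error back through the recurrent connection, so the gradient at step t accumulates contributions from all later time steps once the sequence is fully unrolled.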
22 Citations
- On the use of phone-gram units in recurrent neural networks for language identification. Odyssey, 2016.
- Suitable Recurrent Neural Network for Air Quality Prediction With Backpropagation Through Time. 2018 2nd International Conference on Informatics and Computational Sciences (ICICoS), 2018.
- Modification and Parallelization of Genetic Algorithm for Synthesis of Artificial Neural Networks. 2019.
- The implementation of a Deep Recurrent Neural Network Language Model on a Xilinx FPGA. ArXiv, 2017.
- Prediction of Li-Ion Battery State of Charge Using Multilayer Perceptron and Long Short-Term Memory Models. 2019 IEEE Transportation Electrification Conference and Expo (ITEC), 2019.
- Multilayer Long Short-Term Memory (LSTM) Neural Networks in Time Series Analysis. 2020 55th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), 2020.
- Learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks. Robotics Auton. Syst., 2018.
- Fuel Price Prediction Using RNN. 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2018.