Corpus ID: 4969733

Nonlinear Systems Identification Using Deep Dynamic Neural Networks

@article{Ogunmolu2016NonlinearSI,
  title={Nonlinear Systems Identification Using Deep Dynamic Neural Networks},
  author={Olalekan P. Ogunmolu and Xuejun Gu and Steve B. Jiang and Nicholas R. Gans},
  journal={ArXiv},
  year={2016},
  volume={abs/1610.01439}
}
Neural networks are known to be effective function approximators. […]

Key Result: We demonstrate that deep neural networks are effective model estimators from input-output data.
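As a hedged illustration of that key result (a sketch, not the authors' code), the snippet below fits a small feed-forward network to map a window of past inputs and outputs to the next output, a NARX-style one-step-ahead model; the window length H, layer sizes, and optimizer settings are assumed for illustration.

import torch
import torch.nn as nn

H = 10  # regressor window length (assumed)

# Deep feed-forward estimator: (H past inputs, H past outputs) -> next output.
model = nn.Sequential(
    nn.Linear(2 * H, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(u, y):
    # u, y: 1-D tensors of plant input/output samples.
    X = torch.stack([torch.cat([u[t - H:t], y[t - H:t]])
                     for t in range(H, len(y))])
    Y = y[H:].unsqueeze(1)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()
    return loss.item()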

Citations

Nonlinear system identification using a recurrent network in a Bayesian framework
TLDR
This work stacks a recurrent neural network with a probabilistic layer, decomposing the nonlinear dynamic model into a combination of flexible functions while retaining a Bayesian framework, and deploys a scalable technique based on variational inference to handle the intractability of exact inference.
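A minimal sketch of that construction, with every architectural detail assumed rather than taken from the paper: recurrent features feed a probabilistic output layer whose weight posterior is fit by reparameterized variational inference against a standard-normal prior.

import torch
import torch.nn as nn

rnn = nn.GRU(input_size=1, hidden_size=32, batch_first=True)
# Mean-field Gaussian posterior q(w) = N(mu, sigma^2) over output weights.
mu = nn.Parameter(torch.zeros(32, 1))
rho = nn.Parameter(torch.full((32, 1), -3.0))  # sigma = softplus(rho)

def elbo_loss(u, y):
    # u, y: (B, T, 1) input and output sequences.
    feats, _ = rnn(u)
    sigma = nn.functional.softplus(rho)
    w = mu + sigma * torch.randn_like(mu)  # reparameterized weight sample
    pred = feats @ w
    nll = nn.functional.mse_loss(pred, y, reduction="sum")  # Gaussian NLL up to a constant
    kl = 0.5 * (sigma**2 + mu**2 - 2 * torch.log(sigma) - 1).sum()  # KL(q || N(0, I))
    return nll + kl  # negative ELBO to minimize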
System Identification Through Lipschitz Regularized Deep Neural Networks
Learning System Dynamics via Deep Recurrent and Conditional Neural Systems
TLDR
Both an LSTM-based recurrent deep learning method and a CNMP-based conditional deep learning method were used to learn the dynamics of the selected system from time-series data.
Neural Ordinary Differential Equations for Nonlinear System Identification
TLDR
The experiments show that NODEs can consistently improve prediction accuracy by an order of magnitude over benchmark methods and are less sensitive to hyperparameters than neural state-space models.
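The NODE mechanism itself can be sketched as below; this is an illustrative toy under assumed state and input dimensions, not the paper's implementation, and a fixed-step Euler integrator stands in for a proper adaptive ODE solver.

import torch
import torch.nn as nn

# Learnable vector field dx/dt = f(x, u), here with 2 states and 1 input (assumed).
f = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))

def rollout(x0, u_seq, dt=0.01):
    # x0: (2,) initial state; u_seq: (T, 1) input sequence.
    xs, x = [], x0
    for u in u_seq:
        x = x + dt * f(torch.cat([x, u]))  # one explicit Euler step
        xs.append(x)
    return torch.stack(xs)  # predicted state trajectory, (T, 2)

Training then backpropagates a trajectory loss through the integrator into the parameters of f.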
Recurrent neural network-based Internal Model Control of unknown nonlinear stable systems
TLDR
This paper discusses how gated Recurrent Neural Networks can be adopted for the synthesis of Internal Model Control (IMC) architectures, using a first gated RNN to learn a model of the unknown, input-output stable plant.
Modeling Dynamic Systems for Multi-Step Prediction with Recurrent Neural Networks
TLDR
It is shown that the RNN state initialization problem can be addressed by creating and training an initialization network jointly with the multi-step prediction network; the combination can be used in a black-box modeling approach such that the model produces immediately accurate predictions.
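A minimal sketch of that initialization idea, with sizes and cell types assumed: an encoder consumes a short warm-up window of past input/output pairs to produce the predictor's initial hidden state, and the two networks are trained jointly on multi-step prediction error.

import torch
import torch.nn as nn

hidden = 32
init_net = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
pred_net = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
readout = nn.Linear(hidden, 1)

def predict(history, future_u):
    # history: (B, Tw, 2) past [u, y] pairs; future_u: (B, Tp, 1) future inputs.
    _, h0 = init_net(history)        # encode the warm-up window into h0
    out, _ = pred_net(future_u, h0)  # multi-step rollout from h0
    return readout(out)              # (B, Tp, 1) predicted outputs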
Dynamical System Parameter Identification using Deep Recurrent Cell Networks
TLDR
This study’s results show that bidirectional gated recurrent cells (BiLSTMs) provide better parameter identification than unidirectional gated recurrent memory cells such as GRUs and LSTMs, indicating that an input/output sequence pair of finite length collected from a dynamical system, when observed anachronistically, may carry information in both time directions for predicting the system's parameters.
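As a rough illustration of the bidirectional idea (shapes and parameter count assumed, not taken from the paper), a bidirectional recurrent encoder reads the input/output sequence in both time directions before a linear head regresses the system parameters.

import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=2, hidden_size=32, batch_first=True,
                  bidirectional=True)
head = nn.Linear(2 * 32, 3)  # 3 = number of physical parameters (assumed)

def estimate(seq):
    # seq: (B, T, 2) stacked input/output samples.
    feats, _ = encoder(seq)    # (B, T, 64): forward and backward features
    return head(feats[:, -1])  # parameter estimate from the final step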
On Using Gated Recurrent Units for Nonlinear System Identification
TLDR
This paper evaluates gated recurrent architectures from the viewpoint of system identification and tests their performance on a nonlinear system identification task.
On the stability properties of Gated Recurrent Units neural networks
…

References

Showing 1–10 of 25 references
Neural networks and dynamical systems
A fully automated recurrent neural network for unknown dynamic system identification and control
This paper presents a fully automated recurrent neural network (FARNN) that is capable of self-structuring its network in a minimal representation with satisfactory performance for unknown dynamic system identification and control.
Dynamic neural network-based robust identification and control of a class of nonlinear systems
TLDR
A methodology for dynamic neural network (DNN) identification-based control of nonlinear systems is proposed, with new weight-update laws for the DNN that guarantee asymptotic regulation of the identification error to zero.
Learning long-term dependencies with gradient descent is difficult
TLDR
This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching onto information for long periods.
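A toy numeric demonstration of that effect, with all sizes assumed: backpropagating through T steps multiplies the gradient by the recurrent Jacobian T times, so its norm shrinks geometrically once the recurrent weights' spectral norm is below one (and explodes when it is above one).

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]  # scale spectral norm to 0.9
g = rng.standard_normal(16)                       # gradient arriving at step T

for T in (1, 10, 50, 100):
    v = g.copy()
    for _ in range(T):
        v = W.T @ v  # one backward step through the (linearized) recurrence
    print(T, np.linalg.norm(v))  # decays roughly like 0.9**T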
Training recurrent neural networks
TLDR
A new probabilistic sequence model that combines Restricted Boltzmann Machines and RNNs is described, more powerful than similar models while being less difficult to train; a random parameter initialization scheme is also described that allows gradient descent with momentum to train RNNs on problems with long-term dependencies.
Dropout: a simple way to prevent neural networks from overfitting
TLDR
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
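A minimal sketch of the mechanism, in the now-common "inverted" form (the original paper instead rescales weights at test time): each unit is zeroed with probability p during training and the survivors are rescaled so expected activations match between training and test.

import numpy as np

def dropout(h, p=0.5, train=True, rng=np.random.default_rng()):
    if not train:
        return h                     # identity at test time
    mask = rng.random(h.shape) >= p  # keep each unit with probability 1 - p
    return h * mask / (1.0 - p)      # rescale survivors to preserve the mean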
Speech recognition with deep recurrent neural networks
TLDR
This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs.
On rectified linear units for speech processing
TLDR
This work shows that generalization can be improved and training of deep networks made faster and simpler by substituting logistic units with rectified linear units.
On the approximate realization of continuous mappings by neural networks
Recurrent Neural Network Regularization
TLDR
This paper shows how to correctly apply dropout to LSTMs and demonstrates that it substantially reduces overfitting on a variety of tasks.
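The recipe can be sketched as follows, under assumed layer sizes: dropout is applied only on the non-recurrent (layer-to-layer) connections of a stacked LSTM, leaving the recurrent state transitions untouched.

import torch
import torch.nn as nn

class RegularizedLSTM(nn.Module):
    def __init__(self, d_in=1, hidden=64, p=0.3):
        super().__init__()
        self.l1 = nn.LSTM(d_in, hidden, batch_first=True)
        self.l2 = nn.LSTM(hidden, hidden, batch_first=True)
        self.drop = nn.Dropout(p)  # used between layers only

    def forward(self, x):
        h1, _ = self.l1(x)              # recurrent path left undisturbed
        h2, _ = self.l2(self.drop(h1))  # dropout on the layer-to-layer link
        return self.drop(h2)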