Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks
@article{Giles1992LearningAE,
  title   = {Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks},
  author  = {C. Lee Giles and Clifford B. Miller and Dong Chen and Hsing-Hen Chen and Guo-Zheng Sun and Yee-Chun Lee},
  journal = {Neural Computation},
  year    = {1992},
  volume  = {4},
  pages   = {393-405}
}
We show that a recurrent, second-order neural network using a real-time, forward training algorithm readily learns to infer small regular grammars from positive and negative string training samples. We present simulations that show the effect of initial conditions, training set size and order, and neural network architecture. All simulations were performed with random initial weight strengths and usually converged after approximately a hundred epochs of training. We discuss a quantization…
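The state transition in the paper's second-order network combines the current state vector with a one-hot encoding of the current input symbol through a third-order weight tensor. A minimal NumPy sketch of that update, assuming a sigmoid activation and reading one designated unit as the accept score (the class name and acceptance convention are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SecondOrderRNN:
    """Sketch of a second-order recurrent network: the next state is a
    sigmoid of a bilinear form in the current state vector and the
    one-hot input symbol, via weights W[i, j, k]."""

    def __init__(self, n_states, n_symbols, rng=None):
        rng = rng or np.random.default_rng(0)
        # Random initial weight strengths, as in the reported simulations.
        self.W = rng.uniform(-1.0, 1.0, (n_states, n_states, n_symbols))

    def step(self, state, symbol):
        # S_i(t+1) = g( sum_{j,k} W[i,j,k] * S_j(t) * I_k(t) )
        return sigmoid(np.einsum('ijk,j,k->i', self.W, state, symbol))

    def run(self, string, n_symbols):
        state = np.zeros(self.W.shape[0])
        state[0] = 1.0                      # designated start state
        for sym in string:
            state = self.step(state, np.eye(n_symbols)[sym])
        return state[0]                     # unit 0 read as accept score
```

For a two-symbol alphabet, `SecondOrderRNN(n_states=4, n_symbols=2).run([0, 1, 1], 2)` returns the (untrained) accept score for the string "011".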
489 Citations
First-Order Recurrent Neural Networks and Deterministic Finite State Automata
- Computer Science · Neural Computation
- 1994
The correspondence between first-order recurrent neural networks and deterministic finite state automata is examined in detail, revealing two major stages in the learning process and a clustering-based measure that correlates with the stability of the networks.
Constrained Second-Order Recurrent Networks for Finite-State Automata Induction
- Computer Science
- 1998
A simple modification to the standard error function for second-order dynamical recurrent networks is suggested that encourages these networks to assume natural FSA encodings when trained with gradient descent, and that provides a simple method for guaranteeing the stability of the network on arbitrarily long sequences.
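The summary above does not give the exact form of the modified error function; one plausible sketch, assuming the constraint is a saturation penalty that drives sigmoid state units toward the corners of the unit hypercube (the function name and penalty form are assumptions):

```python
import numpy as np

def constrained_loss(accept_score, target, states, lam=0.1):
    """Task error plus a hypothetical saturation penalty: s*(1-s) is
    zero when a sigmoid unit sits at exactly 0 or 1, so minimizing it
    nudges the network toward discrete, DFA-like state encodings."""
    task_error = 0.5 * (accept_score - target) ** 2
    saturation_penalty = np.sum(states * (1.0 - states))
    return task_error + lam * saturation_penalty
```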
Discrete recurrent neural networks for grammatical inference
- Computer Science · IEEE Trans. Neural Networks
- 1994
A novel neural architecture for learning deterministic context-free grammars, or equivalently deterministic pushdown automata, is described, together with a composite error function that handles the different situations encountered in learning.
Rule Revision With Recurrent Neural Networks
- Computer Science · IEEE Trans. Knowl. Data Eng.
- 1996
Results from training a recurrent neural network to recognize a known, non-trivial, randomly generated regular grammar show that the networks not only preserve correct inserted rules but can also correct, through training, inserted rules that were initially incorrect.
Extracting and Learning an Unknown Grammar with Recurrent Neural Networks
- Computer Science · NIPS
- 1991
Simple second-order recurrent networks are shown to readily learn small known regular grammars when trained with positive and negative string examples. We show that similar methods are appropriate…
Dynamic Adaptation of Recurrent Neural Network Architectures Guided by Symbolic Knowledge
- Computer Science
- 1999
This work proposes a novel method for dynamically adapting the architecture of recurrent neural networks trained to behave like deterministic finite-state automata (DFAs), which relies on the continuous extraction and insertion of symbolic knowledge in the form of DFAs.
Representation and Recognition of Regular Grammars by Means of Second-Order Recurrent Neural Networks
- Computer Science · IWANN
- 1993
This work addresses and solves a basic problem: how to build a neural network recognizer for a given regular language specified by a deterministic finite-state automaton. It employs a second-order recurrent network model, which allows the problem to be formulated as one of solving a linear system of equations.
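With one-hot state vectors, the pre-activation of state unit i given state j and symbol k reduces to the single weight W[i, j, k], so each DFA transition yields one trivially solvable linear constraint. A sketch of such a direct encoding, with target activations approximated by logits of magnitude H (the names and the +/-H scheme are illustrative, not the paper's exact construction):

```python
import numpy as np

def encode_dfa(delta, n_states, n_symbols, H=6.0):
    # delta maps (state j, symbol k) -> next state i. With one-hot
    # states, unit i's pre-activation on (j, k) is exactly W[i, j, k],
    # so each transition becomes one linear equation: drive the target
    # unit's logit to +H and every other unit's logit to -H.
    W = np.full((n_states, n_states, n_symbols), -H)
    for (j, k), i in delta.items():
        W[i, j, k] = H
    return W

# Parity DFA over {0, 1}: state 1 <=> an odd number of 1s seen so far.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W = encode_dfa(delta, n_states=2, n_symbols=2)
```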
Learning a class of large finite state machines with a recurrent neural network
- Computer Science · Neural Networks
- 1995
Extraction of rules from discrete-time recurrent neural networks
- Computer Science · Neural Networks
- 1996
Learning Finite State Machines With Self-Clustering Recurrent Networks
- Computer Science · Neural Computation
- 1993
This paper proposes a new method to force a recurrent neural network to learn stable states by introducing discretization into the network and using a pseudo-gradient learning rule for training; the discretized network has capabilities similar to the original network's in learning finite state automata, but without the instability problem.
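The summary names the ingredients (discretization plus a pseudo-gradient rule) without giving formulas. A straight-through-style sketch of one activation step, in which the forward pass uses a hard 0/1 output while the backward pass reuses the smooth sigmoid's derivative (this specific pairing is an assumption, not necessarily the paper's exact rule):

```python
import numpy as np

def discrete_activation(pre_activation):
    """Forward pass: threshold to a hard 0/1 state, so learned states
    cannot drift on long strings. Backward pass (pseudo-gradient):
    pretend the activation was the smooth sigmoid and use its slope."""
    soft = 1.0 / (1.0 + np.exp(-pre_activation))
    hard = (soft > 0.5).astype(float)
    pseudo_grad = soft * (1.0 - soft)   # d(soft)/d(pre_activation)
    return hard, pseudo_grad
```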
References
Showing 1-10 of 31 references
Discrete recurrent neural networks for grammatical inference
- Computer Science · IEEE Trans. Neural Networks
- 1994
A novel neural architecture for learning deterministic context-free grammars, or equivalently deterministic pushdown automata, is described, together with a composite error function that handles the different situations encountered in learning.
Learning Finite State Machines With Self-Clustering Recurrent Networks
- Computer Science · Neural Computation
- 1993
This paper proposes a new method to force a recurrent neural network to learn stable states by introducing discretization into the network and using a pseudo-gradient learning rule for training; the discretized network has capabilities similar to the original network's in learning finite state automata, but without the instability problem.
Finite State Automata and Simple Recurrent Networks
- Computer Science · Neural Computation
- 1989
This work examines a network architecture introduced by Elman (1988) for predicting successive elements of a sequence, and shows that long-distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.
Heuristics for the extraction of rules from discrete-time recurrent neural networks
- Computer Science · [Proceedings 1992] IJCNN International Joint Conference on Neural Networks
- 1992
Empirical evidence is presented that the generalization performance of recurrent neural networks trained for regular language recognition correlates with the rules that can be extracted from the network, and a heuristic is given that makes it possible to extract good rules from trained networks.
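A sketch of the general quantize-and-search idea behind such extraction heuristics (the equal-width binning per hidden unit and all names here are illustrative, not the paper's specific heuristic):

```python
import numpy as np

def extract_automaton(step_fn, start_state, n_symbols, levels=2, max_states=64):
    # Partition each hidden unit's [0, 1] range into `levels` equal bins;
    # a quantized state vector becomes one extracted DFA state. Then search
    # outward from the start state, one network step per input symbol.
    def quantize(s):
        return tuple(np.minimum((np.asarray(s) * levels).astype(int), levels - 1))

    start = quantize(start_state)
    state_ids = {start: 0}
    transitions = {}
    frontier = [(start, np.asarray(start_state))]
    while frontier and len(state_ids) < max_states:
        q, s = frontier.pop()
        for sym in range(n_symbols):
            s_next = step_fn(s, sym)          # one step of the trained network
            q_next = quantize(s_next)
            if q_next not in state_ids:
                state_ids[q_next] = len(state_ids)
                frontier.append((q_next, s_next))
            transitions[(state_ids[q], sym)] = state_ids[q_next]
    return transitions
```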
Higher Order Recurrent Networks and Grammatical Inference
- Computer Science · NIPS
- 1989
A higher-order, single-layer recurrent network easily learns to simulate a deterministic finite state machine and recognize regular grammars, and can be interpreted as a neural network pushdown automaton.
Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks
- Computer Science
- 1993
This paper discusses various issues: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected.
A Learning Algorithm for Continually Running Fully Recurrent Neural Networks
- Computer Science · Neural Computation
- 1989
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal…
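The bookkeeping that makes this algorithm "real-time" can be written compactly: alongside the state, carry sensitivities p[k, i, j] = dy_k/dW[i, j] and update them forward with the dynamics. A minimal NumPy sketch under the usual sigmoid-unit assumptions (variable names are illustrative):

```python
import numpy as np

def rtrl_step(W, y, x, p):
    """One forward step of real-time recurrent learning (RTRL).
    W: (n, n + m) weights over the n unit outputs and m external inputs.
    y: (n,) current unit outputs;  x: (m,) current external input.
    p: (n, n, n + m) sensitivities p[k, i, j] = d y_k / d W[i, j]."""
    n = W.shape[0]
    a = np.concatenate([y, x])                 # full input to every unit
    y_next = 1.0 / (1.0 + np.exp(-(W @ a)))    # sigmoid units
    g_prime = y_next * (1.0 - y_next)
    # p'[k,i,j] = g'_k * ( sum_l W[k,l] * p[l,i,j] + delta_{k,i} * a_j )
    recurrent = np.einsum('kl,lij->kij', W[:, :n], p)
    inject = np.zeros_like(p)
    inject[np.arange(n), np.arange(n), :] = a  # delta_{k,i} * a_j
    p_next = g_prime[:, None, None] * (recurrent + inject)
    return y_next, p_next
```

The error gradient at each step is then sum_k e_k * p[k, i, j], available online without unrolling the network through time.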
Diversity-based inference of finite automata
- Computer Science · 28th Annual Symposium on Foundations of Computer Science (SFCS 1987)
- 1987
A new procedure for inferring the structure of a finite-state automaton (FSA) from its input/output behavior, using access to the automaton to perform experiments, based on the notion of equivalence between tests.
Induction of Finite-State Languages Using Second-Order Recurrent Networks
- Computer Science · Neural Computation
- 1992
Second-order recurrent networks that recognize simple finite state languages over {0,1}* are induced from positive and negative examples to obtain solutions that correctly recognize strings of arbitrary length.
Complexity of Automaton Identification from Given Data
- Computer Science, Mathematics · Inf. Control.
- 1978