Extracting Finite Structure from Infinite Language

  • T. McQueen, Adrian A. Hopgood, Tony J. Allen and Jonathan A. Tepper
  • SGAI Conf.
This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map [1] with laterally interconnected neurons. A derivation of functional-equivalence theory [2] is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence…
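The temporal self-organizing map underlying this model can be sketched in miniature. The following is an illustration only, not the paper's exact architecture: the unit count, leak rate, and learning rate are assumed values, and the lateral interconnections are simplified to a leaky activation that carries temporal context from one step to the next.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes and rates (assumed, not from the paper).
n_units, dim = 10, 3
alpha, lr = 0.5, 0.1          # leak rate and learning rate
weights = rng.random((n_units, dim))
activity = np.zeros(n_units)  # leaky-integrated unit activations

def step(x, activity):
    # Leaky integration of (negative) distances: the winner reflects
    # the recent input history, not just the current input vector.
    dist = np.sum((weights - x) ** 2, axis=1)
    activity = (1 - alpha) * activity - alpha * dist
    winner = int(np.argmax(activity))
    weights[winner] += lr * (x - weights[winner])  # move winner toward input
    return winner, activity

seq = rng.random((5, dim))
winners = []
for x in seq:
    w, activity = step(x, activity)
    winners.append(w)
print(winners)
```

Because activation leaks across steps, two sequences with similar histories tend to select similar winners, which is the kind of context similarity the memory-rule model exploits.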
Segmentation of DNA using simple recurrent neural network
Application of the NOK method in sentence modelling
This application is based on sentences from one of Aesop's Fables in Croatian and English; the observed similarities and differences are conditioned partly by the translators' freedom, and not only by differences in the syntax of the two languages.
Homonyms and Synonyms in NOK Method (DAAAM 2014, Intelligent Manufacturing and Automation)
This paper presents a metamodel for writing homonyms and synonyms into the dictionary whose words are used for modeling, and demonstrates that the NOK method can be applied to modeling dictionaries of natural languages.
Finite State Automata and Simple Recurrent Networks
This paper examines a network architecture introduced by Elman (1988) for predicting successive elements of a sequence and shows that long-distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.
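The Elman architecture described above can be sketched as a minimal forward pass. This is a hedged toy version: the alphabet size, hidden width, and random weights are illustrative assumptions, and no training loop is shown, only how the recurrent context units feed the next-symbol prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4-symbol alphabet, 8 hidden ("context") units.
n_in, n_hid = 4, 8
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden (recurrent)
W_hy = rng.normal(scale=0.5, size=(n_in, n_hid))   # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def srn_forward(sequence):
    """Run an Elman-style forward pass over a sequence of symbol indices,
    returning a next-symbol distribution at each step."""
    h = np.zeros(n_hid)                    # context units start at zero
    outputs = []
    for s in sequence:
        x = np.eye(n_in)[s]                # one-hot encode the current symbol
        h = np.tanh(W_xh @ x + W_hh @ h)   # hidden state mixes input and context
        outputs.append(softmax(W_hy @ h))  # prediction over next symbols
    return outputs

preds = srn_forward([0, 1, 2, 3])
print(len(preds), preds[-1].shape)
```

The copied-back hidden state is the only memory the network has, which is why the encoding of long-distance contingencies examined in this paper is notable.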
A Recurrent Self-Organizing Map for Temporal Sequence Processing
An unsupervised, recurrent neural network based on a self-organizing map that has been applied to the difficult natural language processing problem of position-variant recognition, e.g. recognising a noun phrase regardless of its position within a sentence.
Fool's Gold: Extracting Finite State Machines from Recurrent Network Dynamics
How sensitivity to initial conditions and discrete measurements can trick these extraction methods to return illusory finite state descriptions is described.
Natural Language Grammatical Inference with Recurrent Neural Networks
It was found that certain architectures are better able to learn an appropriate grammar than others, and the extraction of rules in the form of deterministic finite state automata is investigated.
Finding Structure in Time
This paper develops a proposal along these lines, first described by Jordan (1986), that involves the use of recurrent links to provide networks with a dynamic memory, and suggests a method for representing lexical categories and the type/token distinction.
Recursive self-organizing maps
Learning long-term dependencies with gradient descent is difficult
This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on to information for long periods.
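The difficulty described above can be illustrated numerically. This is a toy sketch with assumed values (a scalar recurrence h_t = tanh(w · h_{t-1}) with w = 0.9), not the paper's analysis: the chain-rule factor w · (1 − h_t²) at each step is below one, so the back-propagated gradient shrinks geometrically with the length of the dependency.

```python
import numpy as np

# Scalar recurrence h_t = tanh(w * h_{t-1}); back-propagating through
# T steps multiplies the gradient by w * (1 - h_t**2) at each step.
w = 0.9        # assumed recurrent weight
h, grad = 0.5, 1.0
grads = []
for t in range(30):
    h = np.tanh(w * h)
    grad *= w * (1 - h ** 2)   # chain-rule factor for this step
    grads.append(grad)

print(grads[0], grads[-1])     # gradient magnitude decays with depth
```

With every factor bounded by w < 1, the signal from an input 30 steps back is orders of magnitude weaker than a recent one, which is the latching trade-off the paper exposes.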
Models of Language Acquisition: Inductive and Deductive Approaches
This book presents an Output-as-Input Hypothesis for Language Acquisition: Arguments, Model, Evidence and a Cross-Linguistic Comparison of Single and Dual-Route Models of Inflectional Morphology.
Field Guide to Dynamical Recurrent Networks
This book surveys the range of dynamical recurrent network (DRN) architectures it covers, transforming a collection of papers into a coherent book.
An historical overview of natural language processing systems that learn
  • R. Collier
  • Computer Science
    Artificial Intelligence Review
  • 2004
A range of the systems developed in the domain of machine learning and natural language processing are referenced and overviewed; each system is categorised into either a symbolic or connectionist paradigm and has its characteristics and limitations described.