Finding Structure in Time

@article{Elman1990FindingSI,
  title={Finding Structure in Time},
  author={Jeffrey L. Elman},
  journal={Cogn. Sci.},
  year={1990},
  volume={14},
  pages={179-211}
}
  • J. Elman
  • Published 1 March 1990
  • Psychology
  • Cogn. Sci.
Time underlies many interesting human behaviors. […] In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory…
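The feedback scheme described in the abstract — copying the hidden-unit pattern into context units that are fed back as extra input on the next time step — can be sketched in a few lines. The toy Python script below illustrates that idea on the paper's temporal XOR prediction task; the layer sizes, learning rate, number of training steps, and single-step error-driven updates are assumptions of this sketch, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def temporal_xor_stream(n_triples):
    # Every third bit is the XOR of the two random bits before it.
    bits = []
    for _ in range(n_triples):
        a, b = rng.integers(0, 2, size=2)
        bits.extend([a, b, a ^ b])
    return np.array(bits, dtype=float)

stream = temporal_xor_stream(3000)

n_in, n_hidden, n_out = 1, 5, 1
W_xh = rng.normal(0.0, 0.5, (n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(0.0, 0.5, (n_hidden, n_hidden))  # context (previous hidden) -> hidden
W_hy = rng.normal(0.0, 0.5, (n_out, n_hidden))     # hidden -> output
b_h = np.zeros(n_hidden)
b_y = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
h_prev = np.zeros(n_hidden)   # context units start at zero
errors = []

# Predict the next bit. As in Elman's scheme, the previous hidden pattern is
# copied into the context units and treated as just another input, so the
# error is not propagated back through time.
for t in range(len(stream) - 1):
    x = stream[t:t + 1]
    target = stream[t + 1:t + 2]

    h = sigmoid(W_xh @ x + W_hh @ h_prev + b_h)
    y = sigmoid(W_hy @ h + b_y)
    errors.append(float(np.abs(y[0] - target[0])))

    # One-step gradients for a squared-error loss.
    err_y = (y - target) * y * (1 - y)
    err_h = (W_hy.T @ err_y) * h * (1 - h)
    W_hy -= lr * np.outer(err_y, h)
    b_y  -= lr * err_y
    W_xh -= lr * np.outer(err_h, x)
    W_hh -= lr * np.outer(err_h, h_prev)
    b_h  -= lr * err_h

    h_prev = h   # copy the hidden pattern into the context for the next step

# Steps where t % 3 == 1 predict the XOR bit; if the context is doing its job,
# error there should end up lower than on the unpredictable random bits.
errors = np.array(errors)
xor_steps = (np.arange(len(errors)) % 3 == 1)
half = len(errors) // 2
print("mean |error| on XOR bits:   ", errors[half:][xor_steps[half:]].mean())
print("mean |error| on random bits:", errors[half:][~xor_steps[half:]].mean())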

Citations

Incremental sequence learning
TLDR
An alternative model based on Maskara & Noetzel's (1991) Auto-Associative Recurrent Network is suggested as a way to overcome the SRN model’s failure to account for human performance in several experimental situations meant to test the model's specific predictions.
Doing Without Schema Hierarchies : A Recurrent Connectionist Approach to Routine Sequential Action and Its Pathologies
TLDR
This work considers an alternative framework in which the representation of temporal context depends on learned, recurrent connections within a network that maps from environmental inputs to actions, and indicates that recurrent connectionist models offer a useful framework for understanding routine sequential action.
Latent Attractors: A General Paradigm for Context-Dependent Neural Computation
TLDR
This chapter describes an approach called latent attractors that allows self-organizing neural systems to simultaneously incorporate both Type I and Type II context dependency, and argues that the latent attractor approach is a general and flexible method for incorporating multi-scale temporal dependence into neural systems, and possibly other self-organized systems.
What connectionist models learn: Learning and representation in connectionist networks
TLDR
It is shown that connectionist models can be used to explore systematically the complex interaction between learning and representation, as it is demonstrated through the analysis of several large networks.
Representation of Temporal Patterns in Recurrent Networks
TLDR
This paper investigates the way temporal patterns are represented by recurrent networks by establishing the relationship between the temporal characteristics of the training set and the representations developed by the network.
Recurrent Neural Networks for Temporal Sequences Recognition
TLDR
This report presents various tasks based on temporal pattern processing and the different neural network architectures simulated to tackle them.
Toward a connectionist model of recursion in human linguistic performance
TLDR
This work suggests a novel explanation of people’s limited recursive performance, without assuming the existence of a mentally represented competence grammar allowing unbounded recursion.
The representation of structure in sequence prediction tasks
TLDR
Simulation studies are presented in which connectionist networks are trained to predict the last event of the sequences under the same conditions as human subjects, and it is suggested that the kind of representations developed by connectionist models are intermediate between abstract representations and exemplar-based representations, and that these two extreme forms of representation are points on a continuum.

References

SHOWING 1-10 OF 111 REFERENCES
Learning Subsequential Structure in Simple Recurrent Networks
TLDR
A network architecture introduced by Elman (1988) for predicting successive elements of a sequence is examined, with the pattern of activation over a set of hidden units illustrated by cluster analyses performed at different points during training.
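The cluster analyses mentioned in this entry (and used throughout Elman's paper) group hidden-unit activation vectors by similarity. Below is a minimal sketch of that analysis step, assuming SciPy is available; the item names and activation vectors are made-up stand-ins for states recorded from a trained network.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Hypothetical input items and stand-in hidden-state vectors (one row per item);
# in a real analysis each row would be the hidden-unit activation pattern the
# trained network produces for that item.
items = ["dog", "cat", "mouse", "eat", "chase", "smash"]
hidden_states = rng.random((len(items), 4))

Z = linkage(hidden_states, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into two clusters
for item, lab in zip(items, labels):
    print(f"{item}: cluster {lab}")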
On the proper treatment of connectionism
Abstract
A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models…
Distributed Representations
TLDR
This report describes a different type of representation that is less familiar and harder to think about than local representations, which makes use of the processing abilities of networks of simple, neuron-like computing elements.
Parallel Networks that Learn to Pronounce
TLDR
Hierarchical clustering techniques applied to NETtalk suggest that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units, which suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.
Parallel Networks that Learn to Pronounce English Text
TLDR
Hierarchical clustering techniques applied to NETtalk reveal that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units, which suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.
A Dynamical Approach to Temporal Pattern Processing
TLDR
This work proposes an architecture in which time serves as its own representation, and temporal context is encoded in the state of the nodes, and contrasts this with the approach of replicating portions of the architecture to represent time.
Neural computation by concentrating information in time.
  • D. Tank, J. Hopfield
  • Computer Science
    Proceedings of the National Academy of Sciences of the United States of America
  • 1987
TLDR
An analog model neural network that can solve a general problem of recognizing patterns in a time-dependent signal is presented and can be understood from consideration of an energy function that is being minimized as the circuit computes.
The temporal structure of spoken language understanding
Learning the hidden structure of speech.
  • J. Elman, D. Zipser
  • Computer Science
    The Journal of the Acoustical Society of America
  • 1988
TLDR
The results of these studies demonstrate that backpropagation learning can be used with complex, natural data to identify a feature structure that can serve as the basis for both analysis and nontrivial pattern recognition.
Spoken word recognition processes and the gating paradigm
TLDR
Words varying in length (one, two, and three syllables) and in frequency (high and low) were presented to subjects in isolation, in a short context, and in a long context to study more closely the narrowing-in process employed by listeners in the isolation and recognition of words.