All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Consequently, plausible models of human cognition should exhibit similar patterns of gradual forgetting of old information as new information is acquired. Only rarely (see Box 3) does new learning in natural cognitive systems completely disrupt or erase …
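To see what abrupt (rather than gradual) forgetting looks like in practice, the sketch below trains a small backpropagation network on one set of random associations and then on a second set, without retraining on the first. Everything here (network size, tasks, learning rate) is an arbitrary illustration, not a setup taken from any of the papers above:

```python
import numpy as np

# Illustrative only: a tiny one-hidden-layer backprop network.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out, lr = 16, 16, 8, 0.5
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))

def train(X, Y, epochs=2000):
    global W1, W2
    for _ in range(epochs):
        H = sigmoid(X @ W1)            # hidden activations
        O = sigmoid(H @ W2)            # outputs
        dO = (O - Y) * O * (1 - O)     # output-layer delta
        dH = (dO @ W2.T) * H * (1 - H) # hidden-layer delta
        W2 -= lr * H.T @ dO
        W1 -= lr * X.T @ dH

def error(X, Y):
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2)

# Task A and task B: unrelated random binary input-output associations.
XA = rng.integers(0, 2, (10, n_in)).astype(float)
YA = rng.integers(0, 2, (10, n_out)).astype(float)
XB = rng.integers(0, 2, (10, n_in)).astype(float)
YB = rng.integers(0, 2, (10, n_out)).astype(float)

train(XA, YA)
print("error on A after learning A:", error(XA, YA))
train(XB, YB)                                   # no retraining on A
print("error on A after learning B:", error(XA, YA))  # typically jumps sharply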
In order to solve the "sensitivity-stability" problem (and its immediate correlate, the problem of sequential learning), it is crucial to develop connectionist architectures that are simultaneously sensitive to, but not excessively disrupted by, new input. French (1992) suggested that to alleviate a particularly severe form of this disruption, …
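French's (1992) remedy was a technique he called activation sharpening: after a pattern is processed, the k most active hidden units are nudged toward 1 and the rest toward 0, so that hidden representations become semi-distributed and overlap less from pattern to pattern. The sketch below shows only that sharpening step; k, the sharpening factor, and the example vector are assumed values, and the surrounding network and weight updates are omitted:

```python
import numpy as np

def sharpen(hidden, k=2, alpha=0.2):
    """One activation-sharpening step in the spirit of French (1992):
    push the k most active hidden units toward 1 and all others toward 0.
    k and alpha are illustrative values, not the paper's."""
    target = np.zeros_like(hidden)
    target[np.argsort(hidden)[-k:]] = 1.0      # the k winners
    return hidden + alpha * (target - hidden)  # move part-way toward target

h = np.array([0.7, 0.2, 0.9, 0.4, 0.6])
print(sharpen(h))  # winners (0.9, 0.7) rise; the rest decay toward 0
```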
Individuals of all ages extract structure from the sequences of patterns they encounter in their environment, an ability that is at the very heart of cognition. Exactly what underlies this ability has been the subject of much debate over the years. A novel mechanism, implicit chunk recognition (ICR), is proposed for sequence segmentation and chunk …
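The snippet above stops before describing ICR itself, so the sketch below is only a generic illustration of the broader idea of chunk extraction: repeatedly weld adjacent elements that co-occur frequently into larger units. The threshold and the toy sequence are invented, and this is not the ICR algorithm:

```python
from collections import Counter

def chunk_pass(seq, threshold=3):
    """One pass of naive frequency-based chunking (illustrative only):
    weld the most frequent adjacent pair into a single chunk if it
    occurs at least `threshold` times."""
    pairs = Counter(zip(seq, seq[1:]))
    (a, b), count = pairs.most_common(1)[0]
    if count < threshold:
        return seq, False
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            merged.append(a + b)   # weld the pair into a new chunk
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged, True

seq, changed = list("batubibatudotubatubi"), True
while changed:
    seq, changed = chunk_pass(seq)
print(seq)  # frequent pairs such as 'tu', then 'batu', become chunks
```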
Our ability to see a particular object or situation in one context as being "the same as" another object or situation in another context is the essence of analogy-making. It encompasses our ability to explain new concepts in terms of already-familiar ones, to emphasize particular aspects of situations, to generalize, to characterize …
No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test. We show that the use of "subcognitive" questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing …
High-level perception, the process of making sense of complex data at an abstract, conceptual level, is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process …
In connectionist networks, newly learned information destroys previously learned information unless the network is continually retrained on the old information. This behavior, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is a direct …
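The claim can be made operational: if two inputs activate largely the same hidden units, weight changes driven by one will perturb the other. The snippet below measures overlap as the average pairwise normalized dot product of hidden-activation vectors; this particular measure is an assumption chosen for illustration, not necessarily the paper's:

```python
import numpy as np

def mean_overlap(H):
    """Average pairwise overlap of hidden-activation vectors (rows of H),
    measured as normalized dot products (an illustrative choice)."""
    H = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    G = H @ H.T                            # pairwise similarities
    n = len(H)
    return (G.sum() - n) / (n * (n - 1))   # exclude self-similarity

rng = np.random.default_rng(1)
dense = rng.random((20, 16))               # broadly active codes
sparse = np.zeros((20, 16))
for row in sparse:
    row[rng.choice(16, size=3, replace=False)] = 1.0  # 3 active units each
print(mean_overlap(dense), mean_overlap(sparse))  # dense codes overlap far more
```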
Disentangling bottom-up and top-down processing in adult category learning is notoriously difficult. Studying category learning in infancy provides a simple way of exploring category learning while minimizing the contribution of top-down information. Three- to four-month-old infants presented with cat or dog images will form a perceptual category …
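Later connectionist accounts of this kind of finding (e.g., autoencoder models in the tradition of Mareschal and French) treat familiarization as learning a compressive code for the presented items, with reconstruction error standing in for novelty and hence looking time. The sketch below caricatures that idea; the feature vectors, dimensions, and the use of PCA as a minimal linear autoencoder are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in stimuli: each animal is a 10-dimensional feature vector;
# "cats" and "dogs" are drawn from different hypothetical distributions.
cats = rng.normal(0.3, 0.05, (12, 10))

# Stand-in for the familiarization phase: learn a 3-dimensional compressive
# code for the cat items (via PCA, playing the role of a linear autoencoder).
mu = cats.mean(axis=0)
_, _, Vt = np.linalg.svd(cats - mu, full_matrices=False)
encode = Vt[:3]                                 # 3 principal directions

def novelty(x):
    """Reconstruction error: the model's stand-in for looking time."""
    r = mu + (x - mu) @ encode.T @ encode
    return np.sum((x - r) ** 2)

novel_cat = rng.normal(0.3, 0.05, 10)
novel_dog = rng.normal(0.7, 0.05, 10)
print(novelty(novel_cat))   # small: falls inside the learned category
print(novelty(novel_dog))   # large: outside the category, hence "novel"
```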
It is well known that when a connectionist network is trained on one set of patterns and then attempts to add new patterns to its repertoire, catastrophic interference may result. The use of sparse, orthogonal hidden-layer representations has been shown to reduce catastrophic interference. The author demonstrates that the use of sparse representations may, …
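One simple way to obtain such sparse codes is a k-winner-take-all pass over the hidden layer, which keeps the k strongest activations and zeroes the rest, driving the overlap between two patterns' codes down. In the sketch below, k and the example vectors are arbitrary, and k-WTA is just one sparsification scheme among several used in this literature:

```python
import numpy as np

def k_winner_take_all(hidden, k=2):
    """Keep the k largest hidden activations, zero the rest: one simple
    way to enforce sparse hidden representations (illustrative only)."""
    out = np.zeros_like(hidden)
    idx = np.argsort(hidden)[-k:]
    out[idx] = hidden[idx]
    return out

h1 = np.array([0.9, 0.1, 0.8, 0.3, 0.2, 0.7])
h2 = np.array([0.2, 0.8, 0.1, 0.9, 0.7, 0.3])
s1, s2 = k_winner_take_all(h1), k_winner_take_all(h2)
print(np.dot(h1, h2), np.dot(s1, s2))  # overlap shrinks after sparsification
```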