All natural cognitive systems, and in particular our own, gradually forget previously learned information. Consequently, plausible models of human cognition should exhibit a similar pattern of gradually forgetting old information as new information is acquired. Only rarely (see Box 3) does new learning in natural cognitive systems completely disrupt or erase …
Our ability to see a particular object or situation in one context as being "the same as" another object or situation in another context is the essence of analogy-making. It encompasses our ability to explain new concepts in terms of already-familiar ones, to emphasize particular aspects of situations, to generalize, to characterize …
In order to solve the "sensitivity-stability" problem (and its immediate correlate, the problem of sequential learning), it is crucial to develop connectionist architectures that are simultaneously sensitive to, but not excessively disrupted by, new input. French (1992) suggested that to alleviate a particularly severe form of this disruption, …
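To make the idea concrete, here is a minimal sketch, in Python, of activation sharpening in the spirit of French (1992): the most active hidden units are nudged toward 1 and the rest toward 0, so that hidden representations for different patterns overlap less. The sharpening factor and the network-free framing are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sharpen(hidden, k=2, alpha=0.3):
    """Activation sharpening in the spirit of French (1992):
    push the k most active hidden units toward 1 and the rest
    toward 0, yielding semi-distributed representations.
    (k and alpha here are illustrative, not the paper's values.)"""
    sharpened = hidden.copy()
    top = np.argsort(hidden)[-k:]            # indices of the k most active units
    mask = np.zeros_like(hidden, dtype=bool)
    mask[top] = True
    sharpened[mask] += alpha * (1.0 - hidden[mask])   # winners move toward 1
    sharpened[~mask] -= alpha * hidden[~mask]         # losers move toward 0
    return sharpened

# A fully distributed activation vector becomes semi-distributed:
h = np.array([0.55, 0.48, 0.52, 0.60, 0.45])
print(sharpen(h))   # the two strongest units rise, the others shrink
```

In the full scheme, the network is then trained to reproduce these sharpened activations, so that different input patterns come to use largely non-overlapping sets of hidden units and interfere with each other less.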
High-level perception, the process of making sense of complex data at an abstract, conceptual level, is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process …
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as …
No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test. We show that the use of "subcognitive" questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing …
In connectionist networks, newly learned information destroys previously learned information unless the network is continually retrained on the old information. This behavior, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is a direct …
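The effect is easy to reproduce. Below is a minimal, self-contained sketch (an illustrative toy network, not any specific model from this literature) that trains a small backprop network on one set of associations, then on a second set without rehearsal, and measures how badly the first set is disrupted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Net:
    """A tiny one-hidden-layer backprop network (illustrative toy)."""
    def __init__(self, n_in=8, n_hid=16, n_out=8):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train(self, X, Y, epochs=2000, lr=0.5):
        for _ in range(epochs):
            for x, y in zip(X, Y):
                out = self.forward(x)
                d2 = (out - y) * out * (1 - out)          # output-layer delta
                d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)
                self.W2 -= lr * np.outer(self.h, d2)
                self.W1 -= lr * np.outer(x, d1)

    def error(self, X, Y):
        return np.mean([(self.forward(x) - y) ** 2 for x, y in zip(X, Y)])

# Two disjoint sets of random binary associations stand in for
# "old" and "new" information.
A_x = rng.integers(0, 2, (4, 8)).astype(float)
A_y = rng.integers(0, 2, (4, 8)).astype(float)
B_x = rng.integers(0, 2, (4, 8)).astype(float)
B_y = rng.integers(0, 2, (4, 8)).astype(float)

net = Net()
net.train(A_x, A_y)
print("error on A after learning A:", net.error(A_x, A_y))   # typically low

net.train(B_x, B_y)   # sequential learning: no rehearsal of A
print("error on A after learning B:", net.error(A_x, A_y))   # typically much higher
```

The second error figure is what "catastrophic" refers to: the old associations are not gradually degraded but largely overwritten, because both tasks share the same weights.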
Emotion is central to human interactions, and its automatic detection could enhance our experience with technologies. We investigate the linguistic expression of fine-grained emotion in 50- and 200-word samples of real blog texts previously coded by expert and naive raters. Content analysis (LIWC) reveals that angry authors use more affective language and negative …
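At its core, the LIWC approach is lexicon-based word counting: measure what fraction of a text's words fall into predefined categories. Here is a minimal sketch of that style of content analysis; the two tiny word lists are illustrative placeholders, since LIWC's actual category dictionaries are proprietary.

```python
# Toy lexicon-based content analysis in the style of LIWC.
# The word lists below are illustrative stand-ins, not LIWC's dictionaries.
LEXICON = {
    "affect":   {"hate", "love", "awful", "great", "angry", "sad"},
    "negative": {"hate", "awful", "angry", "sad", "terrible"},
}

def category_rates(text):
    """Return, for each category, the fraction of words that match it."""
    words = text.lower().split()
    total = len(words) or 1
    return {cat: sum(w.strip(".,!?") in vocab for w in words) / total
            for cat, vocab in LEXICON.items()}

print(category_rates("I hate this awful, terrible day!"))
# Elevated affect/negative rates are the kind of signal that would
# flag an author as likely angry.
```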
Being able to automatically perceive a variety of emotions from text alone has potentially important applications in CMC and HCI, ranging from identifying mood in online posts to enabling dynamically adaptive interfaces. However, this ability has not been demonstrated in human raters or computational systems. Here we examine the ability of naive raters of …
Catastrophic forgetting occurs when connectionist networks learn new information, and by so doing, forget all previously learned information. This workshop focused primarily on the causes of catastrophic interference, the techniques that have been developed to reduce it, the effect of these techniques on the networks' ability to generalize, and …
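One well-known family of techniques from this literature is rehearsal and pseudo-rehearsal (Robins, 1995): interleave the new items with old items, or with "pseudo-items" generated by probing the network itself, so that the old mapping is preserved while the new one is learned. Below is a minimal sketch using a linear toy model trained by gradient descent; this is an illustrative simplification of the idea, not code from the workshop.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(W, X, Y, epochs=500, lr=0.1):
    """Gradient descent on mean squared error for a linear map W."""
    for _ in range(epochs):
        W -= lr * X.T @ (X @ W - Y) / len(X)
    return W

n = 8
A_x, A_y = rng.normal(size=(4, n)), rng.normal(size=(4, n))   # "old" task
B_x, B_y = rng.normal(size=(4, n)), rng.normal(size=(4, n))   # "new" task

W = train(np.zeros((n, n)), A_x, A_y)
err_before = np.mean((A_x @ W - A_y) ** 2)

# Pseudo-rehearsal: probe the trained network with random inputs and
# freeze its responses as pseudo-items that encode the old mapping.
pseudo_x = rng.normal(size=(20, n))
pseudo_y = pseudo_x @ W

# Sequential learning of B alone (no rehearsal) vs. B interleaved
# with the pseudo-items.
W_naive = train(W.copy(), B_x, B_y)
W_pseudo = train(W.copy(), np.vstack([B_x, pseudo_x]),
                           np.vstack([B_y, pseudo_y]))

print(f"error on A after A:             {err_before:.4f}")
print(f"error on A after B, no rehearsal: {np.mean((A_x @ W_naive - A_y) ** 2):.4f}")
print(f"error on A after B, pseudo-rehearsal: {np.mean((A_x @ W_pseudo - A_y) ** 2):.4f}")
# The pseudo-items typically keep the error on A far lower than
# sequential training alone, at the cost of a compromise fit to B.
```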