Behavioral experiments and a connectionist model were used to explore the use of featural representations in the computation of word meaning. The research focused on the role of correlations among features, and on differences between speeded and untimed tasks with respect to the use of featural information. The results indicate that featural representations …
We measured the timecourse of brightness processing by briefly presenting brightness illusions and then masking them. Brightness induction (brightness contrast) was visible when presented for only 58 ms, was stronger at short presentation times, and its visibility did not depend on spatial frequency. We also found that White's illusion was visible at 82 ms. …
We introduce two new low-level computational models of brightness perception that account for a wide range of brightness illusions, including many variations on White's Effect [Perception, 8, 1979, 413]. Our models extend Blakeslee and McCourt's ODOG model [Vision Research, 39, 1999, 4361], which combines multiscale oriented difference-of-Gaussian filters …
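The building block of ODOG-style models is the difference-of-Gaussians (DoG) filter. The sketch below shows a single isotropic DoG kernel only; it is a hedged illustration of the general idea, not the cited models, which use oriented filters at multiple scales plus a normalization stage. All parameter values are illustrative.

```python
import numpy as np

def gaussian2d(size, sigma):
    # Normalized 2-D Gaussian on a size x size grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def dog_kernel(size, sigma_center, sigma_surround):
    # Center-minus-surround: responds to local luminance contrast,
    # not to absolute luminance.
    return gaussian2d(size, sigma_center) - gaussian2d(size, sigma_surround)

k = dog_kernel(21, sigma_center=1.5, sigma_surround=4.5)
# Because both Gaussians are normalized, the kernel sums to ~0,
# so a uniform field produces no response.
print(abs(k.sum()))
```

Convolving an image with banks of such kernels (oriented, at several spatial scales) and recombining the outputs is the general scheme the cited models elaborate.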
The role of feature correlations in semantic memory is a central issue in conceptual representation. In two versions of the feature verification task, participants were faster to verify that a feature (<is juicy>) is part of a concept (grapefruit) if it is strongly rather than weakly intercorrelated with the other features of that concept. Contrasting …
Humans and other animals learn to form complex categories without receiving a target output, or teaching signal, with each input pattern. In contrast, most computer algorithms that emulate such performance assume the brain is provided with the correct output at the neuronal level or require grossly unphysiological methods of information propagation. Natural …
In supervised learning variable selection is used to find a subset of the available inputs that accurately predict the output. This paper shows that some of the variables that variable selection discards can beneficially be used as extra outputs for inductive transfer. Using discarded input variables as extra outputs forces the model to learn mappings from …
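The idea can be sketched with a toy network: a discarded-but-correlated input is moved to the output side, so gradients from both the main target and the extra output shape the same shared hidden layer. This is a minimal illustration of the mechanism on synthetic data, not the paper's actual architecture or experiments; all variables and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X_full = rng.normal(size=(n, 3))
# True target depends on all three variables.
y = X_full @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

X = X_full[:, :2]                 # inputs kept by variable selection
extra = X_full[:, 2]              # discarded input, reused as an extra output
targets = np.column_stack([y, extra])

# One shared hidden layer, two linear output heads.
H = 8
W1 = rng.normal(scale=0.1, size=(2, H))
W2 = rng.normal(scale=0.1, size=(H, 2))
lr = 0.01

losses = []
for _ in range(500):
    h = np.tanh(X @ W1)
    out = h @ W2
    err = out - targets
    losses.append(float(np.mean(err ** 2)))
    # Backprop: gradients from BOTH heads flow into the shared W1 —
    # this shared representation is the transfer mechanism.
    gW2 = h.T @ err / n
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ gh / n
    W2 -= lr * gW2
    W1 -= lr * gW1

print(losses[0], losses[-1])
```

At prediction time only the main head is read out; the extra head exists solely to regularize the hidden representation during training.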
The functions of sleep have been an enduring mystery. Tononi and Cirelli (2003) hypothesized that one of the functions of slow-wave sleep is to scale down synapses in the cortex that have strengthened during awake learning. We create a computational model to test the functionality of this idea and examine some of its implications. We show that synaptic …
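The core of the synaptic-homeostasis hypothesis is multiplicative downscaling: during sleep, all weights shrink by a common factor, reducing total synaptic load while preserving the relative strengths learned while awake. The snippet below is a hedged sketch of that operation alone, not the authors' model; the factor and floor values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.random(10) + 0.5      # synaptic weights potentiated during waking

def downscale(w, factor=0.8, floor=0.0):
    # Multiplicative scaling with an optional lower bound, so weak
    # synapses are not driven below the floor.
    return np.maximum(w * factor, floor)

w_sleep = downscale(w)
# Relative ordering of synapses is unchanged; total strength is reduced.
print(w.sum(), w_sleep.sum())
```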
Cortically projecting basal forebrain neurons play a critical role in learning and attention, and their degeneration accompanies age-related impairments in cognition. Despite the impressive anatomical and cell-type complexity of this system, currently available data suggest that basal forebrain neurons lack complexity in their response fields, with activity …
Various forms of the self-organizing map (SOM) have been proposed as models of cortical development [Choe Y., Miikkulainen R., (2004). Contour integration and segmentation with self-organized lateral connections. Biological Cybernetics, 90, 75-88; Kohonen T., (2001). Self-organizing maps (3rd ed.). Springer; Sirosh J., Miikkulainen R., (1997). Topographic …
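The SOM variants cited above all build on the basic Kohonen update: find the best-matching unit, then pull it and its grid neighbors toward the input, with learning rate and neighborhood width decaying over time. A minimal generic sketch on toy 2-D data follows; it omits the extensions in the cited papers (e.g. self-organized lateral connections), and all sizes and schedules are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
grid, dim = 8, 2
weights = rng.random((grid, grid, dim))   # 8x8 map of 2-D codebook vectors
data = rng.random((1000, dim))            # toy uniform input distribution

def neighbor_dist(weights):
    # Mean distance between codebook vectors of adjacent map units —
    # small values indicate a topographically ordered map.
    dx = np.linalg.norm(weights[1:, :, :] - weights[:-1, :, :], axis=2).mean()
    dy = np.linalg.norm(weights[:, 1:, :] - weights[:, :-1, :], axis=2).mean()
    return (dx + dy) / 2

nd_before = neighbor_dist(weights)

ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
for t, x in enumerate(data):
    frac = t / len(data)
    lr = 0.5 * (1 - frac)                 # decaying learning rate
    sigma = 3.0 * (1 - frac) + 0.5        # shrinking neighborhood width
    # Best-matching unit (BMU).
    d = np.sum((weights - x) ** 2, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighborhood on the map grid around the BMU.
    nb = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * nb[:, :, None] * (x - weights)

nd_after = neighbor_dist(weights)
print(nd_before, nd_after)
```

After training, adjacent map units hold similar codebook vectors — the topographic organization that makes the SOM attractive as a model of cortical map development.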