Humans and other animals learn to form complex categories without receiving a target output, or teaching signal, with each input pattern. In contrast, most computer algorithms that emulate such performance assume the brain is provided with the correct output at the neuronal level or require grossly unphysiological methods of information propagation. Natural…
The functions of sleep have been an enduring mystery. Tononi and Cirelli (2003) hypothesized that one of the functions of slow-wave sleep is to scale down synapses in the cortex that have strengthened during awake learning. We create a computational model to test the functionality of this idea and examine some of its implications. We show that synaptic…
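The scaling idea described above can be sketched in a few lines; this is a toy illustration, not the paper's actual model, and the matrix size, scaling factor, and survival floor are all assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cortical weight matrix after Hebbian strengthening during
# waking; the size and value range are illustrative assumptions.
W = rng.uniform(0.0, 2.0, size=(8, 8))

def downscale(W, factor=0.5, floor=0.2):
    """Multiplicatively scale every synapse, pruning those that fall
    below a survival floor (a toy version of sleep-dependent scaling)."""
    W_sleep = W * factor
    W_sleep[W_sleep < floor] = 0.0   # weak synapses are eliminated
    return W_sleep

W_sleep = downscale(W)
```

Multiplicative scaling preserves the relative ordering of the surviving synapses, which is the property such a mechanism would need in order to reduce total synaptic weight without erasing what was learned.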
Cortically projecting basal forebrain neurons play a critical role in learning and attention, and their degeneration accompanies age-related impairments in cognition. Despite the impressive anatomical and cell-type complexity of this system, currently available data suggest that basal forebrain neurons lack complexity in their response fields, with activity…
In supervised learning, variable selection is used to find a subset of the available inputs that accurately predict the output. This paper shows that some of the variables that variable selection discards can beneficially be used as extra outputs for inductive transfer. Using discarded input variables as extra outputs forces the model to learn mappings from…
A brain-computer interface (BCI) is a system which allows direct translation of brain states into actions, bypassing the usual muscular pathways. A BCI system works by extracting user brain signals, applying machine learning algorithms to classify the user's brain state, and performing a computer-controlled action. Our goal is to improve brain state…
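The extract-then-classify pipeline described above can be sketched on synthetic data; the band-power feature, nearest-centroid classifier, and amplitude levels are all illustrative assumptions, not the system used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def band_power(signal):
    """Toy feature extraction: mean squared amplitude of a signal window."""
    return np.mean(signal ** 2)

# Synthetic two-class "EEG" trials: class 1 trials have higher amplitude.
train_trials = [(rng.normal(0.0, 1.0 if label == 0 else 3.0, 128), label)
                for label in (0, 1) for _ in range(20)]

# Train a nearest-centroid classifier on the scalar feature.
feats = {0: [], 1: []}
for sig, label in train_trials:
    feats[label].append(band_power(sig))
centroids = {c: np.mean(v) for c, v in feats.items()}

def classify(signal):
    """Map a new signal window to the nearest class centroid."""
    f = band_power(signal)
    return min(centroids, key=lambda c: abs(f - centroids[c]))
```

In a real BCI the feature extractor and classifier would be far richer, but the structure is the same: signals in, features out, a learned decision rule mapping features to a controlled action.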
One of the advantages of supervised learning is that the final error metric is available during training. For classifiers, the algorithm can directly reduce the number of misclassifications on the training set. Unfortunately, when modeling human learning or constructing classifiers for autonomous robots, supervisory labels are often not available or too…
Various forms of the self-organizing map (SOM) have been proposed as models of cortical development [Choe Y., Miikkulainen R., (2004). Contour integration and segmentation with self-organized lateral connections. Biological Cybernetics, 90, 75-88; Kohonen T., (2001). Self-organizing maps (3rd ed.). Springer; Sirosh J., Miikkulainen R., (1997). Topographic…
In supervised learning there is usually a clear distinction between inputs and outputs: inputs are what you will measure, and outputs are what you will predict from those measurements. This paper shows that the distinction between inputs and outputs is not so clear-cut. Some features are more useful as extra outputs than as inputs. By using a feature as an output…
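The feature-as-extra-output idea in the two abstracts above can be sketched with a tiny multi-task network: a shared hidden layer with one head for the real target and one for the repurposed feature, so the feature's error gradient shapes the shared representation without the feature ever being an input. The data, architecture, and learning rate here are illustrative assumptions, not the papers' experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the main target and the "discarded" feature share a common
# underlying signal; noise levels are illustrative.
n = 200
X = rng.normal(size=(n, 2))
signal = X[:, 0] * X[:, 1]
y_main = signal + 0.1 * rng.normal(size=n)
y_extra = signal + 0.5 * rng.normal(size=n)   # feature used as an extra output

# One shared tanh hidden layer with two linear output heads.
H = 16
W1 = rng.normal(scale=0.5, size=(2, H))
w_main = rng.normal(scale=0.5, size=H)
w_extra = rng.normal(scale=0.5, size=H)
lr = 0.05

def main_mse():
    return np.mean((np.tanh(X @ W1) @ w_main - y_main) ** 2)

mse_before = main_mse()
for _ in range(500):
    h = np.tanh(X @ W1)                   # shared representation
    err_m = h @ w_main - y_main           # error on the real task
    err_e = h @ w_extra - y_extra         # error on the extra output
    # The extra head's error also backpropagates into the shared layer.
    g_h = np.outer(err_m, w_main) + np.outer(err_e, w_extra)
    W1 -= lr * X.T @ (g_h * (1.0 - h ** 2)) / n
    w_main -= lr * h.T @ err_m / n
    w_extra -= lr * h.T @ err_e / n
mse_after = main_mse()
```

At test time only the main head is used, so the extra output costs nothing at deployment; its influence is confined to how the shared representation was learned.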