Terrence J. Sejnowski

We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). …
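The rule this abstract describes is the infomax ICA algorithm. Below is a minimal NumPy sketch using the natural-gradient form of the update with a logistic nonlinearity; the function name, learning rate, iteration count, and toy demo are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def infomax_ica(X, lr=0.01, n_iter=500, seed=0):
    """Minimal sketch of infomax-style ICA with a logistic nonlinearity.

    X: (n_sources, n_samples) zero-mean mixed signals.
    Returns an unmixing matrix W such that W @ X approximates the sources.
    Uses the natural-gradient form  Delta W = lr * (I + (1 - 2y) u^T) W,
    which avoids the matrix inverse in the original stochastic rule.
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        u = W @ X                      # current source estimates
        y = 1.0 / (1.0 + np.exp(-u))   # logistic outputs
        # natural-gradient infomax update, averaged over samples
        W += lr * (np.eye(n) + (1 - 2 * y) @ u.T / T) @ W
    return W

# toy demo: unmix two linearly mixed supergaussian signals
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))        # independent supergaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]]) # unknown mixing matrix
W = infomax_ica(A @ S)
print(W @ A)  # should approach a scaled permutation matrix
```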
A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise …
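As a concrete illustration of the PCA representation described above, here is a minimal "eigenface"-style sketch; the function and variable names are hypothetical.

```python
import numpy as np

def pca_basis_images(faces, n_components):
    """Sketch of a PCA face representation as described above.

    faces: (n_images, n_pixels) array, one flattened face image per row.
    Returns (mean, basis), where the rows of `basis` are orthonormal basis
    images; a face is coded by its coefficients  c = basis @ (face - mean).
    """
    mean = faces.mean(axis=0)
    Xc = faces - mean
    # SVD of the centered data gives the principal components directly
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]          # top eigenvectors of the covariance
    return mean, basis

# reconstruction as a linear combination of basis images:
#   face_hat = mean + coeffs @ basis, with coeffs = basis @ (face - mean)
```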
Current analytical techniques applied to functional magnetic resonance imaging (fMRI) data require a priori knowledge or specific assumptions about the time courses of processes contributing to the measured signals. Here we describe a new method for analyzing fMRI data based on the independent component analysis (ICA) algorithm of Bell and Sejnowski …
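A minimal sketch of spatial ICA on an fMRI data matrix in the spirit of this approach; scikit-learn's FastICA is used here only as a convenient stand-in for the Bell-Sejnowski infomax algorithm of the original work, and all names and shapes are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA  # stand-in for the infomax algorithm

def spatial_ica_fmri(X, n_components):
    """Sketch of spatial ICA on fMRI data.

    X: (n_timepoints, n_voxels) BOLD data matrix.
    Decomposes X ~ mixing @ maps, where each row of `maps` is a spatially
    independent component map and the corresponding column of `mixing`
    is its activation time course.
    """
    ica = FastICA(n_components=n_components, random_state=0)
    # treat voxels as samples, so independence is imposed over space
    maps = ica.fit_transform(X.T).T    # (n_components, n_voxels)
    mixing = ica.mixing_               # (n_timepoints, n_components)
    return mixing, maps
```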
Sleep is characterized by synchronized events in billions of synaptically coupled neurons in thalamocortical systems. The activation of a series of neuromodulatory transmitter systems during awakening blocks low-frequency oscillations, induces fast rhythms, and allows the brain to recover full responsiveness. Analysis of cortical and thalamic networks at …
It has previously been suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and it has been reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual …
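A sketch of the corresponding experiment: sample patches from natural images, whiten them, and run ICA; the learned filters typically come out localized and oriented, consistent with the line and edge selectivities described. The patch size, counts, and the use of FastICA (a stand-in for the original unsupervised learner) are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA  # stand-in unsupervised ICA learner

def learn_edge_filters(images, patch=12, n_patches=20000, n_filters=64, seed=0):
    """Sketch of learning a factorial code of natural image patches.

    images: list of 2-D grayscale arrays. Samples random patches, centers
    them, and runs ICA; each row of components_ is one learned filter.
    """
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - patch)
        c = rng.integers(img.shape[1] - patch)
        patches.append(img[r:r + patch, c:c + patch].ravel())
    X = np.array(patches)
    X -= X.mean(axis=0)
    ica = FastICA(n_components=n_filters, whiten="unit-variance",
                  random_state=seed)
    ica.fit(X)
    # reshape each filter to (patch, patch) to view it as an image
    return ica.components_.reshape(n_filters, patch, patch)
```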
The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a problem in a very short time. One kind of computation for which …
We develop a theoretical framework that shows how mesencephalic dopamine systems could distribute to their targets a signal that represents information about future expectations. In particular, we show how activity in the cerebral cortex can make predictions about future receipt of reward and how fluctuations in the activity levels of neurons in diffuse …
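The prediction signal described here is commonly formalized as a temporal-difference (TD) error. Below is a minimal tabular sketch; the value table, learning rate, and discount factor are illustrative stand-ins for the cortical predictions discussed in the paper.

```python
def td_prediction_error(V, s, s_next, r, gamma=0.98):
    """TD error: received reward plus discounted future prediction,
    minus the current prediction. Positive when reward exceeds
    expectation, negative when it falls short."""
    return r + gamma * V[s_next] - V[s]

def td0_update(V, s, s_next, r, alpha=0.1, gamma=0.98):
    """One TD(0) update of the value table V (a hypothetical tabular
    stand-in for the cortical prediction described above)."""
    delta = td_prediction_error(V, s, s_next, r, gamma)
    V[s] += alpha * delta
    return delta
```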
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability …
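A minimal sketch of one extended-infomax update step, with the diagonal switching matrix chosen from a kurtosis-like statistic so that both sub- and supergaussian sources can be separated; the learning rate and batch form are illustrative choices.

```python
import numpy as np

def extended_infomax_step(W, X, lr=0.005):
    """One step of an extended-infomax-style update (a minimal sketch).

    Applies  Delta W = lr * (I - K tanh(u) u^T - u u^T) W,  where the
    diagonal of K is +1 for estimated supergaussian components and -1
    for subgaussian ones.
    """
    n, T = X.shape
    u = W @ X
    tu = np.tanh(u)
    # switching statistic: sign of a kurtosis-like moment per component
    k = np.sign(np.mean(1 - tu**2, axis=1) * np.mean(u**2, axis=1)
                - np.mean(tu * u, axis=1))
    W += lr * (np.eye(n) - (np.diag(k) @ tu) @ u.T / T
               - (u @ u.T) / T) @ W
    return W

# usage: iterate until W stabilizes, e.g.
#   for _ in range(500): W = extended_infomax_step(W, X)
```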
It has been long debated whether averaged electrical responses recorded from the scalp result from stimulus-evoked brain events or stimulus-induced changes in ongoing brain dynamics. In a human visual selective attention task, we show that nontarget event-related potentials were mainly generated by partial stimulus-induced phase resetting of multiple …
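One common way to quantify such phase resetting is inter-trial phase coherence; a short sketch follows. This is a generic analysis, not necessarily the decomposition used in the paper, and the band-pass filtering step is omitted.

```python
import numpy as np
from scipy.signal import hilbert

def intertrial_coherence(trials):
    """Inter-trial phase coherence of single-trial EEG.

    trials: (n_trials, n_times) array, band-pass filtered in the band
    of interest beforehand. Returns a value per time point that is near
    1 when ongoing phase is reset to a common value across trials and
    near 0 when phase is random, without requiring an added evoked
    response.
    """
    phase = np.angle(hilbert(trials, axis=1))   # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))
```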
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time …
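A minimal sketch of the SFA recipe as stated: nonlinear expansion (a quadratic expansion here), whitening, then principal component analysis of the time derivative, keeping the smallest-variance, i.e. slowest, directions. The expansion choice and tolerance are illustrative.

```python
import numpy as np

def slow_feature_analysis(X, n_features):
    """Minimal SFA sketch following the recipe described above.

    X: (n_times, n_dims) input signal.
    Returns the (n_times - adjustments aside) slowest output signals as
    an (n_times, n_features) array.
    """
    n, d = X.shape
    # quadratic expansion: original dims plus all pairwise products
    iu, ju = np.triu_indices(d)
    Z = np.hstack([X, X[:, iu] * X[:, ju]])
    Z = Z - Z.mean(axis=0)
    # whiten the expanded signal (drop near-degenerate directions)
    evals, evecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    keep = evals > 1e-10
    Zw = (Z @ evecs[:, keep]) / np.sqrt(evals[keep])
    # PCA on the time derivative: eigh sorts eigenvalues ascending, so
    # the first columns are the directions of least derivative variance,
    # i.e. the slowest-varying features
    dvals, dvecs = np.linalg.eigh(np.cov(np.diff(Zw, axis=0), rowvar=False))
    return Zw @ dvecs[:, :n_features]
```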