Anthony J. Bell

We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). …
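In its standard natural-gradient form, the learning rule this abstract refers to can be written as ΔW ∝ (I + (1 − 2y)uᵀ)W, with u = Wx and a logistic nonlinearity y = g(u). The following is a minimal NumPy sketch of that update, assuming zero-mean (ideally whitened) inputs and stochastic mini-batches; the function name, step size, and batch settings are illustrative, not taken from the paper.

    import numpy as np

    def infomax_ica(X, lr=0.01, n_iter=200, batch=256, seed=0):
        # X: (n_channels, n_samples), zero-mean data.
        rng = np.random.default_rng(seed)
        n, T = X.shape
        W = np.eye(n)                      # initial unmixing matrix
        I = np.eye(n)
        for _ in range(n_iter):
            idx = rng.choice(T, size=batch, replace=False)
            U = W @ X[:, idx]              # current source estimates, shape (n, batch)
            Y = 1.0 / (1.0 + np.exp(-U))   # logistic nonlinearity y = g(u)
            # natural-gradient infomax update: dW proportional to (I + (1 - 2y) u^T) W
            dW = (I + (1.0 - 2.0 * Y) @ U.T / batch) @ W
            W += lr * dW
        return W

With this sketch, the sources would be estimated as infomax_ica(X) @ X, and are recovered only up to the usual permutation and scaling ambiguities.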
It has previously been suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and it has been reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual …
Given an n × 1 random vector X, independent component analysis (ICA) consists of finding a basis of R^n on which the coefficients of X are as independent as possible (in some appropriate sense). The change of basis can be represented by an n × n matrix B and the new coefficients given by the entries of vector Y = BX. When the observation vector X is modeled …
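The model behind this description can be sketched in a standard form (not quoted from the paper): if the observations are a linear mixture of independent sources S through a mixing matrix A, ICA seeks a B such that the entries of Y = BX are independent,

\[
X = A S, \qquad Y = B X = (B A) S ,
\]

and separation succeeds, up to the usual ambiguities, when

\[
B A = P D ,
\]

with P a permutation matrix and D an invertible diagonal scaling, since independence determines the new basis only up to the ordering and scale of the components.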
Because of the distance between the skull and brain and their different resistivities, electroencephalographic (EEG) data collected from any point on the human scalp includes activity generated within a large brain area. This spatial smearing of EEG data by volume conduction does not involve significant time delays, however, suggesting that the Independent …
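Because volume conduction mixes sources essentially instantaneously and linearly, each scalp channel can be treated as a weighted sum of underlying source activations, which is exactly the ICA model above. The sketch below is illustrative only: it uses scikit-learn's FastICA as a readily available stand-in for the infomax algorithm used in this line of work, and the data, shapes, and rejected component indices are placeholders.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Hypothetical EEG array: (n_samples, n_channels), already filtered and zero-meaned.
    eeg = np.random.randn(10000, 32)

    ica = FastICA(n_components=32, random_state=0)
    sources = ica.fit_transform(eeg)      # component activations, (n_samples, n_components)
    scalp_maps = ica.mixing_              # one column per component, (n_channels, n_components)

    # Reject components judged (from their scalp maps and time courses) to be
    # eye-blink or muscle artifacts, then project the rest back to channel space.
    bad = [0, 3]                          # illustrative indices only
    sources[:, bad] = 0.0
    cleaned_eeg = ica.inverse_transform(sources)

In practice the rejected components are chosen by inspecting their scalp maps and time courses rather than by fixed indices as in this toy example.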
In 1955, McGill published a multivariate generalisation of Shannon’s mutual information. Algorithms such as Independent Component Analysis use a different generalisation, the redundancy, or multi-information [13]. McGill’s concept expresses the information shared by all of K random variables, while the multi-information expresses the information shared by …
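One common way to write the two generalisations being contrasted here (standard definitions, not quoted from the paper) is

\[
I(X_1, \ldots, X_K) \;=\; \sum_{i=1}^{K} H(X_i) \;-\; H(X_1, \ldots, X_K)
\]

for the multi-information (redundancy), which ICA algorithms minimise, and McGill’s co-information

\[
I(X_1; \ldots; X_K) \;=\; -\sum_{T \subseteq \{1,\ldots,K\}} (-1)^{|T|}\, H(X_T), \qquad H(\varnothing) = 0 ,
\]

an alternating sum of joint entropies over all subsets of the variables. For K = 2 both expressions reduce to the ordinary mutual information I(X_1; X_2) = H(X_1) + H(X_2) − H(X_1, X_2).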
The analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings is important both for basic brain research and for medical diagnosis and treatment. Independent component analysis (ICA) is an effective method for removing artifacts and separating sources of the brain signals from these recordings. A similar approach is proving …
Keywords: Blind source separation, ICA, Entropy, Information maximization, Maximum likelihood estimation. Lee was supported by the Office of Naval Research. Girolami was supported by a grant from NCR Financial Systems (Ltd), Knowledge Laboratory, Advanced Technology Development Division, Dundee, Scotland. Bell and Sejnowski were supported by the …
We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and Amari, Cichocki and Yang, to give Natural Gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deconvolving mixed signals. While they work well on simulated …
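A feedback (IIR) separating network of the kind referred to here is often written as follows (one standard parameterisation; the exact sign and delay conventions vary between Torkkola’s formulation and that of Amari, Cichocki and Yang):

\[
u_i(t) \;=\; x_i(t) \;-\; \sum_{j \neq i} \sum_{k=0}^{L} w_{ij}(k)\, u_j(t - k) ,
\]

where the cross-channel filter taps w_{ij}(k) (and, where delays are modelled explicitly, the delays themselves) are adapted by the natural-gradient information-maximisation rule so that the outputs u_i(t) become as independent as possible, thereby separating and deconvolving the mixtures at the same time.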