
- A Utsugi
- 1997

A topology-selection method for self-organizing maps (SOMs) based on empirical Bayesian inference is presented. This method is a natural extension of the hyperparameter-selection method presented earlier, in which the SOM algorithm is regarded as an estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior on the centroid parameters…

The self-organizing map (SOM) algorithm for finite data is derived as an approximate MAP estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior, which is equivalent to a generalized deformable model (GDM). For this model, objective criteria for selecting hyperparameters are obtained on the basis of empirical Bayesian estimation…
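The view above can be illustrated with a short sketch. The batch SOM update below is the approximate MAP step for this model: each data point is assigned to its best-matching unit, and every centroid is then moved to a neighborhood-weighted mean, the neighborhood kernel playing the role of the Gaussian smoothing prior. This is a minimal illustrative implementation, not the paper's code; the function name and parameters are chosen for the example.

```python
import numpy as np

def batch_som(X, n_units=10, sigma=1.5, n_iter=20, seed=0):
    """One-dimensional batch SOM. Each iteration assigns points to the
    best-matching unit (BMU), then updates every centroid as a
    neighborhood-weighted mean over the unit grid -- the approximate
    MAP step for a Gaussian mixture with a smoothing prior coupling
    neighboring centroids."""
    rng = np.random.default_rng(seed)
    m = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    grid = np.arange(n_units)
    for _ in range(n_iter):
        # squared distances (n_points, n_units) and BMU per point
        d = ((X[:, None, :] - m[None, :, :]) ** 2).sum(-1)
        bmu = d.argmin(axis=1)
        # Gaussian neighborhood weights between units and each point's BMU
        h = np.exp(-0.5 * ((grid[:, None] - bmu[None, :]) / sigma) ** 2)
        # smoothed centroid update: weighted mean of all data per unit
        m = (h @ X) / h.sum(axis=1, keepdims=True)
    return m
```

Because every centroid is a convex combination of the data, the map stays inside the data's range while the neighborhood kernel keeps adjacent centroids close, which is exactly the smoothing effect the prior encodes.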

For Bayesian inference on the mixture of factor analyzers, natural conjugate priors on the parameters are introduced, and then a Gibbs sampler that generates parameter samples following the posterior is constructed. In addition, a deterministic estimation algorithm is derived by taking modes instead of samples from the conditional posteriors used in the…

In the statistical approach for self-organizing maps (SOMs), learning is regarded as an estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior on the centroid parameters. The values of the hyperparameters and the topological structure are selected on the basis of a statistical principle. However, since the component selection…

Generative topographic mapping (GTM) is a statistical model to extract a hidden smooth manifold from data, like the self-organizing map (SOM). Although a deterministic search algorithm for the hyperparameters regulating the smoothness of the manifold has been proposed previously, it is based on approximations that are valid only when data are abundant. Thus, it…

In this paper, the ensemble of independent factor analyzers (EIFA) is proposed. This new statistical model assumes that each data point is generated by the sum of outputs of independently activated factor analyzers. A maximum likelihood (ML) estimation algorithm for the parameters is derived using a Monte Carlo EM algorithm with a Gibbs sampler. The EIFA…
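The generative assumption stated in the abstract can be sketched directly: each of several factor analyzers is independently switched on, and an active analyzer contributes a linear map of Gaussian latent factors to the observation. The sampler below is an illustrative sketch of that generative process only (not the paper's ML estimation algorithm); the function name, activation probability, and noise level are assumptions for the example.

```python
import numpy as np

def sample_eifa(n, lambdas, p_active=0.3, noise=0.1, seed=0):
    """Draw n points from an EIFA-style generative model: each factor
    analyzer i, with loading matrix lambdas[i] of shape (d, k_i), is
    independently activated with probability p_active; an active
    analyzer contributes lambdas[i] @ z_i with z_i ~ N(0, I).
    Observations are the sum of contributions plus isotropic noise."""
    rng = np.random.default_rng(seed)
    d = lambdas[0].shape[0]
    X = rng.normal(0.0, noise, (n, d))        # additive observation noise
    for L in lambdas:
        k = L.shape[1]
        s = rng.random(n) < p_active          # per-point activation flags
        z = rng.standard_normal((n, k))       # latent factors
        X += s[:, None] * (z @ L.T)           # active analyzers contribute
    return X
```

Sampling from the model like this is also the building block a Gibbs-based estimation scheme would condition on, one analyzer's activations and factors at a time.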

We proposed to cluster a set of magnetoencephalogram (MEG) records using a mixture of factor analyzers to remove outlying records contaminated with artifacts. We showed the effectiveness of the proposed clustering approach by applying it to visual and auditory evoked MEG data.
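The outlier-removal idea can be demonstrated with a much simpler stand-in for the mixture of factor analyzers: a two-component 1-D Gaussian mixture fit by EM on a scalar per-record feature (e.g. peak amplitude), where the component with the larger mean collects artifact-contaminated records. This is a simplified sketch of the clustering-for-rejection idea, not the paper's model; the feature choice and function name are assumptions.

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture over per-record
    scores x. Returns means, variances, mixing weights, and the
    responsibility matrix r (n_records, 2); records whose
    responsibility favors the high-mean component can be rejected
    as artifacts."""
    mu = np.array([x.min(), x.max()], dtype=float)  # well-separated init
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per record
        logp = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
        r = pi * np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reestimate weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi, r
```

The full mixture-of-factor-analyzers version replaces the scalar feature with the raw multichannel record and each Gaussian with a factor-analysis covariance, but the reject-by-responsibility logic is the same.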

A generalized ICA model allowing overcomplete bases and additive noises in the observables is applied to natural image data. It is well known that such a model produces independent components that resemble simple cells in primary visual cortex or Gabor functions. We adopt a variable-sparsity density on each independent component, given by the mixture of a…