In this paper we introduce a new technique for blind source separation of speech signals. In contrast to most other major approaches to this problem, we focus on the temporal structure of the signals. The idea is to apply the decorrelation method proposed by Molgedey and Schuster in the time-frequency domain. We show some results of experiments with both …
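As a concrete illustration of this kind of temporal decorrelation, here is a minimal NumPy sketch of the Molgedey-Schuster method for an instantaneous mixture; the paper's contribution, applying it in the time-frequency domain, is not shown. The function name molgedey_schuster, the lag tau, and the toy signals are my own illustrative choices.

# Separate an instantaneous mixture by jointly diagonalizing the zero-lag
# and a time-lagged covariance matrix of the observations.
import numpy as np

def molgedey_schuster(X, tau=1):
    """X: observations of shape (channels, samples). Returns an unmixing matrix W."""
    X = X - X.mean(axis=1, keepdims=True)
    T = X.shape[1]
    C0 = X @ X.T / T                                   # zero-lag covariance
    Ct = X[:, :-tau] @ X[:, tau:].T / (T - tau)        # lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                             # symmetrize for a stable eigenproblem
    d, E = np.linalg.eigh(C0)
    Q = E @ np.diag(1.0 / np.sqrt(d)) @ E.T            # whitening matrix
    _, U = np.linalg.eigh(Q @ Ct @ Q.T)                # rotation diagonalizing the lagged covariance
    return U.T @ Q                                     # W such that W @ X estimates the sources

# Toy check: two sources with different temporal structure, mixed linearly.
rng = np.random.default_rng(0)
t = np.arange(10000)
S = np.vstack([np.sin(0.02 * t), np.sign(np.sin(0.07 * t))])
A = rng.normal(size=(2, 2))
W = molgedey_schuster(A @ S)
print(np.round(W @ A, 2))   # should be close to a scaled permutation matrix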
The problem of model selection, or determination of the number of hidden units, can be approached statistically by generalizing Akaike's information criterion (AIC) so that it applies to unfaithful (i.e., unrealizable) models with general loss criteria, including regularization terms. The relation between the training error and the generalization error is …
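A generalized information criterion of this kind typically trades the training error against a complexity penalty. The following is a hedged sketch of that form in my own notation (T training examples, per-example loss ℓ, G the gradient covariance, H the Hessian of the expected loss), not a quotation of the paper's result:

% Generalized-AIC form; for a faithful model trained with the log-loss,
% G = H (the Fisher information), tr(G H^{-1}) reduces to the number of
% parameters k, and the classical AIC is recovered.
\[
  \mathrm{crit}(\hat\theta) \;=\; \frac{1}{T}\sum_{t=1}^{T} \ell(x_t;\hat\theta)
  \;+\; \frac{1}{T}\,\operatorname{tr}\!\bigl(G H^{-1}\bigr),
  \qquad
  G = \mathbb{E}\bigl[\nabla_\theta\ell\,\nabla_\theta\ell^{\top}\bigr],\quad
  H = \mathbb{E}\bigl[\nabla_\theta^{2}\ell\bigr].
\]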
A statistical theory of overtraining is proposed. The analysis treats general realizable stochastic neural networks, trained with the Kullback-Leibler divergence, in the asymptotic case of a large number of training examples. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the …
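For concreteness, here is a generic early-stopping loop of the kind analyzed above, sketched in NumPy on toy linear-regression data; it illustrates the procedure only, and the data, learning rate, and patience value are my own assumptions, not the paper's setup.

# Gradient descent on a training set, monitored on a held-out validation set;
# training stops once validation error has not improved for `patience` epochs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.5 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:100], y[:100], X[100:], y[100:]

w = np.zeros(5)
best_w, best_val, wait, patience = w.copy(), np.inf, 0, 20
for epoch in range(1000):
    grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)    # squared-error gradient
    w -= 0.05 * grad
    val = np.mean((X_va @ w - y_va) ** 2)
    if val < best_val:
        best_val, best_w, wait = val, w.copy(), 0
    else:
        wait += 1
        if wait >= patience:                          # early stopping
            break
print(f"stopped at epoch {epoch}, best validation MSE {best_val:.3f}")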
The problem of model selection, or determination of the number of hidden units, is elucidated by a statistical approach, generalizing Akaike's information criterion (AIC) so that it applies to unfaithful (i.e., unrealizable) models with general loss criteria, including regularization terms. The relation between the training error and the generalization …
We aim at extending AdaBoost to U-Boost, within the paradigm of building a stronger classification machine from a set of weak learning machines. A geometric understanding of the Bregman divergence defined by a generic convex function U leads to the U-Boost method, in the framework of information geometry extended to the space of finite measures over a …
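Since AdaBoost is the member of this family obtained by taking U to be the exponential function, a minimal NumPy sketch of AdaBoost with decision stumps may help fix ideas; the function names and the toy data are illustrative, not taken from the paper.

# AdaBoost with one-feature threshold classifiers ("decision stumps") as weak learners.
import numpy as np

def fit_stump(X, y, w):
    """Best weighted threshold classifier; y in {-1, +1}, w are sample weights."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = np.sum(w * (pred != y))
                if err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)        # weight of this weak learner
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)               # exponential (U = exp) weight update
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
print(np.mean(predict(adaboost(X, y), X) == y))      # training accuracy of the ensemble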
Learning is a flexible and effective means of extracting the stochastic structure of the environment. It provides an effective method for blind separation and deconvolution in signal processing. Two different types of learning are used, namely batch learning and on-line learning. The batch learning procedure uses all the training examples repeatedly, so that …
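The batch versus on-line distinction can be made concrete on plain least-squares regression; the following NumPy sketch illustrates the two update rules only, and the toy data and learning rates are my own assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=500)

# Batch learning: every update uses all training examples, and the set is reused repeatedly.
w_batch = np.zeros(3)
for _ in range(200):
    w_batch -= 0.1 * X.T @ (X @ w_batch - y) / len(y)

# On-line learning: each example is used once, as it arrives.
w_online = np.zeros(3)
for x_t, y_t in zip(X, y):
    w_online -= 0.05 * (x_t @ w_online - y_t) * x_t

print(np.round(w_batch, 2), np.round(w_online, 2))   # both approach w_true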
We propose a method of ICA for separating convolutive mixtures of acoustic signals. The acoustic signals recorded in a real environment are not instantaneous but convolutive mixtures, because of delays and reflections. In order to separate these signals, it is effective to transform them into the time-frequency domain. The difficult point in …
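The following is a structural sketch of the frequency-domain approach: STFT each channel, then separate every frequency bin with an instantaneous method (here a simple lagged decorrelation). The permutation and scaling alignment across bins, which is where much of the difficulty lies, is omitted, and all names, window sizes and the toy mixture are illustrative.

# Per-bin separation of a two-channel recording in the time-frequency domain.
import numpy as np

def stft(x, win=256, hop=128):
    frames = np.lib.stride_tricks.sliding_window_view(x, win)[::hop]
    return np.fft.rfft(frames * np.hanning(win), axis=1)                # (frames, bins)

def separate_bin(Xf, tau=1):
    """Xf: complex spectrogram slice of shape (channels, frames) for one bin."""
    Xf = Xf - Xf.mean(axis=1, keepdims=True)
    C0 = Xf @ Xf.conj().T / Xf.shape[1]
    Ct = Xf[:, :-tau] @ Xf[:, tau:].conj().T / (Xf.shape[1] - tau)
    Ct = 0.5 * (Ct + Ct.conj().T)
    d, E = np.linalg.eigh(C0)
    Q = E @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ E.conj().T   # whitening
    _, U = np.linalg.eigh(Q @ Ct @ Q.conj().T)
    return U.conj().T @ Q @ Xf                                          # separated components in this bin

# Toy driver: an instantaneous two-channel mixture, separated bin by bin.
t = np.arange(8000)
x1 = np.sin(0.05 * t) + 0.3 * np.sign(np.sin(0.013 * t))
x2 = 0.4 * np.sin(0.05 * t) + np.sign(np.sin(0.013 * t))
X1, X2 = stft(x1), stft(x2)
Y = [separate_bin(np.vstack([X1[:, k], X2[:, k]])) for k in range(X1.shape[1])]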
Support Vector Learning Machines (SVLM) have become an emerging technique that has proven successful in many applications traditionally dominated by neural networks. This is also the case for Regression Estimation (RE). In particular, we are able to construct spline approximations of given data independently of the number of input dimensions regarding …
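As a hedged illustration of regression estimation with a spline kernel, here is a one-dimensional sketch using scikit-learn's SVR with a callable kernel. The first-order spline kernel formula below is my assumption about the intended construction (a textbook form for inputs in [0, 1]; the multi-dimensional case would use a product of such kernels), not taken from this paper.

# epsilon-SVR with a first-order spline kernel supplied as a callable.
import numpy as np
from sklearn.svm import SVR

def spline_kernel(X, Z):
    """Gram matrix of the first-order spline kernel for 1-D inputs in [0, 1]."""
    x = X[:, 0][:, None]
    z = Z[:, 0][None, :]
    m = np.minimum(x, z)
    return 1 + x * z + x * z * m - (x + z) / 2 * m**2 + m**3 / 3

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, size=(80, 1)), axis=0)
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=80)

model = SVR(kernel=spline_kernel, C=10.0, epsilon=0.05).fit(X, y)
print(np.round(model.predict(X[:5]), 2))   # fitted values near the start of the curve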