
- Noboru Murata, Shiro Ikeda, Andreas Ziehe
- Neurocomputing
- 2001

In this paper we introduce a new technique for blind source separation of speech signals. We focus on the temporal structure of the signals in contrast to most other major approaches to this problem. The idea is to apply the decorrelation method proposed by Molgedey and Schuster in the time-frequency domain. We show some results of experiments with both…
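The decorrelation method referenced above can be sketched in the time domain (the paper's contribution is applying it per frequency bin in the time-frequency domain). Separation reduces to a generalized eigenvalue problem between the zero-lag and time-lagged covariance matrices of the mixtures. A minimal sketch, with all function and variable names our own:

```python
import numpy as np

def molgedey_schuster(x, tau=1):
    """Blind source separation by time-delayed decorrelation.

    x   : (n_channels, n_samples) array of mixed signals.
    tau : time lag used for the lagged covariance.
    Returns the unmixing matrix W and the separated signals W @ x.
    Time-domain sketch only; the paper works per frequency bin.
    """
    x = x - x.mean(axis=1, keepdims=True)
    n = x.shape[1] - tau
    c0 = x[:, :n] @ x[:, :n].T / n            # zero-lag covariance C(0)
    ct = x[:, :n] @ x[:, tau:tau + n].T / n   # lagged covariance C(tau)
    ct = (ct + ct.T) / 2                      # symmetrize the estimate
    # Generalized eigenvalue problem C(tau) w = lambda C(0) w:
    # each eigenvector gives one row of the unmixing matrix.
    _, w = np.linalg.eig(np.linalg.solve(c0, ct))
    W = w.T.real
    return W, W @ x
```

This recovers sources up to scale and permutation whenever the sources have distinct autocorrelations at lag `tau`.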

- Noboru Murata
- 1998

In this paper we examine on-line learning within a statistical framework. First, we study the cases with fixed and annealed learning rates. It can be shown that on-line learning with a 1/t annealed learning rate minimizes the generalization error at the same rate as batch learning in the asymptotic regime, that is, on-line learning can be as effective as batch…
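The 1/t annealing result has a simple special case: when estimating a mean, the on-line update with learning rate 1/t reproduces the batch sample mean exactly, so on-line and batch estimates coincide. A toy sketch (the function name and data are ours, not the paper's):

```python
import numpy as np

def online_mean(stream):
    """On-line estimate of a mean with annealed learning rate 1/t.

    The update w <- w + (1/t)(x_t - w) unrolls to the exact running
    sample mean, illustrating why a 1/t schedule can match the batch
    estimator's statistical efficiency in this special case.
    """
    w = 0.0
    for t, x in enumerate(stream, start=1):
        w += (1.0 / t) * (x - w)   # eta_t = 1/t
    return w

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)
# online estimate agrees with the batch mean up to rounding
print(online_mean(data), data.mean())
```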

- Noboru Murata, Shuji Yoshizawa, Shun-ichi Amari
- IEEE Trans. Neural Networks
- 1994

The problem of model selection, or determination of the number of hidden units, can be approached statistically, by generalizing Akaike's information criterion (AIC) to be applicable to unfaithful (i.e., unrealizable) models with general loss criteria including regularization terms. The relation between the training error and the generalization error is…
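For intuition, classical AIC-style selection on a faithful toy model looks as follows; the criterion studied in the paper generalizes the complexity penalty to unfaithful models and regularized losses. A hedged sketch under Gaussian noise, with all names and data ours:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model selection: choose a polynomial degree by minimizing an
# AIC-style criterion (training fit plus a parameter-count penalty).
n = 200
x = rng.uniform(-1, 1, n)
y = 1 + 2 * x - 1.5 * x ** 2 + 0.2 * rng.normal(size=n)  # true degree 2

def aic(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    k = degree + 1
    # Gaussian log-likelihood up to constants is -n/2 * log(RSS/n);
    # AIC = -2 log L + 2k, dropping constants shared by all models.
    return n * np.log(np.mean(resid ** 2)) + 2 * k

best = min(range(1, 8), key=aic)
print("selected degree:", best)
```

Underfitting (degree 1) is rejected decisively because the residual term dominates; overfitting is discouraged only by the mild 2k penalty, which is exactly the balance the criterion formalizes.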

- Shun-ichi Amari, Noboru Murata, Klaus-Robert Müller, Michael Finke, Howard Hua Yang
- IEEE Trans. Neural Networks
- 1997

A statistical theory for overtraining is proposed. The analysis treats general realizable stochastic neural networks, trained with Kullback-Leibler divergence in the asymptotic case of a large number of training examples. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the…
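Early stopping as analyzed here is, in practice, the familiar validation-monitoring loop: hold out data, track validation error during training, and stop when it no longer improves. A toy sketch under our own setup (the paper's analysis is asymptotic and does not prescribe this exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy linear regression fit by gradient descent, with early stopping
# driven by a held-out validation set.
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(d)
best_w, best_val, patience = w.copy(), np.inf, 0
for step in range(2000):
    grad = Xtr.T @ (Xtr @ w - ytr) / len(ytr)
    w -= 0.05 * grad
    val = np.mean((Xva @ w - yva) ** 2)
    if val < best_val - 1e-6:
        best_val, best_w, patience = val, w.copy(), 0
    else:
        patience += 1
        if patience >= 20:   # no improvement for 20 steps: stop
            break

print(f"stopped at step {step}, best validation MSE {best_val:.3f}")
```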

- Alexander Smola, Chris Burges, +8 authors Charles Stenard
- 1996

Support Vector Learning Machines (SVLM) have become an emerging technique that has proven successful in many applications traditionally dominated by neural networks. This is also the case for Regression Estimation (RE). In particular, we are able to construct spline approximations of given data independently of the number of input dimensions regarding…


- Noboru Murata, Takashi Takenouchi, Takafumi Kanamori, Shinto Eguchi
- Neural Computation
- 2004

We aim at extending AdaBoost to U-Boost, in the paradigm of building a stronger classification machine from a set of weak learning machines. A geometric understanding of the Bregman divergence defined by a generic convex function U leads to the U-Boost method in the framework of information geometry extended to the space of finite measures over a…
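AdaBoost, the U(x) = exp(x) member of this family, can be sketched with decision stumps as the weak learners. Everything below is our own illustrative code, not the paper's U-Boost algorithm:

```python
import numpy as np

def fit_stumps(X, y, n_rounds=10):
    """AdaBoost with axis-aligned decision stumps; labels y in {-1,+1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # example weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # exhaustive search for the minimum weighted-error stump
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)
```

The reweighting step `w *= exp(-alpha * y * pred)` is where the exponential U enters; swapping in another convex U yields other members of the U-Boost family.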

- Noboru Murata, Shiro Ikeda
- 1998

In this article, we propose an on-line algorithm for Blind Source Separation of speech signals recorded in a real environment. This on-line algorithm makes it possible to track the changing environment. The idea is to apply an on-line algorithm in the time-frequency domain. We show some results of experiments.

- Noboru Murata, Shun-ichi Amari
- Signal Processing
- 1999

Learning is a flexible and effective means of extracting the stochastic structure of the environment. It provides an effective method for blind separation and deconvolution in signal processing. Two different types of learning are used, namely batch learning and on-line learning. The batch learning procedure uses all the training examples repeatedly so that…

- Toshinao Akuzawa, Noboru Murata
- ArXiv
- 1999

We construct new algorithms from scratch, which use the fourth-order cumulant of stochastic variables for the cost function. The multiplicative updating rule constructed here is natural given the homogeneous nature of the Lie group and has numerous merits for the rigorous treatment of the dynamics. As one consequence, second-order convergence is shown…
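A fourth-order-cumulant cost in its simplest one-unit form can be sketched as a plain fixed-point iteration on whitened data (not the multiplicative Lie-group update of the paper; all names and the data below are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def whiten(x):
    """Decorrelate and rescale (n_channels, n_samples) data to unit variance."""
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(x))
    return (E / np.sqrt(d)) @ E.T @ x

def kurtosis(y):
    # Fourth-order cumulant of a zero-mean, unit-variance signal.
    return np.mean(y ** 4) - 3.0

def extract_one(z, n_iter=200):
    """One-unit extraction: extremize the fourth-order cumulant of w @ z."""
    w = rng.normal(size=z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ z
        # fixed-point step derived from the gradient of E[(w @ z)^4]
        w = (z * y ** 3).mean(axis=1) - 3 * w
        w /= np.linalg.norm(w)
    return w
```

On a mixture of one super-Gaussian and one Gaussian source, the iteration settles on the direction of the non-Gaussian source, since the Gaussian direction has zero fourth-order cumulant.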