Shigeru Katagiri

The minimum classification error (MCE) framework for discriminative training is a simple and general formalism for directly optimizing recognition accuracy in pattern recognition problems. The framework applies directly to the optimization of hidden Markov models (HMMs) used for speech recognition problems. However, few if any studies have reported results …
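The abstract above refers to the MCE loss without stating it; the following is a minimal sketch, assuming per-class discriminant scores g_j(x) are already available and using the common MCE recipe (a soft misclassification measure passed through a sigmoid). The parameter names `eta` and `gamma` are illustrative, not taken from the paper.

```python
import numpy as np

def mce_loss(scores, label, eta=2.0, gamma=1.0):
    """Smoothed MCE loss for one sample.

    scores: array of class discriminant values g_j(x)
    label:  index of the true class
    """
    g_true = scores[label]
    rival = np.delete(scores, label)
    # Soft misclassification measure: negative true-class score plus a
    # soft-max over competing scores (eta -> inf picks the best rival).
    d = -g_true + (1.0 / eta) * np.log(np.mean(np.exp(eta * rival)))
    # Sigmoid smoothing of the 0-1 loss makes it differentiable, so
    # gradient-descent training of the recognizer parameters applies.
    return 1.0 / (1.0 + np.exp(-gamma * d))
```

Because the loss is differentiable in the scores, it can be pushed through HMM or neural-network parameters by the chain rule, which is the sense in which MCE "directly" optimizes recognition accuracy.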
In previous work we reported high classification rates for Learning Vector Quantization (LVQ) networks trained to classify phoneme tokens shifted in time. It has since been shown that the framework of Minimum Classification Error (MCE) and Generalized Probabilistic Descent (GPD) can treat LVQ as a special case of a general method for gradient descent on a …
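For readers unfamiliar with LVQ, the basic update that MCE/GPD generalizes can be sketched as follows; this is a minimal LVQ1 step assuming Euclidean nearest-prototype classification, with an illustrative learning rate.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1 update: move the winning prototype toward the input x
    if its label matches the true label y, otherwise away from it."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    w = np.argmin(dists)  # index of the nearest (winning) prototype
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return prototypes
```

Viewed through MCE/GPD, this attract/repel rule falls out as the gradient of a particular smoothed misclassification loss, which is what lets LVQ be treated as a special case of general discriminative training.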
Among the many speaker adaptation approaches, Speaker Adaptive Training (SAT) has been successfully applied to a standard Hidden-Markov-Model (HMM) speech recognizer whose states are modeled with Gaussian Mixture Models (GMMs). On the other hand, recent studies on Speaker-Independent (SI) recognizer development have reported that a new type of HMM speech …
  • Book Reviews: I. W. Sandberg, +12 authors, John D. Powell
  • IEEE Transactions on Neural Networks
  • 2004
This book provides an in-depth, comprehensive treatment of artificial learning and adaptive systems of the feedforward neural network type. Chapter topics include an overview and brief history of feedback control, dynamic models, dynamic response, properties of feedback, nonlinear neural networks, and speech recognition. It is a basic reference concerning …