We present a framework of quasi-Bayes (QB) learning of the parameters of the continuous density hidden Markov model (CDHMM) with Gaussian mixture state observation densities. The QB formulation is based on the theory of recursive Bayesian inference. The QB algorithm is designed to incrementally update the hyperparameters of the approximate posterior …
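The recursive Bayesian inference underlying the QB formulation can be illustrated on the simplest conjugate case: a one-dimensional Gaussian mean with known variance, where the posterior hyperparameters are refreshed after every observation rather than in batch. This is only a hedged sketch of the recursion idea, not the CDHMM mixture algorithm from the abstract; the function name and toy values are invented for illustration.

```python
# Illustrative recursive (quasi-Bayes-style) hyperparameter update for the
# mean of a 1-D Gaussian with known variance. The conjugate Normal prior
# has mean mu0 and pseudo-count strength tau; each new observation is
# folded into the prior, so learning is incremental.

def qb_update(mu0, tau, x):
    """One recursive update: absorb observation x into the prior.

    mu0 : current prior mean (hyperparameter)
    tau : current prior strength in pseudo-counts (hyperparameter)
    x   : new observation
    Returns the updated (mu0, tau).
    """
    mu0_new = (tau * mu0 + x) / (tau + 1.0)
    return mu0_new, tau + 1.0

# Feed observations one at a time; the posterior after each step becomes
# the prior for the next, which is the essence of recursive Bayes.
mu0, tau = 0.0, 1.0
for x in [2.0, 2.2, 1.8, 2.1]:
    mu0, tau = qb_update(mu0, tau, x)
```

After the four updates the estimate equals the batch posterior mean, showing the recursion loses nothing relative to batch inference in the conjugate case.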
Recent advances in automatic speech recognition are accomplished by designing a plug-in maximum a posteriori decision rule such that the forms of the acoustic and language model distributions are specified and the parameters of the assumed distributions are estimated from a collection of speech and language training corpora. Maximum-likelihood point …
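The plug-in MAP decision rule described above can be sketched in miniature: the acoustic likelihood p(x | w) and language-model prior P(w) are assumed to have known forms, their parameters are point-estimated from training data, and recognition picks the hypothesis maximizing their product (in the log domain). The scores below are invented toy values, not from any real corpus or model.

```python
import math

# Toy plug-in MAP decoder: log p(x | w) comes from a point-estimated
# acoustic model, log P(w) from a point-estimated language model, and the
# decision rule is argmax over their sum.

log_acoustic = {"yes": -2.0, "no": -2.5}              # log p(x | w), toy values
log_prior = {"yes": math.log(0.4), "no": math.log(0.6)}  # log P(w), toy values

def plug_in_map_decode(words):
    return max(words, key=lambda w: log_acoustic[w] + log_prior[w])

best = plug_in_map_decode(["yes", "no"])
```

Here the acoustic score for "yes" outweighs the larger prior of "no", so the rule returns "yes"; the point is that both knowledge sources enter the same argmax once their parameters are plugged in.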
This paper presents a study of using 8-directional features for online handwritten Chinese character recognition. Given an online handwritten character sample, a series of processing steps, including linear size normalization, adding imaginary strokes, nonlinear shape normalization, equidistance resampling, and smoothing, are performed to derive a 64×64 …
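The core of 8-directional coding can be sketched as follows: each consecutive pair of stroke points gives a local direction, which is quantized into one of 8 bins (E, NE, N, ..., SE). The histogram below is a crude, hedged stand-in for the spatially pooled directional feature maps the paper derives; the function names are invented and the normalization pipeline from the abstract is omitted.

```python
import math

# Quantize a segment direction into one of 8 bins: bin 0 = east,
# bin 1 = northeast, ..., bin 7 = southeast. The half-bin offset centers
# each bin on its compass direction.

def direction_bin(dx, dy, n_bins=8):
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int((angle + math.pi / n_bins) // (2 * math.pi / n_bins)) % n_bins

def eight_direction_histogram(points):
    """Count stroke segments per direction bin over a point sequence."""
    hist = [0] * 8
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        hist[direction_bin(x1 - x0, y1 - y0)] += 1
    return hist

# A horizontal stroke sampled left to right: all 3 segments fall in bin 0.
hist = eight_direction_histogram([(0, 0), (1, 0), (2, 0), (3, 0)])
```

In the actual feature, such directional responses would be accumulated per pixel of the normalized 64×64 image and then spatially blurred and subsampled, rather than pooled into a single global histogram.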
We present a new scalable approach to using deep neural network (DNN) derived features in Gaussian mixture density hidden Markov model (GMM-HMM) based acoustic modeling for large vocabulary continuous speech recognition (LVCSR). The DNN-based feature extractor is trained from a subset of training data to mitigate the scalability issue of DNN training, while …
We introduce a new Bayesian predictive classification (BPC) approach to robust speech recognition and apply the BPC framework to Gaussian mixture continuous density hidden Markov model based speech recognition. We propose and focus on one of the approximate BPC approaches, called quasi-Bayesian predictive classification (QBPC). In comparison with the standard …
In a traditional HMM compensation approach to robust speech recognition that uses a Vector Taylor Series (VTS) approximation of an explicit model of environmental distortions, the set of generic HMMs is typically trained from "clean" speech only. In this paper, we present a maximum likelihood approach to training generic HMMs from both "clean" and …
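The VTS compensation idea can be sketched per dimension in the log-spectral domain, ignoring the cepstral transform and channel term for clarity. With clean-speech mean mu_x and additive-noise mean mu_n, a first-order expansion of the mismatch function at (mu_x, mu_n) gives the compensated noisy-speech mean mu_y ≈ mu_x + log(1 + exp(mu_n - mu_x)). This is a hedged, simplified illustration of the generic VTS mean update, not the training procedure proposed in the paper.

```python
import math

# First-order VTS mean compensation in the log-spectral domain
# (scalar, no channel, no cepstral rotation):
#   mu_y ≈ mu_x + log(1 + exp(mu_n - mu_x))

def vts_compensated_mean(mu_x, mu_n):
    # log1p(exp(.)) is the numerically safer form of log(1 + exp(.))
    return mu_x + math.log1p(math.exp(mu_n - mu_x))

# Noise far below the speech level barely shifts the mean; noise at the
# same level as the speech shifts it by exactly log 2.
shift_quiet = vts_compensated_mean(10.0, 0.0) - 10.0
shift_equal = vts_compensated_mean(10.0, 10.0) - 10.0
```

The two shifts show the qualitative behavior that motivates compensating "clean" models before decoding noisy speech: the correction vanishes at high SNR and grows as noise approaches the speech energy.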
In this paper, a theoretical framework for Bayesian adaptive learning of discrete HMMs and semi-continuous HMMs with Gaussian mixture state observation densities is presented. Corresponding to the well-known Baum-Welch and segmental k-means algorithms, respectively, for HMM training, formulations of MAP (maximum a posteriori) and segmental MAP estimation of HMM …
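The simplest instance of the MAP estimation formulated above is a Gaussian mean with a conjugate Normal prior: with prior mean mu0, prior strength tau (in pseudo-counts), and N samples of mean x̄, the MAP estimate interpolates between prior and data, mu_MAP = (tau·mu0 + N·x̄) / (tau + N). The sketch below illustrates only this scalar case under those assumptions; the function name and numbers are invented for illustration.

```python
# MAP estimate of a Gaussian mean under a conjugate Normal prior:
# the prior acts like tau extra pseudo-observations located at mu0,
# which is what makes MAP estimation attractive for adaptation from
# small amounts of data.

def map_mean(mu0, tau, samples):
    n = len(samples)
    xbar = sum(samples) / n
    return (tau * mu0 + n * xbar) / (tau + n)

# Prior mean 0.0 with strength 2; four samples at 1.0 pull the estimate
# two-thirds of the way toward the data mean.
mu = map_mean(mu0=0.0, tau=2.0, samples=[1.0, 1.0, 1.0, 1.0])
```

As tau → 0 this recovers the maximum-likelihood (Baum-Welch-style) point estimate, and as tau → ∞ it stays at the prior, which is the trade-off MAP adaptation exploits.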