Deep neural networks have recently become the gold standard for acoustic modeling in speech recognition systems. The key computational unit of a deep network is a linear projection followed by a point-wise non-linearity, which is typically a logistic function. In this work, we show that we can improve generalization and make training of deep networks faster …
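The computational unit described above, a linear projection followed by a point-wise logistic non-linearity, can be sketched directly. The weights, biases, and input below are illustrative toy values, not taken from the paper:

```python
import math

def logistic(z):
    # Point-wise logistic (sigmoid) non-linearity
    return 1.0 / (1.0 + math.exp(-z))

def dnn_unit(x, W, b):
    # One layer: linear projection W x + b, then the point-wise non-linearity
    return [logistic(sum(w * xi for w, xi in zip(row, x)) + bj)
            for row, bj in zip(W, b)]

# Illustrative toy values: 4-dimensional input, 3 hidden units
x = [0.5, -1.0, 0.25, 2.0]
W = [[0.1, 0.2, -0.3, 0.05],
     [-0.4, 0.0, 0.6, 0.1],
     [0.2, -0.2, 0.2, -0.2]]
b = [0.0, 0.1, -0.1]
h = dnn_unit(x, W, b)
print(len(h))  # 3
```

Stacking several such layers, each feeding the next, gives the deep network the abstract refers to.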
We recently showed that Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform state-of-the-art deep neural networks (DNNs) for large scale acoustic modeling where the models were trained with the cross-entropy (CE) criterion. It has also been shown that sequence discriminative training of DNNs initially trained with the CE criterion …
A vector extension of a necessary condition for asymptotically optimal stationary (sliding-block) source codes is presented. The condition implies the intuitive result that the reproduction process for an IID input must be approximately uncorrelated if the code is approximately optimal, a property previously demonstrated empirically for common examples.
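The "approximately uncorrelated" property can be checked empirically by estimating the lag-1 autocorrelation of the reproduction sequence. The sketch below uses a simple memoryless scalar quantizer purely as a stand-in code; it is illustrative only and is not the paper's construction:

```python
import random
import statistics

def lag1_autocorrelation(seq):
    # Sample lag-1 autocorrelation of a sequence
    mean = statistics.fmean(seq)
    var = sum((s - mean) ** 2 for s in seq)
    cov = sum((a - mean) * (b - mean) for a, b in zip(seq, seq[1:]))
    return cov / var

random.seed(0)
# IID Gaussian source; reproduction via a 0.5-wide uniform quantizer
source = [random.gauss(0.0, 1.0) for _ in range(20000)]
reproduction = [round(s * 2) / 2 for s in source]
r1 = lag1_autocorrelation(reproduction)
print(abs(r1) < 0.05)  # lag-1 correlation is near zero
```

A memoryless quantizer trivially yields an uncorrelated reproduction from an IID input; the interest of the necessary condition is that the same near-zero autocorrelation must emerge even for sliding-block codes with memory, once they approach optimality.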
The in vivo effects of a single dose of levo-praziquantel, 75 mg/kg in PEG 400, on the tegumental surface of adult S. japonicum were compared with the effects of a single dose (150 mg/kg) of the mixed isomer preparation, using scanning and transmission electron microscopy. Worms were recovered from mice at 10 min, 30 min, 1 hr, 4 hr, 12 hr, 24 hr and 48 hr …
This paper presents a novel Bayesian approach to the co-channel speech problem. The problem is formulated as the joint maximization of the a posteriori probability of the word sequence and the target speaker given the observed speech signal. It is shown that the joint probability can be expressed as the product of six terms: a likelihood score from a …
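The joint maximization described above can be written out explicitly. The notation here (O for the observed signal, W for the word sequence, S for the target speaker) is assumed for illustration, and the prior factorization takes W and S to be independent:

```latex
(\hat{W}, \hat{S})
  = \arg\max_{W,\,S} P(W, S \mid O)
  = \arg\max_{W,\,S} p(O \mid W, S)\, P(W)\, P(S)
```

The second equality follows from Bayes' rule, since p(O) is constant over all candidate pairs (W, S) and so does not affect the argmax.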
Necessary conditions for asymptotically optimal sliding-block or stationary codes for source coding and rate-constrained simulation of memoryless sources are presented and used to motivate a design technique for trellis-encoded source coding and rate-constrained simulation. The code structure has intuitive similarities to classic random coding arguments as …