Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize …
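A minimal sketch of the asynchronous, data-parallel pattern such a framework supports: workers pull the current parameters from a central server, compute gradients on their own data shard, and push updates back. The ParameterServer class, the toy least-squares model, and all names below are illustrative assumptions, not the paper's implementation.

import numpy as np

class ParameterServer:
    # Holds the shared model parameters; workers pull the latest copy and
    # push gradients back, which are applied immediately (possibly stale).
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def pull(self):
        return self.w.copy()

    def push(self, grad):
        self.w -= self.lr * grad

def worker_step(server, x_shard, y_shard):
    w = server.pull()                                            # fetch current parameters
    grad = x_shard.T @ (x_shard @ w - y_shard) / len(y_shard)    # least-squares gradient on this shard
    server.push(grad)                                            # send the update back

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.arange(5.0)
y = X @ true_w
server = ParameterServer(dim=5)
for shard in np.array_split(np.arange(1000), 50):                # 50 sequential "worker" steps
    worker_step(server, X[shard], y[shard])
print(np.round(server.w, 2))                                     # approaches [0, 1, 2, 3, 4]

In a real distributed setting the worker steps run concurrently on many machines, which is what makes the applied gradients stale; the sequential loop here only illustrates the pull/compute/push cycle.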
Deep neural networks have recently become the gold standard for acoustic modeling in speech recognition systems. The key computational unit of a deep network is a linear projection followed by a point-wise non-linearity, which is typically a logistic function. In this work, we show that we can improve generalization and make training of deep networks faster …
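The computational unit described above can be written down directly. This is a generic sketch; the layer sizes and variable names are illustrative, not taken from the paper.

import numpy as np

def logistic(z):
    # Point-wise logistic (sigmoid) non-linearity.
    return 1.0 / (1.0 + np.exp(-z))

def dense_layer(x, W, b):
    # Linear projection followed by the point-wise non-linearity;
    # deep networks stack many of these units.
    return logistic(x @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))        # batch of 4 input vectors of dimension 16
W = 0.1 * rng.normal(size=(16, 8))  # projection to 8 hidden units
b = np.zeros(8)
print(dense_layer(x, W, b).shape)   # (4, 8)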
This paper presents a novel Bayesian approach to the problem of co-channel speech. The problem is formulated as the joint maximization of the a posteriori probability of the word sequence and the target speaker given the observed speech signal. It is shown that the joint probability can be expressed as the product of six terms: a likelihood score from a …
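In generic Bayesian terms, the criterion can be sketched as the following MAP decomposition, where W denotes the word sequence, S the target speaker, and X the observed co-channel signal; this notation is illustrative, and the paper's specific six-term factorization is not reproduced here.

(\hat{W}, \hat{S}) = \arg\max_{W,S} \; P(W, S \mid X)
                   = \arg\max_{W,S} \; P(X \mid W, S)\, P(W \mid S)\, P(S)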
Necessary conditions for asymptotically optimal sliding-block or stationary codes for source coding and rate-constrained simulation of memoryless sources are presented and used to motivate a design technique for trellis-encoded source coding and rate-constrained simulation. The code structure has intuitive similarities to classic random coding arguments as …
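As a rough illustration of how a trellis-encoded source coder operates, the sketch below runs a Viterbi search over a small hand-built trellis and picks the bit sequence whose branch reproduction letters minimize squared-error distortion. The trellis, its reproduction values, and all names here are assumptions for illustration, not the design technique developed in the paper.

import numpy as np

def trellis_encode(x, repro):
    # repro[state][bit] = (next_state, reproduction letter)
    n_states = len(repro)
    INF = float("inf")
    cost = [0.0] + [INF] * (n_states - 1)        # encoder starts in state 0
    back = []
    for sample in x:
        new_cost = [INF] * n_states
        choice = [None] * n_states
        for s in range(n_states):
            if cost[s] == INF:
                continue
            for bit in (0, 1):
                ns, r = repro[s][bit]
                c = cost[s] + (sample - r) ** 2
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    choice[ns] = (s, bit, r)
        cost = new_cost
        back.append(choice)
    # Trace the minimum-distortion path back from the best final state.
    s = int(np.argmin(cost))
    total = cost[s]
    bits, recon = [], []
    for choice in reversed(back):
        prev, bit, r = choice[s]
        bits.append(bit)
        recon.append(r)
        s = prev
    return bits[::-1], recon[::-1], total

# Small hand-built 4-state trellis; branch labels are the reproduction letters.
repro = {0: [(0, -1.5), (1, -0.5)],
         1: [(2,  0.5), (3,  1.5)],
         2: [(0, -1.0), (1,  0.0)],
         3: [(2,  0.0), (3,  1.0)]}
x = np.random.default_rng(1).normal(size=20)
bits, recon, distortion = trellis_encode(x, repro)
print(len(bits), round(distortion, 3))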
The in vivo effects of a single dose of levo-praziquantel, 75 mg/kg in PEG 400, on the tegumental surface of adult S. japonicum were compared with the effects of a single dose (150 mg/kg) of the mixed isomer preparation, using scanning and transmission electron microscopy. Worms were recovered from mice at 10 min, 30 min, 1 hr, 4 hr, 12 hr, 24 hr and 48 hr …