Data Set Used
Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize…
Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and…
Deep neural networks have recently become the gold standard for acoustic modeling in speech recognition systems. The key computational unit of a deep network is a linear projection followed by a point-wise non-linearity, which is typically a logistic function. In this work, we show that we can improve generalization and make training of deep networks faster…
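The "linear projection followed by a point-wise non-linearity" that this abstract names can be sketched in a few lines; NumPy, the function name `dense_logistic`, and the layer shapes are illustrative assumptions, not details from the paper:

```python
import numpy as np

def dense_logistic(x, W, b):
    """One deep-network unit: a linear projection of the input
    followed by a point-wise logistic (sigmoid) non-linearity."""
    z = W @ x + b                     # linear projection
    return 1.0 / (1.0 + np.exp(-z))  # logistic function, applied element-wise

# Hypothetical shapes for illustration: 4 inputs feeding 3 hidden units.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((3, 4))
b = np.zeros(3)
h = dense_logistic(x, W, b)  # three activations, each strictly in (0, 1)
```

Because the logistic function maps every real input into (0, 1), stacking such units keeps activations bounded regardless of the weight magnitudes.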
This paper presents a novel Bayesian approach to the problem of co-channel speech. The problem is formulated as the joint maximization of the a posteriori probability of the word sequence and the target speaker given the observed speech signal. It is shown that the joint probability can be expressed as the product of six terms: a likelihood score from a…
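The joint maximization this abstract describes has the general MAP form below; the symbols ($W$ for the word sequence, $S$ for the target speaker, $X$ for the observed signal) are illustrative choices, and the paper's six-term factorization, truncated in the abstract, is not reproduced here:

$$(\hat{W}, \hat{S}) \;=\; \arg\max_{W,\,S}\; p(W, S \mid X) \;=\; \arg\max_{W,\,S}\; p(X \mid W, S)\, p(W, S),$$

where the second equality follows from Bayes' rule, since $p(X)$ is constant with respect to $W$ and $S$.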
Necessary conditions for asymptotically optimal sliding-block or stationary codes for source coding and rate-constrained simulation of memoryless sources are presented and used to motivate a design technique for trellis-encoded source coding and rate-constrained simulation. The code structure has intuitive similarities to classic random coding arguments as…
I'm a first-year PhD student in EECS working with Pieter Abbeel and Ken Goldberg. My research interests lie in using vision to model the dynamics of the environment, including a robot's own motions. As far as parallel computers go, I'm interested in speeding up robotic perception and planning through parallelization across multiple cores or even leveraging…