Publications
Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
TLDR
It is shown that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech, two vastly different languages, and is competitive with the transcription of human workers when benchmarked on standard datasets.
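To make the end-to-end approach concrete, here is a minimal sketch of a recurrent network trained with CTC on spectrogram frames. It is illustrative only: the class name SpeechRNN and all dimensions are invented for this sketch, and the actual Deep Speech 2 system uses deeper convolutional and recurrent stacks, batch normalization, and a large-scale training pipeline not shown here.

```python
# Minimal sketch of an end-to-end CTC speech model (illustrative only;
# the real system is deeper and trained on far more data).
import torch
import torch.nn as nn

class SpeechRNN(nn.Module):          # hypothetical name, not from the paper
    def __init__(self, n_mels=80, hidden=256, n_chars=29):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, num_layers=3,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_chars)   # characters + CTC blank

    def forward(self, x):            # x: (batch, time, n_mels) spectrogram frames
        h, _ = self.rnn(x)
        return self.out(h).log_softmax(dim=-1)      # per-frame character log-probs

# Training step: align frame-level outputs to character transcripts with CTC.
model = SpeechRNN()
ctc = nn.CTCLoss(blank=0)
x = torch.randn(4, 200, 80)                          # 4 utterances, 200 frames each
y = torch.randint(1, 29, (4, 30))                    # character targets
logp = model(x).transpose(0, 1)                      # CTCLoss expects (time, batch, classes)
loss = ctc(logp, y,
           input_lengths=torch.full((4,), 200),
           target_lengths=torch.full((4,), 30))
loss.backward()
```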
Deep Speech: Scaling up end-to-end speech recognition
TLDR
Deep Speech, a state-of-the-art speech recognition system developed using end-to-end deep learning, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0% error on the full test set.
Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network
TLDR
It is demonstrated that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with high diagnostic performance similar to that of cardiologists.
Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks
We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor.
First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs
TLDR
This paper demonstrates that a straightforward recurrent neural network architecture can achieve a high level of accuracy and proposes and evaluates a modified prefix-search decoding algorithm that enables first-pass speech recognition with a language model, completely unaided by the cumbersome infrastructure of HMM-based systems.
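As a rough illustration of the prefix-search idea, the sketch below implements a simplified CTC prefix beam search over per-frame character log-probabilities. It omits the language-model and word-insertion scoring used in the paper (a comment marks where a prefix is extended and an LM score would apply), and the function name, beam size, and vocabulary size are placeholders.

```python
import collections, math
import numpy as np

NEG_INF = -float("inf")

def logsumexp(*xs):
    m = max(xs)
    if m == NEG_INF:
        return NEG_INF
    return m + math.log(sum(math.exp(x - m) for x in xs))

def ctc_prefix_beam_search(log_probs, beam_size=25, blank=0):
    """log_probs: (T, V) per-frame log-probabilities from a CTC-trained network.
    Returns the most probable label prefix as a tuple of label ids.
    Language-model rescoring, as in the paper, would be applied where a prefix
    is extended by a new character; it is omitted in this sketch."""
    T, V = log_probs.shape
    # Each prefix keeps two scores: log-prob of ending in blank / in a non-blank.
    beams = {(): (0.0, NEG_INF)}
    for t in range(T):
        next_beams = collections.defaultdict(lambda: (NEG_INF, NEG_INF))
        for prefix, (p_b, p_nb) in beams.items():
            for c in range(V):
                p = log_probs[t, c]
                if c == blank:                          # blank: prefix unchanged
                    nb_b, nb_nb = next_beams[prefix]
                    next_beams[prefix] = (logsumexp(nb_b, p_b + p, p_nb + p), nb_nb)
                    continue
                new_prefix = prefix + (c,)              # LM score would enter here
                nb_b, nb_nb = next_beams[new_prefix]
                if prefix and c == prefix[-1]:
                    # Repeated label: only a blank-separated path extends the prefix.
                    next_beams[new_prefix] = (nb_b, logsumexp(nb_nb, p_b + p))
                    # The non-blank path collapses onto the unchanged prefix.
                    ob_b, ob_nb = next_beams[prefix]
                    next_beams[prefix] = (ob_b, logsumexp(ob_nb, p_nb + p))
                else:
                    next_beams[new_prefix] = (nb_b, logsumexp(nb_nb, p_b + p, p_nb + p))
        # Prune to the best `beam_size` prefixes by total probability.
        beams = dict(sorted(next_beams.items(),
                            key=lambda kv: logsumexp(*kv[1]),
                            reverse=True)[:beam_size])
    best = max(beams.items(), key=lambda kv: logsumexp(*kv[1]))
    return best[0]

# Example on random scores; real inputs come from a CTC acoustic model.
decoded = ctc_prefix_beam_search(np.log(np.random.dirichlet(np.ones(5), size=50)))
```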
Persistent RNNs: Stashing Recurrent Weights On-Chip
TLDR
This paper introduces a new technique for mapping Deep Recurrent Neural Networks efficiently onto GPUs that uses persistent computational kernels that exploit the GPU's inverted memory hierarchy to reuse network weights over multiple timesteps.
Self-Training for End-to-End Speech Recognition
TLDR
Self-training is revisited in the context of end-to-end speech recognition, and it is demonstrated that training with pseudo-labels can substantially improve the accuracy of a baseline model.
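A schematic of the pseudo-labeling loop described above, with hypothetical train_fn and transcribe_fn helpers standing in for the training and decoding machinery; the paper's confidence filtering and model ensembling are not shown.

```python
# Schematic self-training loop (illustrative; the paper also uses beam-search
# decoding with a language model, pseudo-label filtering, and ensembles).
def self_train(model, labeled, unlabeled, train_fn, transcribe_fn, rounds=3):
    """`train_fn(model, data)` fits the model; `transcribe_fn(model, audio)`
    returns a hypothesis transcript. Both are hypothetical helpers."""
    train_fn(model, labeled)                             # 1. baseline on labeled data
    for _ in range(rounds):
        pseudo = [(audio, transcribe_fn(model, audio))   # 2. pseudo-label unlabeled audio
                  for audio in unlabeled]
        train_fn(model, labeled + pseudo)                # 3. retrain on the union
    return model
```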
Certified Data Removal from Machine Learning Models
TLDR
This work develops a certified-removal mechanism for linear classifiers and empirically studies learning settings in which this mechanism is practical.
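The removal mechanism can be sketched as a one-step Newton update for an L2-regularized logistic classifier, as below. The function newton_removal and its parameters are illustrative names, not the paper's code; the certified version additionally perturbs the training objective at random and tracks a gradient-residual bound to decide when full retraining is required, both of which are omitted here.

```python
# Sketch of a Newton-step removal update for an L2-regularized logistic
# classifier with labels in {-1, +1} (illustrative, not the certified mechanism).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_removal(w, X, y, remove_idx, lam):
    """Approximately remove training point `remove_idx` from weights `w`
    trained on (X, y) with L2 regularization strength `lam`."""
    x_k, y_k = X[remove_idx], y[remove_idx]
    # Gradient of the removed point's loss at the current solution.
    grad_k = -y_k * x_k * sigmoid(-y_k * (w @ x_k))
    # Hessian of the remaining objective at w (loss curvature + L2 term).
    X_rest = np.delete(X, remove_idx, axis=0)
    s = sigmoid(X_rest @ w)
    H = (X_rest * (s * (1 - s))[:, None]).T @ X_rest + lam * np.eye(len(w))
    # One Newton step moves w toward the minimizer of the objective
    # without the removed point (the full-data gradient is zero at w).
    return w + np.linalg.solve(H, grad_k)
```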
Sequence Modeling with CTC
Building DNN acoustic models for large vocabulary speech recognition
TLDR
An empirical investigation into which aspects of DNN acoustic model design are most important for speech recognition system performance suggests that a relatively simple DNN architecture and optimization technique produces strong results.