Publications
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no …
  • Citations: 7,618 · Highly influential citations: 876 · PDF available
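
As a rough illustration of what the abstract above means by expressing a computation once and executing it with little or no change across devices, here is a minimal sketch using the current TensorFlow 2 API (the paper itself describes the original graph-and-session interface; the function and variable names below are illustrative, not from the paper):

    import tensorflow as tf

    @tf.function  # traces the Python function into a portable dataflow graph
    def affine_relu(x, w, b):
        return tf.nn.relu(tf.matmul(x, w) + b)

    x = tf.random.normal([4, 8])
    w = tf.Variable(tf.random.normal([8, 2]))
    b = tf.Variable(tf.zeros([2]))

    # The same computation runs unchanged on CPU or GPU; device placement is a
    # deployment detail rather than part of the model definition.
    device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"
    with tf.device(device):
        y = affine_relu(x, w, b)
    print(y.shape)  # (4, 2)
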
Bidirectional recurrent neural networks
In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input …
  • Citations: 3,424 · Highly influential citations: 536 · PDF available
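
To make the idea concrete, here is a minimal NumPy sketch (illustrative only, not the paper's training procedure) of a bidirectional RNN forward pass: one RNN reads the input left to right, a second reads it right to left, and their hidden states are concatenated, so the output at each step can depend on the entire input sequence rather than only on past inputs:

    import numpy as np

    def rnn_states(xs, W_x, W_h, b):
        """Simple tanh RNN over a list of input vectors; returns all hidden states."""
        h, states = np.zeros(W_h.shape[0]), []
        for x in xs:
            h = np.tanh(W_x @ x + W_h @ h + b)
            states.append(h)
        return states

    rng = np.random.default_rng(0)
    T, d_in, d_h = 5, 3, 4
    xs = [rng.normal(size=d_in) for _ in range(T)]
    make = lambda: (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h))

    forward = rnn_states(xs, *make())                 # states for x_1 .. x_T
    backward = rnn_states(xs[::-1], *make())[::-1]    # states for x_T .. x_1, re-aligned

    # BRNN output at step t: concatenation of the two directions' states.
    outputs = [np.concatenate([f, b]) for f, b in zip(forward, backward)]
    print(len(outputs), outputs[0].shape)             # 5 (8,)
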
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. …
  • Citations: 2,927 · Highly influential citations: 235 · PDF available
Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard …
  • Citations: 848 · Highly influential citations: 107 · PDF available
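
The "simple solution" the abstract refers to is usually summarized as follows: keep the standard NMT architecture, train one shared model on all language pairs, and prepend an artificial token to each source sentence naming the desired target language. A hedged sketch (the token format below is illustrative, not the exact one used):

    def add_target_token(source_sentence: str, target_lang: str) -> str:
        """Prefix the source with a token telling the shared model which language to emit."""
        return f"<2{target_lang}> {source_sentence}"

    # Training data from many language pairs is mixed into one corpus:
    print(add_target_token("How are you?", "es"))   # <2es> How are you?
    print(add_target_token("¿Cómo estás?", "en"))   # <2en> ¿Cómo estás?

    # Zero-shot translation: at test time, request a pair never seen together in
    # training (e.g. Portuguese -> Japanese); the single shared model can often
    # still translate because all languages share one encoder and decoder.
    print(add_target_token("Bom dia", "ja"))        # <2ja> Bom dia
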
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key …
  • Citations: 753 · Highly influential citations: 89 · PDF available
Statistical parametric speech synthesis using deep neural networks
Conventional approaches to statistical parametric speech synthesis typically use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech …
  • Citations: 672 · Highly influential citations: 79 · PDF available
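
As a rough sketch of the alternative the paper examines: instead of decision tree-clustered context-dependent HMMs, a feed-forward deep neural network maps each frame's linguistic context features directly to acoustic (vocoder) features. The layer sizes and feature dimensions below are placeholders, not the paper's configuration:

    import numpy as np

    rng = np.random.default_rng(0)

    def dnn(x, layers):
        """Feed-forward pass: tanh hidden layers, linear output layer."""
        for W, b in layers[:-1]:
            x = np.tanh(W @ x + b)
        W, b = layers[-1]
        return W @ x + b

    d_linguistic, d_hidden, d_acoustic = 300, 256, 40   # context answers -> vocoder parameters
    sizes = [d_linguistic, d_hidden, d_hidden, d_acoustic]
    layers = [(rng.normal(scale=0.01, size=(n_out, n_in)), np.zeros(n_out))
              for n_in, n_out in zip(sizes[:-1], sizes[1:])]

    frame_context = rng.normal(size=d_linguistic)        # linguistic features for one frame
    acoustic = dnn(frame_context, layers)
    print(acoustic.shape)                                # (40,) -> fed to a vocoder
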
Reward Augmented Maximum Likelihood for Neural Structured Prediction
A key problem in structured output prediction is direct optimization of the task reward function that matters for test evaluation. This paper presents a simple and computationally efficient approach …
  • Citations: 131 · Highly influential citations: 25 · PDF available
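
A minimal sketch of the approach under one common reading of it (not the paper's exact recipe): rather than training only on the ground-truth output y*, targets are drawn from an exponentiated payoff distribution q(y | y*) proportional to exp(r(y, y*) / tau), so outputs with high task reward also receive probability mass in the maximum-likelihood objective. The toy reward and candidate set below are illustrative:

    import math

    def reward(y, y_star):
        """Toy task reward: negative Hamming distance between equal-length strings."""
        return -sum(a != b for a, b in zip(y, y_star))

    def exponentiated_payoff(candidates, y_star, tau=1.0):
        """q(y | y*) over a small enumerated candidate set."""
        scores = [math.exp(reward(y, y_star) / tau) for y in candidates]
        z = sum(scores)
        return [s / z for s in scores]

    y_star = "abcd"
    candidates = ["abcd", "abcx", "abxx", "xxxx"]
    q = exponentiated_payoff(candidates, y_star, tau=0.5)

    # Reward augmented maximum likelihood: a q-weighted sum of negative
    # log-likelihoods under the model, instead of the single ground-truth term.
    log_p = {y: math.log(0.25) for y in candidates}      # placeholder uniform model
    loss = -sum(qi * log_p[y] for qi, y in zip(q, candidates))
    print([round(qi, 3) for qi in q], round(loss, 3))
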
Deep Learning for Acoustic Modeling in Parametric Speech Generation: A systematic review of existing techniques and future trends
Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) are the two most common types of acoustic models used in statistical parametric approaches for generating low-level speech waveforms …
  • Citations: 190 · Highly influential citations: 11 · PDF available
Japanese and Korean voice search
This paper describes challenges and solutions for building a successful voice search system as applied to Japanese and Korean at Google. We describe the techniques used to deal with an infinite …
  • Citations: 211 · Highly influential citations: 9
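
The truncated sentence above refers to coping with an effectively unbounded vocabulary; this paper is commonly credited with the wordpiece approach of segmenting text into units from a fixed subword inventory. Below is a hedged sketch of greedy longest-match segmentation only; the inventory and the details of the algorithm are illustrative, not the paper's:

    def segment(text: str, inventory: set) -> list:
        """Greedily match the longest known piece at each position."""
        pieces, i = [], 0
        while i < len(text):
            for j in range(len(text), i, -1):            # try longest match first
                if text[i:j] in inventory or j == i + 1: # unknown single chars pass through
                    pieces.append(text[i:j])
                    i = j
                    break
        return pieces

    inventory = {"voice", "search", "vo", "ice", "sea", "rch"}
    print(segment("voicesearch", inventory))             # ['voice', 'search']
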
Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling
Lingvo is a TensorFlow framework offering a complete solution for collaborative deep learning research, with a particular focus towards sequence-to-sequence models. Lingvo models are composed of …
  • Citations: 69 · Highly influential citations: 8 · PDF available