Continuous space language models

@article{Schwenk2007ContinuousSL,
  title={Continuous space language models},
  author={Holger Schwenk},
  journal={Comput. Speech Lang.},
  year={2007},
  volume={21},
  pages={492-518}
}
  • Holger Schwenk
  • Published 1 July 2007
  • Computer Science
  • Comput. Speech Lang.
Two Efficient Lattice Rescoring Methods Using Recurrent Neural Network Language Models
TLDR
Two efficient lattice rescoring methods for RNNLMs are proposed; they produce 1-best performance comparable to a 10k-best rescoring baseline RNNLM system on two large-vocabulary conversational telephone speech recognition tasks for US English and Mandarin Chinese.
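To make the rescoring idea in the entry above concrete, here is a minimal n-best rescoring sketch in Python; the function name, the LM scale, and the interpolation weight are illustrative assumptions, and the paper's lattice methods operate on lattice arcs rather than flat hypothesis lists.

```python
# Minimal n-best rescoring sketch (hypothetical API); a lattice rescorer would
# traverse lattice arcs and merge states instead of looping over whole hypotheses.
import math
from typing import Callable, List, Tuple

def rescore_nbest(
    hypotheses: List[Tuple[List[str], float, float]],  # (words, acoustic log-prob, n-gram LM log-prob)
    rnnlm_logprob: Callable[[List[str]], float],       # assumed: total RNNLM log-prob of a word sequence
    lm_weight: float = 12.0,                            # LM scale factor (illustrative value)
    interp: float = 0.5,                                # RNNLM / n-gram interpolation weight
) -> List[str]:
    """Return the 1-best hypothesis after rescoring with the RNNLM."""
    best, best_score = None, -math.inf
    for words, ac_logp, ng_logp in hypotheses:
        lm_logp = interp * rnnlm_logprob(words) + (1.0 - interp) * ng_logp
        score = ac_logp + lm_weight * lm_logp           # log-linear combination
        if score > best_score:
            best, best_score = words, score
    return best
```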
Exploiting Future Word Contexts in Neural Network Language Models for Speech Recognition
TLDR
A novel neural network language model structure, the succeeding-word RNNLM (su-RNNLM), is proposed; it is more efficient to train than bi-directional models and can be applied to lattice rescoring.
Structured Output Layer Neural Network Language Models for Speech Recognition
TLDR
A novel neural network language model (NNLM) that relies on word clustering to structure the output vocabulary, the Structured OUtput Layer (SOUL) NNLM, is extended to handle arbitrarily sized vocabularies, dispensing with the shortlists commonly used in NNLMs.
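The word-clustering idea behind a structured output layer can be sketched as a two-level factorization of the softmax, P(w | h) = P(c(w) | h) * P(w | c(w), h); the sizes, weights, and class assignment below are toy placeholders, and SOUL itself uses a deeper tree over the vocabulary.

```python
# Sketch of a class-factorized output layer: P(w | h) = P(c | h) * P(w | c, h).
# Real SOUL models use a multi-level tree; this shows the two-level special case.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, n_classes, words_per_class = 64, 10, 50  # toy sizes

W_class = rng.normal(scale=0.1, size=(n_classes, hidden_dim))                   # class softmax weights
W_word = rng.normal(scale=0.1, size=(n_classes, words_per_class, hidden_dim))   # per-class word weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def word_logprob(h, class_id, word_in_class):
    """log P(word | history) for a word identified by (class, index within class)."""
    p_class = softmax(W_class @ h)[class_id]                 # P(c | h), softmax over n_classes
    p_word = softmax(W_word[class_id] @ h)[word_in_class]    # P(w | c, h), softmax within one class only
    return np.log(p_class) + np.log(p_word)

h = rng.normal(size=hidden_dim)                              # hidden state from the NNLM
print(word_logprob(h, class_id=3, word_in_class=17))
```

Only the class softmax and the softmax within one class are evaluated per prediction, which is what removes the need for a fixed shortlist over the full vocabulary.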
Comparison of Various Neural Network Language Models in Speech Recognition
  • Lingyun Zuo, Jian Liu, Xin Wan
  • Computer Science
    2016 3rd International Conference on Information Science and Control Engineering (ICISCE)
  • 2016
TLDR
This paper compares count models to feedforward, recurrent, and LSTM neural networks on conversational telephone speech recognition tasks, and puts forward a language model estimation method that incorporates information from preceding sentences.
Language models for automatic speech recognition : construction and complexity control
TLDR
Experiments on Finnish and English text corpora show that the proposed pruning method gives considerable improvements over previous pruning algorithms for Kneser-Ney smoothed models and also outperforms entropy-pruned Good-Turing smoothed models.
Deep Neural Network Language Models for Low Resource Languages
TLDR
The neural network language models are thoroughly evaluated on test corpora from the IARPA Babel program and compared to state-of-the-art n-gram backoff language models trained with Kneser–Ney smoothing.
Scalable Recurrent Neural Network Language Models for Speech Recognition
TLDR
This thesis further explores recurrent neural networks for automatic speech recognition from the language modeling perspective, and investigates the integration of metadata into RNNLMs for ASR.
Continuous space models with neural networks in natural language processing. (Modèles neuronaux pour la modélisation statistique de la langue)
  • H. Le
  • Computer Science
  • 2012
TLDR
The first contribution of this dissertation is the definition of a neural architecture based on a tree representation of the output vocabulary, namely the Structured OUtput Layer (SOUL), which makes such models well suited to large-scale settings.
Training Continuous Space Language Models: Some Practical Issues
TLDR
This work studies the performance and behavior of two neural statistical language models to highlight important caveats of the classical training algorithms, and introduces a new initialization scheme and new training techniques that greatly reduce training time and significantly improve performance.
From Feedforward to Recurrent LSTM Neural Networks for Language Modeling
TLDR
This paper compares count models to feedforward, recurrent, and long short-term memory (LSTM) neural network variants on two large-vocabulary speech recognition tasks, and analyzes the potential improvements that can be obtained when applying advanced algorithms to the rescoring of word lattices on large-scale setups.
...

References

SHOWING 1-10 OF 75 REFERENCES
Connectionist language modeling for large vocabulary continuous speech recognition
  • Holger Schwenk, J. Gauvain
  • Computer Science
    2002 IEEE International Conference on Acoustics, Speech, and Signal Processing
  • 2002
TLDR
The connectionist language model is evaluated on the DARPA HUB5 conversational telephone speech recognition task, and preliminary results show consistent improvements in both perplexity and word error rate.
Efficient training of large neural networks for language modeling
  • H. Schwenk
  • Computer Science
    2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
  • 2004
TLDR
The described approach achieves significant word error reductions with respect to a carefully tuned 4-gram backoff language model in a state-of-the-art conversational speech recognizer for the DARPA rich transcription evaluations.
Training Neural Network Language Models on Very Large Corpora
TLDR
New algorithms to train a neural network language model on very large text corpora are presented, making the approach usable in domains where several hundred million words of text are available.
Neural network language models for conversational speech recognition
TLDR
The generalization behavior of the neural network LM is analyzed for in-domain training corpora varying from 7M to over 21M words, and significant word error reductions were observed compared to a carefully tuned 4-gram backoff language model in a state-of-the-art conversational speech recognizer for the NIST rich transcription evaluations.
Building continuous space language models for transcribing european languages
TLDR
The recognition of French broadcast news and English and Spanish parliament speeches is addressed, tasks for which fewer resources are available, and a neural network language model that takes better advantage of the limited amount of training data is applied.
Hierarchical Probabilistic Neural Network Language Model
TLDR
A hierarchical decomposition of the conditional probabilities, constrained by prior knowledge extracted from the WordNet semantic hierarchy, is introduced; it yields a speed-up of about 200 during both training and recognition.
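In general form, such a hierarchical decomposition writes the word probability as a product of decisions along the path from the tree root to the word's leaf; the binary-tree form below is one common instance (the paper's tree is derived from WordNet):

```latex
% Hierarchical factorization over a word tree: each node n_i on the path to w
% contributes one (binary) decision b_i(w), giving
P(w \mid h) \;=\; \prod_{i=1}^{d(w)} P\bigl(b_i(w) \mid n_i, h\bigr)
```

With a roughly balanced tree, d(w) is about log2 |V|, so each prediction costs O(log |V|) node decisions instead of an O(|V|) softmax, which is where a speed-up of this order comes from.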
Using Continuous Space Language Models for Conversational Speech Recognition
TLDR
This paper describes a new approach that estimates the language model probabilities in a continuous space, thereby allowing smooth interpolation for unobserved n-grams.
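A minimal sketch of such a continuous-space (feedforward) language model follows; the layer sizes, shortlist size, and initialization are illustrative rather than the paper's, and in Schwenk's setup words outside the shortlist are handled by backing off to a conventional n-gram model.

```python
# Sketch of a feedforward continuous-space LM: embed the history words,
# concatenate, pass through a hidden layer, softmax over a shortlist.
import numpy as np

rng = np.random.default_rng(1)
vocab, shortlist, emb_dim, hidden_dim, order = 10000, 2000, 100, 200, 4  # illustrative sizes

E = rng.normal(scale=0.1, size=(vocab, emb_dim))                  # word embeddings (projection layer)
W_h = rng.normal(scale=0.1, size=(hidden_dim, (order - 1) * emb_dim))
W_o = rng.normal(scale=0.1, size=(shortlist, hidden_dim))         # output weights, shortlist words only

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def next_word_probs(history_ids):
    """P(w | history) over the shortlist for an (order-1)-word history."""
    x = np.concatenate([E[i] for i in history_ids])               # continuous representation of the history
    h = np.tanh(W_h @ x)                                          # hidden layer
    return softmax(W_o @ h)                                       # normalized shortlist probabilities

probs = next_word_probs([12, 404, 7])                             # three history words for a 4-gram model
print(probs.shape, probs.sum())
```

Because similar histories map to nearby points in the continuous space, unseen n-grams receive smooth, non-zero estimates, which is the interpolation behavior the summary refers to.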
A Cache-Based Natural Language Model for Speech Recognition
  • R. Kuhn, R. Mori
  • Computer Science
    IEEE Trans. Pattern Anal. Mach. Intell.
  • 1990
TLDR
A novel kind of language model which reflects short-term patterns of word use by means of a cache component (analogous to cache memory in hardware terminology) is presented; it also contains a traditional 3-gram component.
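The cache mechanism amounts to interpolating a distribution over recently observed words with a static model; a minimal sketch, assuming a generic static_prob callable and illustrative cache size and interpolation weight:

```python
# Sketch of a cache-based LM: P(w | h) = lam * P_cache(w) + (1 - lam) * P_static(w | h).
# The cache size and interpolation weight lam are illustrative.
from collections import Counter, deque

class CacheLM:
    def __init__(self, static_prob, cache_size=200, lam=0.1):
        self.static_prob = static_prob         # assumed callable: (word, history) -> probability
        self.cache = deque(maxlen=cache_size)  # short-term memory of recently seen words
        self.lam = lam

    def prob(self, word, history):
        counts = Counter(self.cache)
        p_cache = counts[word] / len(self.cache) if self.cache else 0.0
        return self.lam * p_cache + (1.0 - self.lam) * self.static_prob(word, history)

    def observe(self, word):
        self.cache.append(word)                # oldest entries fall out automatically

# Usage with a uniform placeholder in place of a real 3-gram model:
lm = CacheLM(static_prob=lambda w, h: 1.0 / 50000)
lm.observe("network"); lm.observe("network")
print(lm.prob("network", ["neural"]))
```

Recently seen words get boosted probability, capturing the short-term repetition of word use the summary describes.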
Conversational telephone speech recognition
This paper describes the development of a speech recognition system for the processing of telephone conversations, starting with a state-of-the-art broadcast news transcription system. We identify
The use of a linguistically motivated language model in conversational speech recognition
TLDR
A linguistically motivated and computationally efficient almost-parsing language model is developed, using a data structure derived from constraint dependency grammar parsing, that tightly integrates knowledge of words, lexical features, and syntactic constraints in all stages of a complex, multi-pass conversational telephone speech recognition system.
...