Corpus ID: 17663521

RECOGNITION WITH WEIGHTED FINITE-STATE TRANSDUCERS

@inproceedings{Mohri2006RECOGNITIONWW,
  title={RECOGNITION WITH WEIGHTED FINITE-STATE TRANSDUCERS},
  author={Mehryar Mohri and Michael Riley},
  year={2006}
}
This chapter describes a general representation and algorithmic framework for speech recognition based on weighted finite-state transducers. These transducers provide a common and natural representation for major components of speech recognition systems, including hidden Markov models (HMMs), context-dependency models, pronunciation dictionaries, statistical grammars, and word or phone lattices. General algorithms for building and optimizing transducer models are presented, including… 
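
The core operation behind this shared representation is weighted transducer composition: the HMM, context-dependency, pronunciation, and grammar transducers are combined into a single recognition network. As a rough illustration only, the following Python sketch composes two epsilon-free weighted transducers over the tropical semiring (min, +); the data layout, state names, and toy lexicon/grammar arcs are assumptions made for this example, not the chapter's implementation, which in practice relies on an optimized library such as OpenFst.

from collections import defaultdict

def compose(t1, t2):
    """Compose two epsilon-free weighted transducers over the tropical
    semiring.  Each transducer is (arcs, start, finals), where arcs maps
    state -> list of (ilabel, olabel, weight, nextstate) and finals maps
    final state -> final weight.  Weights combine by addition."""
    arcs1, start1, finals1 = t1
    arcs2, start2, finals2 = t2
    arcs, finals = defaultdict(list), {}
    start = (start1, start2)
    stack, seen = [start], {start}
    while stack:
        q1, q2 = stack.pop()
        if q1 in finals1 and q2 in finals2:
            finals[(q1, q2)] = finals1[q1] + finals2[q2]
        for i1, o1, w1, n1 in arcs1.get(q1, []):
            for i2, o2, w2, n2 in arcs2.get(q2, []):
                if o1 == i2:  # match T1 output label against T2 input label
                    dst = (n1, n2)
                    arcs[(q1, q2)].append((i1, o2, w1 + w2, dst))
                    if dst not in seen:
                        seen.add(dst)
                        stack.append(dst)
    return dict(arcs), start, finals

# Toy example: a one-arc pronunciation transducer L (phone -> word) composed
# with a one-arc grammar acceptor G carrying a language-model cost.
L = ({0: [("ey", "a", 0.5, 1)]}, 0, {1: 0.0})
G = ({0: [("a", "a", 2.0, 1)]}, 0, {1: 0.0})
arcs, start, finals = compose(L, G)
print(arcs[start])   # [('ey', 'a', 2.5, (1, 1))]

In practice, composition with epsilon transitions requires a composition filter, and the composed network is then optimized with algorithms of the kind this chapter surveys, such as determinization and weight pushing.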

References

Showing 1-10 of 45 references

A weight pushing algorithm for large vocabulary speech recognition

TLDR
A weight pushing algorithm is presented that modifies the weights of a given weighted transducer such that the transition probabilities form a stochastic distribution, resulting in an equivalent transducer whose weight distribution is better suited to pruning and speech recognition.
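
As a brief aid to the reader, the reweighting performed by weight pushing can be summarized in general semiring notation (a sketch, not a restatement of the paper's derivation): with d(q) the shortest distance from state q to the final states, each transition weight and final weight is rescaled by the potentials of its endpoints,
\[
w'[e] = d(p[e])^{-1} \otimes w[e] \otimes d(n[e]), \qquad
\rho'[q] = d(q)^{-1} \otimes \rho[q], \qquad
\lambda' = \lambda \otimes d(i),
\]
where p[e] and n[e] are the origin and destination of transition e, \rho the final weights, and \lambda the initial weight. In the probability semiring, the weights of the transitions leaving each state then sum to one, which is the stochasticity property exploited for pruning.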

A generalized construction of integrated speech recognition transducers

TLDR
This work generalizes the earlier construction of the integrated speech recognition transducer to an arbitrary number of component transducers and, to a large extent, relaxes the constraints on the types of input transducers by providing more general solutions to the underlying construction problems.

Integrated context-dependent networks in very large vocabulary speech recognition

TLDR
It is shown that an efficient recognition network including context-dependent and HMM models can be built using weighted determinization of transducers, and that the size of the integrated context-dependent networks constructed can be dramatically reduced using a factoring algorithm.

Network optimizations for large-vocabulary speech recognition

Finite-State Transducers in Language and Speech Processing

  • M. Mohri, Comput. Linguistics, 1997
TLDR
This work recalls classical theorems and gives new ones characterizing sequential string-to-string transducers, including very efficient algorithms for determinizing and minimizing these transducers, characterizations of the transducers that admit determinization, and the corresponding algorithms.
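
To make the determinization idea concrete, here is a minimal Python sketch of the weighted subset construction for an acceptor over the tropical semiring (min, +); the data layout and names are assumptions for illustration rather than the article's algorithm statement, and the loop terminates only when the input is determinizable.

from collections import defaultdict

def determinize(arcs, start, finals):
    """Weighted determinization of an acceptor over the tropical semiring.
    `arcs` maps state -> list of (label, weight, nextstate); `finals` maps
    final state -> final weight.  Deterministic states are frozensets of
    (original state, residual weight) pairs."""
    det_start = frozenset({(start, 0.0)})
    det_arcs, det_finals = defaultdict(list), {}
    queue, seen = [det_start], {det_start}
    while queue:
        S = queue.pop()
        fin = [r + finals[q] for q, r in S if q in finals]
        if fin:
            det_finals[S] = min(fin)
        by_label = defaultdict(list)
        for q, r in S:
            for label, w, nxt in arcs.get(q, []):
                by_label[label].append((r + w, nxt))
        for label, cands in by_label.items():
            W = min(c for c, _ in cands)     # weight of the deterministic arc
            residual = {}                    # leftover weight kept in the subset
            for c, nxt in cands:
                residual[nxt] = min(residual.get(nxt, float("inf")), c - W)
            T = frozenset(residual.items())
            det_arcs[S].append((label, W, T))
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return det_arcs, det_start, det_finals

# Toy acceptor with two weighted paths over the same label sequence.
arcs = {0: [("a", 1.0, 1), ("a", 2.0, 2)], 1: [("b", 3.0, 3)], 2: [("b", 1.0, 3)]}
det_arcs, det_start, det_finals = determinize(arcs, 0, {3: 0.0})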

Dynamic Compilation of Weighted Context-Free Grammars

TLDR
An efficient algorithm is described for compiling into weighted finite automata an interesting class of weighted context-free grammars that represent regular languages and can be combined with other speech recognition components.
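
As a hedged illustration of the simplest case of such a compilation, the sketch below converts a weighted right-linear grammar (rules of the form A -> a B / w or A -> a / w), which always denotes a regular language, into a weighted automaton over the tropical semiring; the rule encoding and names are assumptions for this example, and the cited work handles a substantially broader grammar class.

def rightlinear_to_wfsa(rules, start_symbol):
    """Compile a weighted right-linear grammar into a weighted finite
    automaton over the tropical semiring.  `rules` is a list of tuples
    (lhs, terminal, rhs_nonterminal_or_None, weight); states are the
    nonterminals plus a single final state "FINAL"."""
    arcs, finals = {}, {"FINAL": 0.0}
    for lhs, terminal, rhs, weight in rules:
        dst = rhs if rhs is not None else "FINAL"
        arcs.setdefault(lhs, []).append((terminal, weight, dst))
    return arcs, start_symbol, finals

# Toy grammar (weights are negative log probabilities):
#   S -> please S1 / 0.7,   S1 -> call / 0.2,   S -> call / 1.5
rules = [("S", "please", "S1", 0.7),
         ("S1", "call", None, 0.2),
         ("S", "call", None, 1.5)]
arcs, start, finals = rightlinear_to_wfsa(rules, "S")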

Statistical Modeling for Unit Selection in Speech Synthesis

TLDR
A general statistical modeling framework for unit selection, inspired by automatic speech recognition, is introduced; techniques based on this framework can yield more accurate unit selection, thereby improving the overall quality of a speech synthesizer.

Full expansion of context-dependent networks in large vocabulary speech recognition

We combine our earlier approach to context-dependent network representation with our algorithm for determinizing weighted networks to build optimized networks for large-vocabulary speech recognition.

Finite-State Approximation of Phrase Structure Grammars

TLDR
An algorithm is described that computes finite-state approximations for context-free grammars and equivalent augmented phrase-structure grammar formalisms; the approximation is exact for certain context-free grammars generating regular languages, including all left-linear and right-linear context-free grammars.

Stochastic pronunciation modelling from hand-labelled phonetic corpora