Pinched lattice minimum Bayes risk discriminative training for large vocabulary continuous speech recognition

@inproceedings{Doumpiotis2004PinchedLM,
  title={Pinched lattice minimum Bayes risk discriminative training for large vocabulary continuous speech recognition},
  author={Vlasios Doumpiotis and William J. Byrne},
  booktitle={INTERSPEECH},
  year={2004}
}
Iterative estimation procedures that minimize empirical risk based on general loss functions, such as the Levenshtein distance, have been derived as extensions of the Extended Baum-Welch algorithm. While reducing expected loss on training data is a desirable training criterion, these algorithms can be difficult to apply. Unlike MMI estimation, they require an explicit listing of the hypotheses to be considered, and in complex problems such lists tend to be prohibitively large. To…
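The "explicit listing of hypotheses" that the abstract identifies as a bottleneck can be made concrete with a small sketch. The code below is illustrative only and is not the paper's method: it shows minimum Bayes risk selection over a hypothetical N-best list, where the expected Levenshtein loss of every hypothesis is computed against every other hypothesis under assumed posterior weights. The quadratic pairwise loop is exactly what becomes prohibitive as the hypothesis list grows, motivating lattice-based approaches.

```python
# Illustrative sketch (not the paper's algorithm): minimum Bayes risk
# selection over an explicit N-best list with Levenshtein (word edit) loss.
# The hypotheses and posterior weights below are invented examples.

def levenshtein(a, b):
    """Word-level edit distance between token sequences a and b."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # dp[j] = distance from a[:i] to b[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[n]

def mbr_select(hyps, posteriors):
    """Pick the hypothesis minimizing expected loss over the listed hypotheses."""
    def expected_loss(h):
        # Sum over every competing hypothesis -- this pairwise loop is the
        # cost that explodes when the hypothesis list is large.
        return sum(p * levenshtein(h, other)
                   for other, p in zip(hyps, posteriors))
    return min(hyps, key=expected_loss)

nbest = [["a", "b", "c"], ["a", "b"], ["a", "x", "c"]]  # toy N-best list
post = [0.5, 0.3, 0.2]                                  # toy posteriors
best = mbr_select(nbest, post)  # -> ["a", "b", "c"]
```

Lattice pinching, as the title suggests, addresses this by constraining the hypothesis space rather than enumerating it exhaustively.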

