Probabilistic lexical modeling and unsupervised training for zero-resourced ASR

@inproceedings{Rasipuram2013ProbabilisticLM,
  title={Probabilistic lexical modeling and unsupervised training for zero-resourced ASR},
  author={Ramya Rasipuram and Marzieh Razavi and Mathew Magimai-Doss},
  booktitle={2013 IEEE Workshop on Automatic Speech Recognition and Understanding},
  year={2013},
  pages={446--451}
}
Standard automatic speech recognition (ASR) systems rely on transcribed speech, language models, and pronunciation dictionaries to achieve state-of-the-art performance. The unavailability of these resources limits the reach of ASR technology across many languages. In this paper, we propose a novel zero-resourced ASR approach to train acoustic models that uses only a list of probable words from the language of interest. The proposed approach is based on Kullback-Leibler divergence based…
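To make the Kullback-Leibler divergence mentioned in the abstract concrete, the following is a minimal sketch of how a KL-based local score can be computed between two discrete distributions, such as an HMM state's categorical distribution and a posterior feature vector. All names and values here are hypothetical illustrations, not the paper's actual implementation.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions of equal length.

    eps guards against log(0) when a component is zero.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical example: a state's categorical distribution over phone
# classes (y_d) scored against a posterior feature at time t (z_t).
y_d = [0.7, 0.2, 0.1]
z_t = [0.6, 0.3, 0.1]
score = kl_divergence(y_d, z_t)  # lower score = better match
```

The divergence is zero only when the two distributions coincide, so smaller scores indicate a closer match between the observed posterior and the state model.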
