2016 BUT Babel system: Multilingual BLSTM acoustic model with i-vector based adaptation

Abstract

This paper provides an analysis of the BUT automatic speech recognition (ASR) systems built for the 2016 IARPA Babel evaluation. The IARPA Babel program concentrates on building ASR systems for many low-resource languages, where only a limited amount of transcribed speech is available per language. In such a scenario, we found it essential to train the ASR systems in a multilingual fashion. In this work, we report superior results obtained with pre-trained multilingual BLSTM acoustic models, using multi-task training with a separate classification layer for each language. The results reported on three Babel Year 4 languages show absolute WER reductions of over 3% obtained from such multilingual pre-training. Experiments with different input features show that the multilingual BLSTM performs best with simple log-Mel-filter-bank outputs, which makes our previously successful multilingual stack bottleneck features with CMLLR adaptation obsolete. Finally, we experiment with different configurations of i-vector based speaker adaptation in the mono- and multilingual BLSTM architectures, yielding additional WER reductions of over 1% absolute.
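The two central ideas of the abstract — a shared multilingual network with a separate classification head per language, and i-vector based speaker adaptation by appending an utterance-level i-vector to every input frame — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the shared BLSTM stack is stood in for by a single affine layer to keep the sketch runnable, and all dimensions, language names, and target counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 40 log-Mel filter banks
# per frame, a 100-dimensional i-vector for speaker adaptation.
N_FBANK, N_IVEC, N_HIDDEN = 40, 100, 64
LANG_TARGETS = {"lang_a": 45, "lang_b": 50, "lang_c": 48}  # hypothetical sizes

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared parameters stand in for the multilingual BLSTM layers.
W_shared = rng.standard_normal((N_FBANK + N_IVEC, N_HIDDEN)) * 0.01
# One classification head per language (the multi-task part).
heads = {lang: rng.standard_normal((N_HIDDEN, n)) * 0.01
         for lang, n in LANG_TARGETS.items()}

def forward(fbank_frames, ivector, lang):
    """Append the utterance i-vector to every frame, run the shared
    layers, then the language-specific softmax head."""
    T = fbank_frames.shape[0]
    x = np.hstack([fbank_frames, np.tile(ivector, (T, 1))])
    h = np.tanh(x @ W_shared)          # shared multilingual representation
    return softmax(h @ heads[lang])    # per-language posteriors

frames = rng.standard_normal((120, N_FBANK))  # 120 frames of log-Mel features
ivec = rng.standard_normal(N_IVEC)
post = forward(frames, ivec, "lang_b")
print(post.shape)  # (120, 50): per-frame posteriors over lang_b targets
```

In multilingual pre-training, all languages share `W_shared` while each update only touches the head of the language the current utterance belongs to; for a new target language, the shared layers are kept and a fresh head is attached.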

7 Figures and Tables

Cite this paper

@inproceedings{Karafit20172016BB,
  title={2016 BUT Babel system: Multilingual BLSTM acoustic model with i-vector based adaptation},
  author={Martin Karafi{\'a}t and Murali Karthick Baskar and Pavel Mat{\v e}jka and Karel Vesel{\'y} and Franti{\v s}ek Gr{\'e}zl and Luk{\'a}{\v s} Burget and Jan {\v C}ernock{\'y}},
  year={2017}
}