Moses: Open Source Toolkit for Statistical Machine Translation

Abstract

We describe an open-source toolkit for statistical machine translation whose novel contributions are (a) support for linguistically motivated factors, (b) confusion network decoding, and (c) efficient data formats for translation models and language models. In addition to the SMT decoder, the toolkit also includes a wide variety of tools for training, tuning, and applying the system to many translation tasks.

1 Motivation

Phrase-based statistical machine translation (Koehn et al. 2003) has emerged as the dominant paradigm in machine translation research. However, until now, most work in this field has been carried out on proprietary, in-house research systems. This lack of openness has created a high barrier to entry for researchers, since many of the required components have had to be duplicated, and it has hindered effective comparison of the different elements of these systems. By providing a free and complete toolkit, we hope to stimulate the development of the field.

For this system to be adopted by the community, it must demonstrate performance comparable to the best available systems. Moses has achieved translation quality and run-time comparable to the most competitive and widely used statistical machine translation systems (Shen et al. 2006), and it provides all the capabilities of the closed-source Pharaoh decoder (Koehn 2004).

Apart from providing an open-source toolkit for SMT, a further motivation for Moses is to extend phrase-based translation with factors and confusion network decoding. The current phrase-based approach to statistical machine translation is limited to the mapping of small text chunks without any explicit use of linguistic information, be it morphological, syntactic, or semantic. These additional sources of information have been shown to be valuable when integrated into pre-processing or post-processing steps.

Moses also integrates confusion network decoding, which allows the translation of ambiguous input. This enables, for instance, tighter integration of speech recognition and machine translation: instead of passing along only the one-best output of the recognizer, a network of different word choices can be examined by the machine translation system, as sketched below.

Efficient data structures in Moses for the memory-intensive translation model and language model allow the exploitation of much larger data resources with limited hardware.
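To make the two input representations discussed above concrete, the following Python sketch (not Moses code; the factor ordering, example words, and probabilities are illustrative assumptions) shows a factored token stream and a small confusion network, and enumerates the alternative paths a decoder could score instead of committing to a single one-best string.

    # Minimal sketch, assuming a surface|lemma|pos factor layout and
    # invented recognizer posteriors; not taken from the Moses toolkit.
    from itertools import product

    # A factored token: each surface word carries additional linguistic factors.
    factored_sentence = ["houses|house|NNS", "are|be|VBP", "blue|blue|JJ"]
    tokens = [tok.split("|") for tok in factored_sentence]
    print(tokens[0])  # ['houses', 'house', 'NNS']

    # A confusion network: one column per input position, each column holding
    # alternative words with probabilities (e.g. from a speech recognizer).
    confusion_network = [
        [("the", 0.7), ("a", 0.3)],
        [("ship", 0.6), ("sheep", 0.4)],
        [("sails", 0.8), ("sales", 0.2)],
    ]

    # Instead of translating only the single best hypothesis, a decoder can
    # consider every path through the network; here we simply enumerate them.
    def paths(cn):
        for combo in product(*cn):
            words = [w for w, _ in combo]
            score = 1.0
            for _, p in combo:
                score *= p
            yield words, score

    best_words, best_score = max(paths(confusion_network), key=lambda x: x[1])
    print(best_words, round(best_score, 3))  # ['the', 'ship', 'sails'] 0.336

In a real decoder the network is not enumerated exhaustively; the word-choice probabilities simply enter the log-linear model as an additional feature while the search explores the network directly.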
