Multi-Task Learning for Multiple Language Translation


In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language into multiple target languages. Our solution is inspired by the recently proposed neural machine translation model, which generalizes machine translation as a sequence learning problem. We extend neural machine translation to a multi-task learning framework that shares the source language representation and separates the modeling of each target language translation. Our framework can be applied whether large amounts of parallel data or only limited parallel data are available. Experiments show that our multi-task learning model achieves significantly higher translation quality than individually learned models in both situations on publicly available data sets.
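The core idea of the framework — one shared source-side encoder feeding a separate decoder per target language — can be illustrated with a minimal sketch. This is not the authors' implementation: the mean-pooled embedding below stands in for their recurrent encoder, the linear projections stand in for real decoders, and all sizes and language codes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Maps a source sentence to a fixed vector; shared across all target languages."""
    def __init__(self, vocab_size, dim):
        self.embed = rng.normal(0.0, 0.1, (vocab_size, dim))

    def __call__(self, token_ids):
        # Mean of source token embeddings: a stand-in for a recurrent encoder.
        return self.embed[token_ids].mean(axis=0)

class Decoder:
    """One decoder per target language; these parameters are NOT shared."""
    def __init__(self, target_vocab_size, dim):
        self.out = rng.normal(0.0, 0.1, (dim, target_vocab_size))

    def __call__(self, src_vec):
        # Score every target-vocabulary word and pick the best (greedy, 1 step).
        logits = src_vec @ self.out
        return int(logits.argmax())

# One shared encoder; separate decoders for each target language (hypothetical codes).
encoder = SharedEncoder(vocab_size=100, dim=16)
decoders = {lang: Decoder(target_vocab_size=80, dim=16) for lang in ("fr", "es", "nl")}

src = np.array([3, 14, 15])            # a toy source sentence as token ids
h = encoder(src)                       # shared source representation
preds = {lang: dec(h) for lang, dec in decoders.items()}
print(preds)
```

In training, every language pair's gradient updates the shared encoder, so source-side knowledge is pooled across all target languages, while each decoder is updated only by its own pair's data.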


Cite this paper

@inproceedings{Dong2015MultiTaskLF,
  title     = {Multi-Task Learning for Multiple Language Translation},
  author    = {Daxiang Dong and Hua Wu and Wei He and Dianhai Yu and Haifeng Wang},
  booktitle = {ACL},
  year      = {2015}
}