Multi-Task Learning for Multiple Language Translation

Abstract

In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language into multiple target languages. Our solution is inspired by the recently proposed neural machine translation model, which generalizes machine translation as a sequence learning problem. We extend neural machine translation to a multi-task learning framework that shares the source language representation while modeling each target language translation separately. Our framework can be applied whether large amounts of parallel data or only limited parallel data are available. Experiments on publicly available data sets show that, in both situations, our multi-task learning model achieves significantly higher translation quality than individually learned models.
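The core idea of the framework is a one-to-many architecture: a single encoder is shared across all language pairs, while each target language gets its own decoder. The following is a minimal structural sketch of that parameter-sharing scheme (not the authors' code; the class and counter names are illustrative assumptions), tracking only which parameter sets each training example touches:

```python
# Hypothetical sketch of the one-to-many multi-task NMT setup:
# one shared source encoder, one independent decoder per target language.

class SharedEncoderMultiDecoder:
    def __init__(self, target_langs):
        # Shared encoder parameters: updated by every language pair.
        self.encoder_params = {"updates": 0}
        # One separate decoder parameter set per target language.
        self.decoder_params = {lang: {"updates": 0} for lang in target_langs}

    def train_step(self, source_sentence, target_lang):
        # Encode the source once with the shared encoder ...
        self.encoder_params["updates"] += 1
        # ... then decode with (and update) only the matching decoder.
        self.decoder_params[target_lang]["updates"] += 1


model = SharedEncoderMultiDecoder(["fr", "es", "nl"])
for src, tgt in [("hello", "fr"), ("hello", "es"), ("hi", "fr")]:
    model.train_step(src, tgt)

# The shared encoder sees every example; each decoder sees only its own
# language's examples.
print(model.encoder_params["updates"])        # 3
print(model.decoder_params["fr"]["updates"])  # 2
```

The point of the shared encoder is that source-side representations benefit from all parallel corpora jointly, which is what makes the framework useful even when parallel data for some target languages is limited.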
