Learning principled bilingual mappings of word embeddings while preserving monolingual invariance

Abstract

Mapping word embeddings of different languages into a single space has multiple applications. In order to map from a source space into a target space, a common approach is to learn a linear transformation that minimizes the distances between translation equivalents listed in a bilingual dictionary. In this paper, we propose a framework that generalizes previous work, provides an efficient exact method to learn the optimal linear transformation, and yields the best bilingual results in translation induction while preserving monolingual performance in an analogy task.
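The abstract describes learning a linear map W from source embeddings X to target embeddings Z that minimizes the distance between dictionary translation pairs. Below is a minimal NumPy sketch of this idea: the unconstrained least-squares solution, and the orthogonality-constrained solution obtained in closed form via SVD (the orthogonal Procrustes solution, which is the standard exact method for this constraint). The random matrices and dimensions are illustrative placeholders, not data from the paper, and this is not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 50  # hypothetical: n dictionary pairs, d-dimensional embeddings
X = rng.normal(size=(n, d))  # source-language embeddings of dictionary entries
Z = rng.normal(size=(n, d))  # target-language embeddings of their translations

# Unconstrained mapping: W minimizing ||XW - Z||_F via least squares.
W_ls, *_ = np.linalg.lstsq(X, Z, rcond=None)

# Orthogonality-constrained mapping (orthogonal Procrustes):
# W = U V^T, where U S V^T is the SVD of X^T Z. This preserves
# dot products in the source space (monolingual invariance).
U, _, Vt = np.linalg.svd(X.T @ Z)
W_orth = U @ Vt

# The orthogonal map is an isometry: W^T W = I.
assert np.allclose(W_orth.T @ W_orth, np.eye(d))

# Among orthogonal maps, W_orth is optimal, so it beats e.g. the identity.
err_orth = np.linalg.norm(X @ W_orth - Z)
err_id = np.linalg.norm(X - Z)
assert err_orth <= err_id
```

The orthogonality constraint is what keeps monolingual structure intact: an orthogonal W rotates the source space without distorting distances or angles, so monolingual tasks such as analogy are unaffected by the mapping.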

Cite this paper

@inproceedings{Artetxe2016LearningPB,
  title     = {Learning principled bilingual mappings of word embeddings while preserving monolingual invariance},
  author    = {Mikel Artetxe and Gorka Labaka and Eneko Agirre},
  booktitle = {EMNLP},
  year      = {2016}
}