Publications
On the Limitations of Unsupervised Bilingual Dictionary Induction
TLDR
It is shown that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction and establishes a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.
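That trick is easy to reproduce: seed a dictionary with strings that appear in both vocabularies, then fit an orthogonal map via the Procrustes solution. A minimal numpy sketch, assuming `src_emb` and `tgt_emb` are word-to-vector dicts (names and setup are mine, not the paper's):

```python
import numpy as np

def induce_mapping(src_emb: dict, tgt_emb: dict) -> np.ndarray:
    """Fit an orthogonal source->target map seeded by identical strings.

    src_emb / tgt_emb: word -> 1-D numpy vector (same dimensionality).
    Returns W such that src_vec @ W approximates the target space.
    """
    # Weak supervision: words spelled identically in both languages.
    seed = [w for w in src_emb if w in tgt_emb]
    X = np.stack([src_emb[w] for w in seed])  # source vectors
    Y = np.stack([tgt_emb[w] for w in seed])  # target vectors
    # Orthogonal Procrustes: W = U V^T from the SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```

A word is then translated by mapping its vector with W and retrieving the nearest target-space neighbour by cosine similarity.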
Monolingual and Cross-Lingual Information Retrieval Models Based on (Bilingual) Word Embeddings
TLDR
A novel word representation learning model called Bilingual Word Embeddings Skip-Gram (BWESG) is presented which is the first model able to learn bilingual word embeddings solely on the basis of document-aligned comparable data.
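The core of BWESG-style training can be pictured as merge-and-shuffle: combine each aligned document pair into one pseudo-bilingual document so that words of both languages share skip-gram contexts. A hedged sketch using gensim (corpus and hyper-parameters are placeholders, not the paper's settings):

```python
import random
from gensim.models import Word2Vec

def pseudo_bilingual(doc_pairs, seed=0):
    """Merge each aligned document pair and shuffle the words, so that
    source and target words co-occur in skip-gram training windows."""
    rng = random.Random(seed)
    merged = []
    for src_doc, tgt_doc in doc_pairs:  # each doc: list of tokens
        doc = src_doc + tgt_doc
        rng.shuffle(doc)
        merged.append(doc)
    return merged

# doc_pairs would hold document-aligned comparable data, e.g. Wikipedia
# article pairs; tokens can be language-prefixed ("en:dog", "it:cane").
# corpus = pseudo_bilingual(doc_pairs)
# model = Word2Vec(corpus, vector_size=300, window=16, sg=1, min_count=5)
```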
SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity
TLDR
SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs, is introduced, hoping that it will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning.
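Resources of this kind are typically scored by Spearman correlation between model similarities and the human ratings; a minimal sketch, assuming `emb` maps each verb to a vector:

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate(pairs, ratings, emb):
    """Spearman correlation between cosine similarities and human ratings.

    pairs: list of (verb1, verb2); ratings: human scores; emb: word -> vector.
    """
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    sims = [cos(emb[a], emb[b]) for a, b in pairs]
    return spearmanr(sims, ratings).correlation
```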
Semantic Specialization of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints
TLDR
The evaluation shows that the Attract-Repel method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones.
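At its core, Attract-Repel optimizes max-margin costs over linguistic constraints. The simplified PyTorch sketch below shows only an attract-style term with in-batch negatives; the margin value and naming are illustrative, not the paper's exact cost:

```python
import torch
import torch.nn.functional as F

def attract_cost(left, right, margin=0.6):
    """Simplified attract term: each pair (left[i], right[i]) should be
    closer to each other than to the hardest in-batch negative, by `margin`.

    left, right: (batch, dim) tensors of vectors for constraint pairs,
    e.g. cross-lingual translations or synonyms.
    """
    left = F.normalize(left, dim=1)
    right = F.normalize(right, dim=1)
    pos = (left * right).sum(dim=1)   # cosine of each true pair
    sims = left @ right.t()           # all-pairs cosines in the batch
    sims.fill_diagonal_(-1.0)         # exclude the true pair itself
    neg = sims.max(dim=1).values      # hardest in-batch negative
    return F.relu(margin + neg - pos).mean()
```

The full method also includes a repel term for antonym pairs and a regularizer that keeps vectors close to their original distributional positions.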
A Survey of Cross-lingual Word Embedding Models
TLDR
A comprehensive typology of cross-lingual word embedding models is provided, showing that many of the models presented in the literature optimize for the same objectives, and that seemingly different models are often equivalent modulo optimization strategies, hyper-parameters, and the like.
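The equivalence claim can be made concrete. Most projection-based models minimize a variant of the same mapping objective (notation mine, not the survey's): with X and Y stacking the source and target vectors of translation pairs,

$$\min_{W}\ \lVert XW - Y \rVert_F^2 \quad\text{s.t.}\quad W^\top W = I, \qquad W^{*} = UV^\top,\ \ X^\top Y = U\Sigma V^\top,$$

so apparent differences between models reduce to the constraint placed on W, the regularizer, and the optimizer.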
MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer
TLDR
MAD-X, an adapter-based framework, is proposed that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations; it also introduces a novel invertible adapter architecture and a strong baseline method for adapting a pretrained multilingual model to a new language.
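The building block MAD-X stacks per language and per task is a small residual bottleneck; a minimal PyTorch sketch (dimensions illustrative; the paper's invertible adapters are omitted):

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter inserted after a transformer sub-layer.

    Only these small modules are trained; the pretrained model stays
    frozen, and language/task adapters can be swapped independently.
    """
    def __init__(self, hidden_dim=768, bottleneck_dim=48):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```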
AdapterHub: A Framework for Adapting Transformers
TLDR
AdapterHub is proposed, a framework that allows dynamic “stitching-in” of pre-trained adapters for different tasks and languages, enabling scalable and easy sharing of task-specific models, particularly in low-resource scenarios.
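Stitching in a pre-trained adapter takes two calls in the adapter-transformers examples that accompanied the paper; the snippet below follows those documented examples, and identifiers may differ in current releases of the library:

```python
# adapter-transformers fork of Hugging Face transformers
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
# Download a pre-trained task adapter from the Hub and activate it;
# the frozen base model is shared across all such adapters.
adapter_name = model.load_adapter("sentiment/sst-2@ukp")
model.set_active_adapters(adapter_name)
```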
Skip N-grams and Ranking Functions for Predicting Script Events
TLDR
This work aims to answer key questions about how best to identify representative event chains from a source text, gather statistics from the event chains, and choose ranking functions for predicting new script events.
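Concretely, such a pipeline counts skip-bigrams of events over the extracted chains and ranks candidate next events, e.g. by PMI; a toy Python sketch (events reduced to strings, scoring scheme mine):

```python
from collections import Counter
from math import log

def train(chains, max_skip=2):
    """Count unigram events and skip-bigrams: ordered pairs with a gap
    of at most max_skip intervening events."""
    uni, bi = Counter(), Counter()
    for chain in chains:  # chain: list of event strings
        uni.update(chain)
        for i, e1 in enumerate(chain):
            for e2 in chain[i + 1 : i + 2 + max_skip]:
                bi[(e1, e2)] += 1
    return uni, bi

def rank_next(context, candidates, uni, bi):
    """Rank candidate events by summed PMI with the context events
    (counts normalized by the unigram total, a common approximation)."""
    n = sum(uni.values())
    def pmi(a, b):
        return log(bi[(a, b)] * n / (uni[a] * uni[b])) if bi[(a, b)] else 0.0
    return sorted(candidates,
                  key=lambda c: sum(pmi(e, c) for e in context), reverse=True)
```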
HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment
We introduce HyperLex—a data set and evaluation resource that quantifies the extent of semantic category membership, that is, the type-of relation, also known as hyponymy–hypernymy or lexical entailment.
How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions
TLDR
It is empirically demonstrated that the performance of CLE models largely depends on the task at hand and that optimizing CLE models for BLI may hurt downstream performance; the analysis also identifies the most robust supervised and unsupervised CLE models.
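In such comparisons, BLI performance is usually precision@1 under nearest-neighbour retrieval in the shared space; a minimal numpy sketch (hubness corrections such as CSLS omitted):

```python
import numpy as np

def bli_p1(src_vecs, tgt_vecs, gold):
    """Precision@1 for bilingual lexicon induction.

    src_vecs: (n, d) mapped source vectors; row i queries gold pair i.
    tgt_vecs: (m, d) target vocabulary vectors.
    gold: length-n sequence of correct target row indices.
    """
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    nn = (src @ tgt.T).argmax(axis=1)  # cosine nearest neighbour
    return float((nn == np.asarray(gold)).mean())
```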