• Publications
Are Pretrained Language Models Symbolic Reasoners over Knowledge?
TLDR
This is the first study to investigate the causal relation between facts present in the training data and facts learned by the PLM, showing that PLMs seem to learn to apply some symbolic reasoning rules correctly but struggle with others, including two-hop reasoning.
ContraCAT: Contrastive Coreference Analytical Templates for Machine Translation
TLDR
This work introduces ContraCAT, a new template test set designed to individually assess the ability to handle the specific steps necessary for successful pronoun translation, and shows that current approaches to context-aware NMT rely on a set of surface heuristics that break down when translations require real reasoning.