Publications
Entity Linking via Joint Encoding of Types, Descriptions, and Context
TLDR
We present a neural, modular entity linking system that learns a unified dense representation for each entity using multiple sources of information, such as its description, contexts around its mentions, and its fine-grained types.
Joint Multilingual Supervision for Cross-lingual Entity Linking
TLDR
We propose XELMS (XEL with Multilingual Supervision) (§2), the first XEL approach that combines supervision from multiple languages jointly.
Neural Module Networks for Reasoning over Text
TLDR
Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations.
Evaluating NLP Models via Contrast Sets
TLDR
We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data.
Collectively Embedding Multi-Relational Data for Predicting User Preferences
TLDR
In this paper, we present a generic approach to factorization of relational data that collectively models all the relations in the database.
Neural Compositional Denotational Semantics for Question Answering
TLDR
We introduce an end-to-end differentiable model for interpreting questions about a knowledge graph (KG), which is inspired by formal approaches to semantics.
Revisiting the Evaluation for Cross Document Event Coreference
TLDR
We revisit the evaluation for CDEC, and discover that past works have adopted different, often inconsistent, evaluation settings, which either overlook certain mistakes in coreference decisions, or make assumptions that simplify the coreference task considerably.
Robust Named Entity Recognition with Truecasing Pretraining
TLDR
We address the problem of robustness of NER systems in data with noisy or uncertain casing, using a pretraining objective that predicts casing in text, or a truecaser, leveraging unlabeled data.
Obtaining Faithful Interpretations from Compositional Neural Networks
TLDR
We introduce the concept of module-wise faithfulness, a systematic evaluation of faithfulness in neural module networks (NMNs) for visual and textual reasoning.
Evaluating Models' Local Decision Boundaries via Contrast Sets
TLDR
We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data.