Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking
TLDR: This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset.
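The bilinear scoring idea in the TLDR above can be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: the embedding names, dimensions, and the ancestor-summing scheme are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Hypothetical embeddings: a mention vector plus embeddings for a fine
# type and its ancestor in the hierarchy (e.g. /person/artist -> /person).
mention = rng.normal(size=d)
type_embs = {"/person/artist": rng.normal(size=d), "/person": rng.normal(size=d)}
W = rng.normal(size=(d, d))  # real bilinear interaction matrix

def bilinear_score(m, t, W):
    # Compatibility of mention m and type t under the bilinear map m^T W t.
    return float(m @ W @ t)

# One way to integrate hierarchical information: a fine type also
# accumulates the bilinear scores of its ancestors, so predictions
# respect the type hierarchy rather than being scored "flat".
score = sum(bilinear_score(mention, t, W) for t in type_embs.values())
```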
Systematic Generalization: What Is Required and Can It Be Learned?
TLDR: Numerous models for grounded language understanding are proposed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated.
Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures
TLDR: We show that a broad class of models that assign probability measures to order embeddings (OE) can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts.
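A minimal sketch of the box-lattice idea described above: concepts are axis-aligned boxes, a concept's probability is its box volume, and the joint probability is the volume of the box intersection, which can be exactly zero for disjoint concepts. The specific boxes here are illustrative assumptions.

```python
import numpy as np

def box_volume(lo, hi):
    # Volume of an axis-aligned box; clipping yields zero for empty boxes.
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def intersect(lo1, hi1, lo2, hi2):
    # The intersection of two boxes is itself a box (the lattice meet).
    return np.maximum(lo1, lo2), np.minimum(hi1, hi2)

# Two concepts as boxes inside the unit square:
# P(A) = vol(A), P(A, B) = vol(A ∩ B).
a_lo, a_hi = np.array([0.0, 0.0]), np.array([0.5, 1.0])
b_lo, b_hi = np.array([0.6, 0.0]), np.array([1.0, 1.0])

p_a = box_volume(a_lo, a_hi)                           # 0.5
p_ab = box_volume(*intersect(a_lo, a_hi, b_lo, b_hi))  # 0.0 — disjoint concepts
```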
CLOSURE: Assessing Systematic Generalization of CLEVR Models
TLDR: In this work, we study how systematic the generalization of such models is, that is, to what extent they are capable of handling novel combinations of known linguistic constructs.
Iterative Search for Weakly Supervised Semantic Parsing
TLDR: We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones.
Finer Grained Entity Typing with TypeNet
TLDR: We introduce TypeNet, a dataset of entity types consisting of over 1941 types organized in a hierarchy, obtained by manually annotating a mapping from Freebase types to WordNet.
ExpBERT: Representation Engineering with Natural Language Explanations
TLDR: We use BERT fine-tuned on MultiNLI to "interpret" these explanations with respect to the input sentence, producing explanation-guided representations of the input.
Mitigating the Effect of Out-of-Vocabulary Entity Pairs in Matrix Factorization for KB Inference
TLDR: This paper analyzes the varied performance of Matrix Factorization (MF) on the related tasks of relation extraction and knowledge-base completion, which have been unified recently into a single framework of knowledge-base inference (KBI).
Embedded-State Latent Conditional Random Fields for Sequence Labeling
TLDR: We present a latent-variable CRF model with a novel mechanism for learning latent constraints without overfitting, using low-rank log-potential scoring matrices, and explore the interpretable latent structure.
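The low-rank log-potential idea in the TLDR above can be sketched as follows; the state count and rank here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
num_states, rank = 32, 4  # many latent states, but a low-rank constraint

# The full transition log-potential matrix between latent states is
# parameterized as U @ V.T, so its rank is at most `rank`. This
# regularizes the learned latent transitions without reducing the
# number of available latent states.
U = rng.normal(size=(num_states, rank))
V = rng.normal(size=(num_states, rank))
log_potentials = U @ V.T  # shape (num_states, num_states)
```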
Meta-Learning for Sample Reweighting using Sequence Models: A Natural Language Inference Case Study
Natural Language Inference (NLI) is a sequence modeling task on the critical path towards natural language understanding. However, NLI datasets and models are plagued by a series of problems…