AFET: Automatic Fine-Grained Entity Typing by Hierarchical Partial-Label Embedding
TLDR
This paper proposes a novel embedding method to separately model “clean” and “noisy” mentions, and incorporates the given type hierarchy to induce loss functions.
Tracking State Changes in Procedural Text: a Challenge Dataset and Models for Process Paragraph Comprehension
TLDR
We present a new dataset and models for comprehending paragraphs about a changing world along with a full annotation of entity states (location and existence) during those changes (81k datapoints).
Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning
TLDR
We introduce Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions.
Bridge Text and Knowledge by Learning Multi-Prototype Entity Mention Embedding
TLDR
We propose a novel Multi-Prototype Mention Embedding model, which learns multiple sense embeddings for each mention by jointly modeling words from textual contexts and entities derived from a knowledge base.
Liberal Event Extraction and Event Schema Induction
TLDR
We propose a brand new “Liberal” Event Extraction paradigm to extract events and discover event schemas from any input corpus simultaneously.
Biomedical Event Extraction based on Knowledge-driven Tree-LSTM
TLDR
We propose a novel knowledge base (KB)-driven tree-structured long short-term memory network (Tree-LSTM) framework, incorporating two new types of features: (1) dependency structures to capture wide contexts; and (2) entity properties (types and category descriptions) from external ontologies via entity linking.
Genre Separation Network with Adversarial Training for Cross-genre Relation Extraction
TLDR
We propose a genre separation framework for cross-genre relation extraction.
Improving Slot Filling Performance with Attentive Neural Networks on Dependency Structures
TLDR
We propose an effective DNN architecture for slot filling (SF) with several new strategies.
Global Attention for Name Tagging
TLDR
We present a new framework to improve name tagging by utilizing local, document-level, and corpus-level contextual information.
Describing a Knowledge Base
TLDR
We aim to automatically generate natural-language descriptions of an input structured knowledge base (KB). We build our generation framework on a pointer network that can copy facts from the input KB, and add two attention mechanisms: (i) slot-aware attention to capture the association between a slot type and its corresponding slot value; and (ii) a new “table position self-attention” to capture inter-dependencies among related slots.