Publications
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private…
  • Citations: 1,070
  • Influence: 335
Reasoning With Neural Tensor Networks for Knowledge Base Completion
Knowledge bases are an important resource for question answering and other tasks, but they often suffer from incompleteness and an inability to reason over their discrete entities and relationships. In…
  • Citations: 1,144
  • Influence: 223
Reading Wikipedia to Answer Open-Domain Questions
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of…
  • Citations: 655
  • Influence: 175
A Fast and Accurate Dependency Parser using Neural Networks
Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing…
  • Citations: 1,346
  • Influence: 121
CoQA: A Conversational Question Answering Challenge
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them…
  • Citations: 264
  • Influence: 70
Representing Text for Joint Embedding of Text and Knowledge Bases
Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences between the two kinds of relations and obtain high accuracy…
  • Citations: 302
  • Influence: 62
Observed versus latent features for knowledge base and text inference
In this paper we show the surprising effectiveness of a simple observed features model in comparison to latent feature models on two benchmark knowledge base completion datasets, FB15K and WN18. We…
  • Citations: 213
  • Influence: 60
A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task
Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved, goal of NLP. A key factor impeding its solution by machine-learned systems is the…
  • Citations: 394
  • Influence: 57
Position-aware Attention and Supervised Data Improve Slot Filling
Organized relational knowledge in the form of “knowledge graphs” is important for many applications. However, the ability to populate knowledge bases with facts automatically extracted from documents…
  • Citations: 136
  • Influence: 36
SpanBERT: Improving Pre-training by Representing and Predicting Spans
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, …
  • Citations: 159
  • Influence: 29