• Publications
Deep contextualized word representations
TLDR
A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.
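
The mixing mentioned above is done by learning a task-specific weighted combination of the layers of a pre-trained bidirectional language model (the ELMo formulation). Below is a minimal PyTorch sketch of that layer mixing; the module name LayerMix and the toy dimensions are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    class LayerMix(nn.Module):
        """Learn softmax-normalized scalar weights over frozen biLM layer
        activations, plus a task-specific scale, following the combination
        ELMo_k = gamma * sum_j s_j * h_{k,j}."""
        def __init__(self, num_layers):
            super().__init__()
            self.scalars = nn.Parameter(torch.zeros(num_layers))  # s_j (pre-softmax)
            self.gamma = nn.Parameter(torch.ones(1))               # task-specific scale

        def forward(self, layer_reps):
            # layer_reps: (num_layers, batch, seq_len, dim) from the frozen biLM
            weights = torch.softmax(self.scalars, dim=0)
            mixed = (weights.view(-1, 1, 1, 1) * layer_reps).sum(dim=0)
            return self.gamma * mixed

    # Toy usage: 3 biLM layers, batch of 2, 5 tokens, 8-dim vectors.
    print(LayerMix(3)(torch.randn(3, 2, 5, 8)).shape)  # torch.Size([2, 5, 8])
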
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
TLDR
It is found that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that it, surprisingly, continues to be very beneficial even when starting from massive pre-trained language models such as BERT.
Simple and Effective Multi-Paragraph Reading Comprehension
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well-calibrated confidence scores for their results on individual paragraphs.
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases
TLDR
This paper trains a naive model that makes predictions exclusively based on dataset biases, and then trains a robust model in an ensemble with the naive one, encouraging the robust model to focus on other patterns in the data that are more likely to generalize.
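
One way to realize this ensemble, consistent with the summary, is a product-of-experts loss: the frozen bias-only model's log-probabilities are added to the robust model's logits during training, and only the robust model is used at test time. A minimal sketch, with an illustrative function name and toy values:

    import torch
    import torch.nn.functional as F

    def bias_product_loss(robust_logits, bias_log_probs, labels):
        # Normalize the product of the two distributions and apply cross-entropy;
        # gradients flow only through the robust model's logits.
        combined = F.log_softmax(robust_logits, dim=-1) + bias_log_probs
        return F.cross_entropy(combined, labels)

    # Toy usage: 4 examples, 3 classes; bias_log_probs would come from the
    # pre-trained, frozen bias-only model (random stand-ins here).
    robust_logits = torch.randn(4, 3, requires_grad=True)
    bias_log_probs = F.log_softmax(torch.randn(4, 3), dim=-1)
    labels = torch.tensor([0, 2, 1, 0])
    print(bias_product_loss(robust_logits, bias_log_probs, labels))
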
PDFFigures 2.0: Mining figures from research papers
TLDR
An algorithm called “PDFFigures 2.0” is presented that extracts figures, tables, and captions from documents: it analyzes the structure of individual pages by detecting captions, graphical elements, and chunks of body text, and then locates figures and tables by reasoning about the empty regions within that text.
Training Deep Convolutional Neural Networks to Play Go
TLDR
The convolutional neural networks trained in this work can consistently defeat the well-known Go program GNU Go and win some games against the state-of-the-art Go program Fuego, while using a fraction of the play time.
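
As an illustration of the move-prediction setup (not the paper's exact architecture), a small convolutional network can map feature planes encoding the 19x19 board to a distribution over the 361 intersections; the depth, widths, and number of input planes below are placeholders.

    import torch
    import torch.nn as nn

    class MoveNet(nn.Module):
        """Toy move-prediction CNN: board feature planes in, per-intersection
        move logits out. Layer sizes are illustrative, not the paper's config."""
        def __init__(self, in_planes=8):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_planes, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, kernel_size=1),
            )

        def forward(self, planes):
            # planes: (batch, in_planes, 19, 19) -> logits over 361 moves
            return self.body(planes).flatten(start_dim=1)

    print(MoveNet()(torch.randn(2, 8, 19, 19)).shape)  # torch.Size([2, 361])
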
Looking Beyond Text: Extracting Figures, Tables and Captions from Computer Science Papers
TLDR
This work introduces a new dataset of 150 computer science papers with ground-truth labels for the locations of the figures, tables, and captions within them, and demonstrates a caption-to-figure matching component that is effective even when individual captions are adjacent to multiple figures.
IKE - An Interactive Tool for Knowledge Extraction
TLDR
IKE is a new extraction tool that performs fast, interactive bootstrapping to develop high-quality extraction patterns for targeted relations and is the first interactive extraction tool to seamlessly integrate symbolic and distributional methods for search.
Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles
TLDR
This paper proposes a method that automatically detects and ignores dataset-specific patterns likely to reflect dataset bias by training a lower-capacity model in an ensemble with a higher-capacity model.
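
Unlike the pre-trained bias-only model in the ensemble-based debiasing entry above, here the lower-capacity model is trained jointly with the main model, and only the main model is kept for evaluation. A minimal sketch of one joint training step, with linear stand-ins for both models (names and sizes are illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def joint_step(main, weak, optimizer, inputs, labels):
        # Sum the two models' log-probabilities so the low-capacity model can
        # absorb shallow, bias-like patterns; only `main` is used at test time.
        combined = F.log_softmax(main(inputs), dim=-1) + F.log_softmax(weak(inputs), dim=-1)
        loss = F.cross_entropy(combined, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage on 16-dim features with 3 classes.
    main, weak = nn.Linear(16, 3), nn.Linear(16, 3)
    opt = torch.optim.Adam(list(main.parameters()) + list(weak.parameters()), lr=1e-3)
    print(joint_step(main, weak, opt, torch.randn(8, 16), torch.randint(0, 3, (8,))))
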