Publications
SciBERT: A Pretrained Language Model for Scientific Text
TLDR
SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks and demonstrates statistically significant improvements over BERT.
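A minimal sketch of how the released SciBERT checkpoint can be used for feature extraction with the Hugging Face transformers library; the checkpoint name "allenai/scibert_scivocab_uncased" and this usage pattern are assumptions about the public release, not details from the paper itself.

```python
# Sketch: encode a scientific sentence with SciBERT (assumed HF checkpoint).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

sentence = "The patient was administered 50 mg of atenolol daily."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)

# Contextual embeddings for each wordpiece, usable as features for
# downstream scientific NLP tasks (NER, relation extraction, etc.).
token_embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 768)
```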
Longformer: The Long-Document Transformer
TLDR
Following prior work on long-sequence transformers, the Longformer is evaluated on character-level language modeling, where it achieves state-of-the-art results on text8 and enwik8; the authors also pretrain Longformer and finetune it on a variety of downstream tasks.
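A minimal sketch of using a long-document Longformer checkpoint, assuming the public "allenai/longformer-base-4096" model in Hugging Face transformers; Longformer combines sliding-window local attention with a small number of task-specific global-attention positions.

```python
# Sketch: encode a long document with Longformer (assumed HF checkpoint).
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

long_document = " ".join(["Scientific text about transformers."] * 500)
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=4096)

# Mark the first token for global attention; all other positions use the
# local sliding-window attention pattern.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
doc_representation = outputs.last_hidden_state[:, 0]  # pooled at the global token
```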
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
TLDR
This work proposes the first model for abstractive summarization of single, longer-form documents (e.g., research papers), consisting of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary.
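A toy sketch of the hierarchical encoding idea (a word-level encoder per discourse section followed by a section-level encoder over section vectors); the dimensions, module names, and use of GRUs are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: hierarchical encoder over the discourse sections of a long document.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.section_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, sections):
        # sections: (batch, num_sections, section_len) token ids
        b, s, l = sections.shape
        words = self.embed(sections.view(b * s, l))
        word_states, _ = self.word_rnn(words)                   # token-level states
        section_vecs = word_states.mean(dim=1).view(b, s, -1)   # one vector per section
        section_states, _ = self.section_rnn(section_vecs)      # discourse-level states
        return word_states.view(b, s, l, -1), section_states

encoder = HierarchicalEncoder()
dummy = torch.randint(0, 10000, (2, 4, 50))  # 2 docs, 4 sections, 50 tokens each
word_states, section_states = encoder(dummy)
```

A decoder would then attend over both the token-level and the discourse-level states when generating the summary.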
CEDR: Contextualized Embeddings for Document Ranking
TLDR
This work investigates how two pretrained contextualized language models (ELMo and BERT) can be utilized for ad-hoc document ranking and proposes a joint approach that incorporates BERT's classification vector into existing neural models, showing that it outperforms state-of-the-art ad-hoc ranking baselines.
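A minimal sketch of the joint idea: feed a query-document pair through BERT and combine its [CLS] vector with the output of an existing neural ranking model. The ExistingNeuralRanker module and the simple linear combination are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: combine BERT's [CLS] vector with a prior neural ranker's score.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ExistingNeuralRanker(nn.Module):
    """Stand-in for a prior neural ranking architecture."""
    def forward(self, query_doc_features):
        return query_doc_features.sum(dim=-1, keepdim=True)  # toy relevance score

class JointRanker(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.ranker = ExistingNeuralRanker()
        self.combine = nn.Linear(self.bert.config.hidden_size + 1, 1)

    def forward(self, encoded_pair, query_doc_features):
        cls_vec = self.bert(**encoded_pair).last_hidden_state[:, 0]  # [CLS] vector
        prior_score = self.ranker(query_doc_features)
        return self.combine(torch.cat([cls_vec, prior_score], dim=-1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
pair = tokenizer("deep learning", "A paper about neural networks.", return_tensors="pt")
score = JointRanker()(pair, torch.randn(1, 8))
```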
Structural Scaffolds for Citation Intent Classification in Scientific Publications
TLDR
This work proposes structural scaffolds, a multitask model that incorporates structural information of scientific papers for effective classification of citation intents, and achieves a new state of the art on an existing ACL Anthology dataset with a 13.3% absolute increase in F1 score.
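A minimal sketch of the structural-scaffold idea: one shared encoder with a main head for citation intent plus auxiliary "scaffold" heads trained on structural signals such as section title and citation worthiness. The encoder, head sizes, and loss weights below are illustrative assumptions.

```python
# Sketch: multitask training with structural scaffold objectives.
import torch
import torch.nn as nn

class ScaffoldModel(nn.Module):
    def __init__(self, input_dim=768, n_intents=6, n_sections=5):
        super().__init__()
        self.encoder = nn.Linear(input_dim, 256)        # stand-in for a sentence encoder
        self.intent_head = nn.Linear(256, n_intents)    # main task: citation intent
        self.section_head = nn.Linear(256, n_sections)  # scaffold 1: section title
        self.worthiness_head = nn.Linear(256, 2)        # scaffold 2: citation worthiness

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        return self.intent_head(h), self.section_head(h), self.worthiness_head(h)

model = ScaffoldModel()
loss_fn = nn.CrossEntropyLoss()
intent, section, worthy = model(torch.randn(4, 768))

# Multitask objective: main loss plus down-weighted scaffold losses.
loss = (loss_fn(intent, torch.randint(0, 6, (4,)))
        + 0.1 * loss_fn(section, torch.randint(0, 5, (4,)))
        + 0.1 * loss_fn(worthy, torch.randint(0, 2, (4,))))
loss.backward()
```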
Depression and Self-Harm Risk Assessment in Online Forums
TLDR
This work introduces a large-scale general forum dataset consisting of users with self-reported depression diagnoses matched with control users, proposes methods for identifying posts in support communities that may indicate a risk of self-harm, and demonstrates that this approach outperforms strong previously proposed methods.
SPECTER: Document-level Representation Learning using Citation-informed Transformers
TLDR
This work proposes SPECTER, a new method to generate document-level embeddings of scientific papers by pretraining a Transformer language model on a powerful signal of document-level relatedness, the citation graph, and shows that SPECTER outperforms a variety of competitive baselines on the benchmark.
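A minimal sketch of the SPECTER training signal: papers linked in the citation graph should embed closer together than unrelated papers, which can be expressed as a triplet margin loss over [CLS] embeddings of "title [SEP] abstract" inputs. The checkpoint name "allenai/specter", the margin, and the toy triplet below are assumptions for illustration.

```python
# Sketch: citation-informed triplet loss over paper embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

def embed(title, abstract):
    inputs = tokenizer(title + tokenizer.sep_token + abstract,
                       return_tensors="pt", truncation=True, max_length=512)
    return model(**inputs).last_hidden_state[:, 0]  # [CLS] embedding

query = embed("SciBERT", "A pretrained language model for scientific text.")
positive = embed("BERT", "Deep bidirectional transformers for language understanding.")  # cited paper
negative = embed("A birdwatching guide", "Field notes on waterfowl.")                     # unrelated paper

# Pull cited papers together, push unrelated papers apart.
loss = torch.nn.TripletMarginLoss(margin=1.0)(query, positive, negative)
```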
Fact or Fiction: Verifying Scientific Claims
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision.
Pretrained Language Models for Sequential Sentence Classification
TLDR
This work constructs a joint sentence representation that allows BERT Transformer layers to directly utilize contextual information from all words in all sentences, and achieves state-of-the-art results on four datasets, including a new dataset of structured scientific abstracts.
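A minimal sketch of the joint-representation idea: concatenate all sentences of an abstract into one BERT input separated by [SEP] tokens, then classify each sentence from the hidden state at its [SEP] position. The checkpoint, label set, and classifier head are illustrative assumptions.

```python
# Sketch: sequential sentence classification from [SEP] representations.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
classifier = nn.Linear(bert.config.hidden_size, 5)  # e.g. background/objective/method/result/conclusion

sentences = [
    "We study citation intent classification.",
    "A multitask model is proposed.",
    "It achieves state-of-the-art results.",
]
# Join sentences with [SEP]; the tokenizer appends a final [SEP], which
# serves as the marker for the last sentence.
text = tokenizer.sep_token.join(sentences)
inputs = tokenizer(text, return_tensors="pt")

hidden = bert(**inputs).last_hidden_state[0]  # (seq_len, hidden)
sep_positions = (inputs["input_ids"][0] == tokenizer.sep_token_id).nonzero(as_tuple=True)[0]
sentence_logits = classifier(hidden[sep_positions])  # one prediction per sentence
```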