Text Summarization with Pretrained Encoders
TLDR
This paper introduces a novel document-level encoder based on BERT that expresses the semantics of a document and obtains representations for its sentences, and proposes a new fine-tuning schedule that adopts different optimizers for the encoder and the decoder to alleviate the mismatch between the two.
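
As a rough illustration of the two-optimizer idea, the following minimal PyTorch sketch assigns the pretrained encoder a smaller learning rate than the freshly initialized decoder; the stand-in modules and learning rates are placeholders, not the paper's actual settings.

import torch

class Summarizer(torch.nn.Module):
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder  # pretrained document encoder (e.g. BERT)
        self.decoder = decoder  # randomly initialized decoder

# Stand-in modules so the sketch runs; a real system would use BERT and a
# Transformer decoder here.
model = Summarizer(torch.nn.Linear(768, 768), torch.nn.Linear(768, 768))

# Smaller learning rate for the pretrained encoder, larger for the fresh
# decoder, so the decoder can catch up without destabilizing the encoder.
opt_enc = torch.optim.Adam(model.encoder.parameters(), lr=2e-5)
opt_dec = torch.optim.Adam(model.decoder.parameters(), lr=1e-3)

def training_step(loss):
    opt_enc.zero_grad(); opt_dec.zero_grad()
    loss.backward()
    opt_enc.step(); opt_dec.step()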
Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization
TLDR
A novel abstractive model is proposed which is conditioned on the article’s topics and based entirely on convolutional neural networks, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.
Modeling Local Coherence: An Entity-Based Approach
TLDR
This article re-conceptualizes coherence assessment as a learning task and shows that the proposed entity-grid representation of discourse is well suited for ranking-based generation and text classification tasks.
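
A toy version of the entity grid makes the representation concrete: rows are sentences, columns are entities, and cells record the entity's grammatical role (S for subject, O for object, - for absent). The role annotations below are written by hand; the paper derives them from a parser.

sentences = [
    {"Microsoft": "S", "deal": "O"},      # sentence 1
    {"earnings": "S", "Microsoft": "O"},  # sentence 2
    {"deal": "S"},                        # sentence 3
]

entities = sorted({e for s in sentences for e in s})
grid = [[s.get(e, "-") for e in entities] for s in sentences]

print(" ".join(entities))
for row in grid:
    print(" ".join(row))

Column transitions such as S->O or O->- then serve as features for a ranking-based coherence model.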
Composition in Distributional Models of Semantics
TLDR
This article proposes a framework for representing the meaning of word combinations in vector space in terms of additive and multiplicative functions, and introduces a wide range of composition models that are evaluated empirically on a phrase similarity task.
Vector-based Models of Semantic Composition
TLDR
Under this framework, a wide range of composition models is introduced and evaluated empirically on a sentence similarity task, demonstrating that the multiplicative models are superior to the additive alternatives when compared against human judgments.
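
Since both papers above build on the same additive and multiplicative composition functions, a small NumPy sketch may help; the 4-dimensional vectors are hand-picked for illustration rather than derived from a corpus.

import numpy as np

u = np.array([1.0, 0.0, 2.0, 3.0])  # vector for one word, e.g. "practical"
v = np.array([2.0, 1.0, 0.0, 1.0])  # vector for the other, e.g. "difficulty"

additive = u + v        # p = u + v
multiplicative = u * v  # p = u * v elementwise; keeps only dimensions active in both words

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

w = np.array([2.0, 0.5, 0.5, 1.5])  # candidate paraphrase vector, e.g. "problem"
print(cosine(additive, w), cosine(multiplicative, w))

The elementwise product acts as a soft intersection of the two words' contexts, which is one intuition for why the multiplicative models track human similarity judgments more closely.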
Neural Summarization by Extracting Sentences and Words
TLDR
This work develops a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor that allows for different classes of summarization models which can extract sentences or words.
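
A compact sketch of the sentence-extraction variant, with a word-level LSTM feeding a sentence-level LSTM and a scorer on top; all dimensions and module choices here are illustrative, not the paper's configuration.

import torch

d = 128
word_enc = torch.nn.LSTM(d, d, batch_first=True)   # encodes words within a sentence
sent_enc = torch.nn.LSTM(d, d, batch_first=True)   # encodes the sentence sequence
extractor = torch.nn.Linear(d, 1)                  # scores each sentence

def extract(doc):  # doc: list of (n_words_i, d) tensors, one per sentence
    sent_vecs = torch.stack([word_enc(s.unsqueeze(0))[1][0].squeeze() for s in doc])
    ctx, _ = sent_enc(sent_vecs.unsqueeze(0))       # document-level context
    scores = extractor(ctx.squeeze(0)).squeeze(-1)  # one score per sentence
    return scores.topk(min(3, len(doc))).indices    # indices of the extracted sentences

doc = [torch.randn(7, d), torch.randn(5, d), torch.randn(9, d), torch.randn(4, d)]
print(extract(doc))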
Language to Logical Form with Neural Attention
TLDR
This paper presents a general method based on an attention-enhanced encoder-decoder model that encodes input utterances into vector representations and generates their logical forms by conditioning the output sequences or trees on the encoding vectors.
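
To make the encoder-decoder idea concrete, here is a bare-bones greedy decoding step with dot-product attention over the encoded utterance; the sizes, vocabulary, and attention form are placeholders rather than the paper's exact model.

import torch

d, vocab = 64, 100
encoder = torch.nn.LSTM(d, d, batch_first=True)
dec_cell = torch.nn.LSTMCell(d, d)
out_proj = torch.nn.Linear(2 * d, vocab)  # combines decoder state and context

def decode_step(enc_outs, h, c, y_emb):
    """enc_outs: (T, d) encoder states; h, c, y_emb: (1, d)."""
    h, c = dec_cell(y_emb, (h, c))
    scores = enc_outs @ h.squeeze(0)                   # dot-product attention
    attn = torch.softmax(scores, dim=0)
    ctx = attn @ enc_outs                              # context over the utterance
    logits = out_proj(torch.cat([h.squeeze(0), ctx]))  # next logical-form token
    return logits.argmax(-1), h, c

enc_outs, (h, c) = encoder(torch.randn(1, 12, d))      # a 12-token utterance
tok, h, c = decode_step(enc_outs.squeeze(0), h.squeeze(0), c.squeeze(0), torch.randn(1, d))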
Ranking Sentences for Extractive Summarization with Reinforcement Learning
TLDR
This paper conceptualizes extractive summarization as a sentence ranking task and proposes a novel training algorithm that globally optimizes the ROUGE evaluation metric through a reinforcement learning objective.
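
A minimal REINFORCE-style sketch of the training idea: sample an extract, score it with a summary-level reward, and scale the log-probability by that reward. The reward function below is a stand-in; the paper uses ROUGE against reference summaries.

import torch

scorer = torch.nn.Linear(128, 1)  # stand-in for the neural sentence ranker
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

def reward(selected):  # hypothetical reward; pretend sentences 0 and 2 are gold
    return float(len(set(selected) & {0, 2}))

doc = torch.randn(10, 128)                       # 10 sentence representations
probs = torch.sigmoid(scorer(doc)).squeeze(-1)   # extraction probability per sentence
dist = torch.distributions.Bernoulli(probs)
sample = dist.sample()                           # a sampled extract (sentence subset)
r = reward(sample.nonzero().squeeze(-1).tolist())
loss = -dist.log_prob(sample).sum() * r          # REINFORCE: raise P(high-reward extracts)
optimizer.zero_grad(); loss.backward(); optimizer.step()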
Long Short-Term Memory-Networks for Machine Reading
TLDR
A machine reading simulator that processes text incrementally from left to right and performs shallow reasoning with memory and attention; it extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell, offering a way to weakly induce relations among tokens.
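
The core memory-network idea can be sketched as attention over all past hidden states rather than a single recurrent cell; the parameterization below is a simplification (the actual LSTMN also keeps a tape of cell states).

import torch

d = 64
attn_scorer = torch.nn.Linear(2 * d, 1)  # scores each past state against the current one

def attend(history, h_t):
    """history: (t, d) past hidden states; h_t: (d,) current state."""
    pairs = torch.cat([history, h_t.expand(history.size(0), -1)], dim=-1)
    weights = torch.softmax(attn_scorer(pairs).squeeze(-1), dim=0)  # soft token-to-token links
    return weights @ history                                        # adaptive memory readout

print(attend(torch.randn(5, d), torch.randn(d)).shape)  # torch.Size([64])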
Dependency-Based Construction of Semantic Space Models
TLDR
This article presents a novel framework for constructing semantic spaces that takes syntactic relations into account, and introduces a formalization for this class of models, which allows linguistic knowledge to guide the construction process.
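
A counting sketch shows the contrast with window-based spaces: each dimension pairs a syntactic relation with a co-occurring word. The dependency triples here are written by hand; in practice they come from a parsed corpus.

from collections import Counter, defaultdict

triples = [  # (head, relation, dependent)
    ("drink", "dobj", "coffee"),
    ("drink", "dobj", "tea"),
    ("spill", "dobj", "coffee"),
]

space = defaultdict(Counter)
for head, rel, dep in triples:
    space[dep][(rel, head)] += 1  # dimension = relation plus co-occurring head

print(space["coffee"])  # Counter({('dobj', 'drink'): 1, ('dobj', 'spill'): 1})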