Publications
Text Summarization with Pretrained Encoders
TLDR
This paper introduces a novel document-level encoder based on BERT that expresses the semantics of a document and obtains representations for its sentences, and proposes a new fine-tuning schedule that adopts different optimizers for the encoder and the decoder to alleviate the mismatch between the two.
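The two-optimizer schedule is concrete enough to sketch. Below is a minimal PyTorch illustration (not the authors' code): the pretrained encoder gets a small learning rate with a long warmup, while the randomly initialized decoder gets a larger rate with a shorter warmup, so it can catch up without destabilizing the encoder. The defaults mirror the values reported in the paper; the helper itself and the "encoder" name prefix are assumptions.

```python
import torch

def build_optimizers(model, enc_lr=2e-3, dec_lr=0.1,
                     enc_warmup=20000, dec_warmup=10000):
    """Separate Adam optimizers so the pretrained encoder trains more gently
    than the randomly initialized decoder (hypothetical helper)."""
    enc = [p for n, p in model.named_parameters() if n.startswith("encoder")]
    dec = [p for n, p in model.named_parameters() if not n.startswith("encoder")]
    opt_enc = torch.optim.Adam(enc, lr=enc_lr, betas=(0.9, 0.999))
    opt_dec = torch.optim.Adam(dec, lr=dec_lr, betas=(0.9, 0.999))
    # Noam-style schedule: linear warmup, then inverse square-root decay.
    noam = lambda warmup: (lambda step: min((step + 1) ** -0.5,
                                            (step + 1) * warmup ** -1.5))
    sched_enc = torch.optim.lr_scheduler.LambdaLR(opt_enc, noam(enc_warmup))
    sched_dec = torch.optim.lr_scheduler.LambdaLR(opt_dec, noam(dec_warmup))
    return (opt_enc, opt_dec), (sched_enc, sched_dec)
```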
Fine-tune BERT for Extractive Summarization
  • Yang Liu · ArXiv · 25 March 2019
TLDR
Describes BERTSUM, a simple variant of BERT for extractive summarization, which achieves state-of-the-art results on the CNN/DailyMail dataset, outperforming the previous best-performing system by 1.65 on ROUGE-L.
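A minimal sketch of the BERTSUM idea, assuming the Hugging Face transformers API (an illustration, not the author's implementation): a [CLS] token is inserted before each sentence, BERT's hidden state at those positions serves as the sentence vector, and a linear layer scores each sentence for inclusion in the summary.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertSumExt(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.scorer = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, cls_positions):
        # cls_positions: (batch, n_sents) indices of the per-sentence [CLS] tokens
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        sent_vecs = hidden.gather(
            1, cls_positions.unsqueeze(-1).expand(-1, -1, hidden.size(-1)))
        return torch.sigmoid(self.scorer(sent_vecs)).squeeze(-1)  # selection scores
```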
A Dependency-Based Neural Network for Relation Classification
TLDR
Proposes a new structure, the augmented dependency path (ADP), composed of the shortest dependency path between two entities and the subtrees attached to that path, and develops a dependency-based neural network (DepNN) to exploit the semantic representation behind the ADP.
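The path component of the ADP is easy to demonstrate. The sketch below extracts the shortest dependency path between two entity mentions using spaCy and networkx; in the full ADP, the subtrees hanging off each path node would be attached as well. Looking entities up by surface string is a simplification for illustration.

```python
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def shortest_dep_path(sentence, e1, e2):
    doc = nlp(sentence)
    graph = nx.Graph()
    graph.add_edges_from((tok.i, child.i) for tok in doc for child in tok.children)
    index = {tok.text: tok.i for tok in doc}   # naive lookup; last duplicate wins
    path = nx.shortest_path(graph, index[e1], index[e2])
    return [doc[i].text for i in path]

print(shortest_dep_path("The burst has been caused by water pressure.",
                        "burst", "pressure"))
```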
Implicit Discourse Relation Classification via Multi-Task Neural Networks
TLDR
This work designs related discourse classification tasks specific to a corpus and proposes a novel multi-task learning system built on a convolutional neural network that synthesizes these tasks by learning both unique and shared representations for each task.
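A hedged sketch of the shared-plus-private design the summary describes, with illustrative dimensions: one convolutional encoder is shared across all discourse tasks, each task adds its own private encoder, and a per-task head classifies the concatenation. This is a generic multi-task CNN, not the paper's exact system.

```python
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    def __init__(self, vocab, emb=128, ch=64, n_classes=(4, 4, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.shared = nn.Conv1d(emb, ch, kernel_size=3, padding=1)
        self.private = nn.ModuleList(
            [nn.Conv1d(emb, ch, kernel_size=3, padding=1) for _ in n_classes])
        self.heads = nn.ModuleList([nn.Linear(2 * ch, n) for n in n_classes])

    def forward(self, ids, task):
        x = self.embed(ids).transpose(1, 2)                       # (batch, emb, seq)
        shared = torch.relu(self.shared(x)).max(dim=2).values     # shared features
        private = torch.relu(self.private[task](x)).max(dim=2).values
        return self.heads[task](torch.cat([shared, private], dim=-1))
```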
Hierarchical Transformers for Multi-Document Summarization
TLDR
Develops a neural summarization model that can effectively process multiple input documents and distill abstractive summaries, augmenting the Transformer architecture with the ability to encode documents in a hierarchical manner.
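The hierarchical encoding scheme can be sketched in a few lines of PyTorch: a token-level transformer attends within each paragraph, then pooled paragraph vectors are contextualized by a second transformer that attends across paragraphs (and thus across documents). Layer counts and mean pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        token_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        para_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.token_enc = nn.TransformerEncoder(token_layer, num_layers=4)
        self.para_enc = nn.TransformerEncoder(para_layer, num_layers=2)

    def forward(self, para_embs):
        # para_embs: (n_paragraphs, seq_len, d_model) token embeddings per paragraph
        tokens = self.token_enc(para_embs)             # local, within-paragraph attention
        para_vecs = tokens.mean(dim=1)                 # pool each paragraph to one vector
        return self.para_enc(para_vecs.unsqueeze(0))   # global, cross-paragraph attention
```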
A Novel Neural Topic Model and Its Supervised Extension
TLDR
Proposes a novel neural topic model (NTM) in which the representations of words and documents are efficiently and naturally combined into a uniform framework; the model is competitive in both topic discovery and classification/regression tasks.
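A generic neural topic model sketch in the spirit of the summary (not the paper's exact architecture): a document's bag-of-words is encoded into a topic mixture, a topic-word matrix turns that mixture back into word probabilities, and a reconstruction loss ties the two together. The label head stands in for the supervised extension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTopicModel(nn.Module):
    def __init__(self, vocab_size, n_topics=50, hidden=256):
        super().__init__()
        # Encoder: bag-of-words -> topic mixture (document side).
        self.encoder = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.ReLU(),
            nn.Linear(hidden, n_topics))
        # Each topic is a distribution over the vocabulary (word side).
        self.topic_word = nn.Parameter(0.01 * torch.randn(n_topics, vocab_size))
        # Supervised extension: predict a label/score from the topic mixture.
        self.label_head = nn.Linear(n_topics, 1)

    def forward(self, bow):
        theta = F.softmax(self.encoder(bow), dim=-1)   # document-topic mixture
        beta = F.softmax(self.topic_word, dim=-1)      # topic-word distributions
        return theta @ beta, self.label_head(theta)    # word probs, supervised output

    def reconstruction_loss(self, bow):
        word_probs, _ = self.forward(bow)
        return -(bow * (word_probs + 1e-10).log()).sum(-1).mean()
```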
Learning Structured Text Representations
TLDR
Proposes a model that encodes a document while automatically inducing rich structural dependencies; it embeds a differentiable non-projective parsing algorithm into a neural model and uses attention mechanisms to incorporate the resulting structural biases.
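Differentiable non-projective parsing is typically implemented with the matrix-tree theorem: marginal arc probabilities over all non-projective trees come out of the gradient of the log-partition function, and those marginals can serve as structural attention weights. A simplified sketch, assuming the single-root construction of Koo et al. (2007) rather than the paper's exact formulation:

```python
import torch

def arc_marginals(arc_logits, root_logits):
    """Marginal arc probabilities over non-projective dependency trees,
    via the matrix-tree theorem (single-root construction)."""
    arc_logits = arc_logits.detach().requires_grad_(True)
    n = arc_logits.size(0)
    A = arc_logits.exp() * (1.0 - torch.eye(n))    # arc weights, no self-loops
    L = torch.diag(A.sum(dim=0)) - A               # graph Laplacian
    L = torch.cat([root_logits.exp().unsqueeze(0), L[1:]], dim=0)  # root row
    log_Z = torch.logdet(L)                        # log-partition over all trees
    # d(log Z)/d(arc score) equals the marginal probability of that arc.
    return torch.autograd.grad(log_Z, arc_logits)[0]
```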
Recognizing Implicit Discourse Relations via Repeated Reading: Neural Networks with Multi-Level Attention
TLDR
This work proposes neural networks with multi-level attention (NNMA), which combine an attention mechanism with external memories to gradually focus attention on the specific words that help judge discourse relations.
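A loose sketch of the repeated-reading idea: a summary state re-attends over the encoded words several times, and each pass updates the state so attention can gradually settle on the decisive words. The GRU update and the level count are illustrative assumptions, not the paper's exact NNMA.

```python
import torch
import torch.nn as nn

class MultiLevelAttention(nn.Module):
    def __init__(self, d=128, levels=3):
        super().__init__()
        self.levels = levels
        self.attn = nn.ModuleList([nn.Linear(2 * d, 1) for _ in range(levels)])
        self.update = nn.GRUCell(d, d)

    def forward(self, word_states, state):
        # word_states: (batch, seq, d) encoded words; state: (batch, d) current reading
        for level in range(self.levels):
            expanded = state.unsqueeze(1).expand_as(word_states)
            scores = self.attn[level](torch.cat([word_states, expanded], dim=-1))
            weights = torch.softmax(scores, dim=1)        # where to look on this pass
            context = (weights * word_states).sum(dim=1)  # attended summary
            state = self.update(context, state)           # refine the reading state
        return state
```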
Single Document Summarization as Tree Induction
TLDR
Designs a new iterative refinement algorithm that induces a multi-root dependency tree while predicting the output summary, framing single-document extractive summarization as tree induction; the approach performs competitively against state-of-the-art methods.
Dependency Grammar Induction with a Neural Variational Transition-based Parser
TLDR
Presents a neural transition-based parser for dependency grammar induction whose inference procedure utilizes rich neural features with O(n) time complexity; it achieves performance comparable to graph-based models on both the English Penn Treebank and the Universal Dependency Treebank.
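The O(n) inference comes from the transition-based formulation: every sentence of n words is parsed in exactly 2n-1 shift/reduce actions. A plain arc-standard system illustrates this below; the action scorer, which the paper implements with rich neural features, is left as a stub.

```python
def parse(n_words, next_action):
    """Arc-standard parsing; next_action(stack, buffer) must return a legal
    'SHIFT', 'LEFT-ARC', or 'RIGHT-ARC' (e.g. from a neural scorer)."""
    stack, buffer, arcs = [], list(range(n_words)), []
    while buffer or len(stack) > 1:
        action = next_action(stack, buffer)
        if action == 'SHIFT':
            stack.append(buffer.pop(0))        # read the next word
        elif action == 'LEFT-ARC':
            dep = stack.pop(-2)                # head is the stack top
            arcs.append((stack[-1], dep))
        else:                                  # 'RIGHT-ARC'
            dep = stack.pop()                  # head is the new stack top
            arcs.append((stack[-1], dep))
    return arcs                                # 2n - 1 actions total: O(n)

# Trivial always-legal policy (right-branching tree), just to exercise the loop:
policy = lambda stack, buffer: 'SHIFT' if buffer else 'RIGHT-ARC'
print(parse(4, policy))   # [(2, 3), (1, 2), (0, 1)]
```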