Corpus ID: 237355013

Topic-Guided Abstractive Text Summarization: a Joint Learning Approach

@inproceedings{Zheng2020TopicGuidedAT,
  title={Topic-Guided Abstractive Text Summarization: a Joint Learning Approach},
  author={Chujie Zheng and Kunpeng Zhang and Harry J. Wang and Ling Fan and Zhe Wang},
  year={2020}
}
We introduce a new approach for abstractive text summarization, Topic-Guided Abstractive Summarization, which calibrates long-range dependencies from topic-level features with globally salient content. The idea is to incorporate neural topic modeling with a Transformer-based sequence-to-sequence (seq2seq) model in a joint learning framework. This design can learn and preserve the global semantics of the document, which can provide additional contextual guidance for capturing important ideas of…
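As a rough illustration of such a joint learning setup, the sketch below combines a seq2seq cross-entropy loss with a VAE-style neural-topic-model objective. The class name, the `topic_weight` hyperparameter, and the specific ELBO form are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointTopicSeq2SeqLoss(nn.Module):
    """Hypothetical joint objective: summarization cross-entropy plus a
    VAE-style topic-model loss (negative ELBO) on the document bag-of-words."""

    def __init__(self, topic_weight: float = 0.5, pad_id: int = 0):
        super().__init__()
        self.topic_weight = topic_weight  # assumed balancing hyperparameter
        self.pad_id = pad_id

    def forward(self, logits, targets, bow, bow_recon_logits, mu, logvar):
        # Token-level cross-entropy of the Transformer seq2seq summarizer.
        ce = F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            targets.view(-1),
            ignore_index=self.pad_id,
        )
        # Topic-model reconstruction of the document's bag-of-words counts.
        recon = -(bow * F.log_softmax(bow_recon_logits, dim=-1)).sum(-1).mean()
        # KL divergence of the latent topic posterior from a Gaussian prior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        # Joint loss trains summarizer and topic model end to end.
        return ce + self.topic_weight * (recon + kl)
```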
1 Citation
Does Structure Matter? Leveraging Data-to-Text Generation for Answering Complex Information Needs
TLDR
This work proposes a content selection and planning pipeline that structures the answer by generating intermediate plans, and shows the effectiveness of planning-based models in comparison to a text-to-text model.

References

Showing 1-10 of 45 references
Topic Augmented Generator for Abstractive Summarization
TLDR
This paper proposes a new decoder where the output summary is generated by conditioning on both the input text and the latent topics of the document, and achieves strongly improved ROUGE scores when compared to state-of-the-art models.
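One plausible way to realize such topic conditioning, sketched below, is to gate the decoder's vocabulary distribution with a topic-word distribution; the gate and the mixing scheme are illustrative assumptions, not necessarily the paper's mechanism.

```python
import torch

def topic_conditioned_dist(dec_dist, topic_word_dist, gate):
    """Blend decoder and topic-word distributions over the vocabulary.

    dec_dist:        (batch, vocab) decoder softmax over words
    topic_word_dist: (batch, vocab) word probabilities under the document's
                     inferred latent topics
    gate:            (batch, 1) learned mixing weight in [0, 1]
    """
    # A convex combination keeps the result a valid probability distribution.
    return gate * dec_dist + (1.0 - gate) * topic_word_dist
```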
A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization
TLDR
A deep learning approach that tackles automatic summarization by incorporating topic information into the convolutional sequence-to-sequence (ConvS2S) model and using self-critical sequence training (SCST) for optimization, demonstrating the superiority of the proposed method in abstractive summarization.
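For reference, self-critical sequence training optimizes a sequence-level reward (e.g. ROUGE) using the model's own greedy decode as the baseline. A minimal sketch of the SCST loss, with tensor shapes assumed for illustration:

```python
import torch

def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """Self-critical sequence training (REINFORCE with a greedy baseline).

    sample_logprobs: (batch, seq_len) log-probs of the sampled summary tokens
    sample_reward:   (batch,) reward of the sampled summary, e.g. ROUGE
    greedy_reward:   (batch,) reward of the greedy-decoded baseline summary
    """
    # Advantage: how much sampling beat (or lost to) the greedy baseline.
    advantage = (sample_reward - greedy_reward).detach().unsqueeze(-1)
    # Gradients flow only through the sampled token log-probabilities.
    return -(advantage * sample_logprobs).sum(dim=-1).mean()
```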
Friendly Topic Assistant for Transformer Based Abstractive Summarization
TLDR
A topic assistant (TA) comprising three modules is proposed; it is compatible with various Transformer-based models and user-friendly, since TA is a plug-and-play model that does not alter the structure of the original Transformer network, letting users easily fine-tune Transformer+TA from a well pre-trained model.
Document Summarization with VHTM: Variational Hierarchical Topic-Aware Mechanism
TLDR
This work proposes a variational hierarchical model to holistically address topic embedding and attention in automatic text summarization; it is the first attempt to jointly accomplish summarization and topic inference via a variational encoder-decoder and to merge topics into multi-grained levels.
Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization
TLDR
A novel abstractive model is proposed which is conditioned on the article’s topics and based entirely on convolutional neural networks, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.
Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward
TLDR
ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD, is presented; it uses dual encoders, a sequential document encoder and a graph-structured encoder, which complement each other in maintaining the global context and the local characteristics of entities.
Text Summarization with Pretrained Encoders
TLDR
This paper introduces a novel document-level encoder based on BERT that can express the semantics of a document and obtain representations for its sentences, and proposes a new fine-tuning schedule that adopts different optimizers for the encoder and the decoder to alleviate the mismatch between the two.
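The encoder/decoder optimizer split can be sketched as two optimizers with different learning rates: a small one for the pretrained BERT encoder and a larger one for the randomly initialized decoder. The attribute names and learning rates below are assumptions for illustration, not the paper's released code.

```python
import torch

def build_optimizers(model):
    """Separate optimizers for a pretrained encoder and a fresh decoder,
    in the spirit of the paper's fine-tuning schedule (values illustrative)."""
    # Small LR so fine-tuning does not overwrite the pretrained weights.
    enc_opt = torch.optim.Adam(model.encoder.parameters(), lr=2e-5)
    # Larger LR so the randomly initialized decoder can learn quickly.
    dec_opt = torch.optim.Adam(model.decoder.parameters(), lr=1e-3)
    return enc_opt, dec_opt
```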
Self-Attention Guided Copy Mechanism for Abstractive Summarization
TLDR
A Transformer-based model is proposed to enhance the copy mechanism by identifying the importance of each source word via its degree centrality in a directed graph built from the self-attention layer of the Transformer.
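Reading the self-attention weights as a weighted directed graph, a word's importance can be taken as the total attention it receives (its in-degree). A small sketch under that reading, with the normalization chosen for illustration:

```python
import torch

def attention_degree_centrality(attn: torch.Tensor) -> torch.Tensor:
    """Score source words by in-degree centrality on the directed graph
    induced by a self-attention layer.

    attn: (seq_len, seq_len) attention matrix where attn[i, j] is how
          much token i attends to token j (rows sum to 1).
    """
    # In-degree of token j: total attention mass it receives.
    centrality = attn.sum(dim=0)
    # Normalize so the scores can guide the copy distribution.
    return centrality / centrality.sum()
```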
Get To The Point: Summarization with Pointer-Generator Networks
TLDR
A novel architecture is presented that augments the standard sequence-to-sequence attentional model in two orthogonal ways: a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information while retaining the ability to produce novel words through the generator, and a coverage mechanism that tracks what has been summarized to discourage repetition.
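The pointer-generator's final word distribution is the mixture P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention over source positions where w occurs. A sketch of that mixture, ignoring the extended vocabulary used for out-of-vocabulary source words:

```python
import torch

def pointer_generator_dist(p_gen, vocab_dist, attn, src_ids):
    """Final distribution of a pointer-generator network (See et al., 2017).

    p_gen:      (batch, 1) probability of generating from the vocabulary
    vocab_dist: (batch, vocab_size) decoder softmax over the vocabulary
    attn:       (batch, src_len) attention weights over source tokens
    src_ids:    (batch, src_len) vocabulary ids of the source tokens
    """
    # Generation path: scale the vocabulary distribution by p_gen.
    final_dist = p_gen * vocab_dist
    # Copy path: scatter the remaining attention mass onto source-word ids;
    # repeated source words accumulate their attention, as intended.
    return final_dist.scatter_add(1, src_ids, (1.0 - p_gen) * attn)
```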
Extractive Summarization as Text Matching
TLDR
This paper formulates extractive summarization as a semantic text matching problem, in which a source document and candidate summaries are matched in a shared semantic space.
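A minimal sketch of the matching idea, scoring candidate summaries by cosine similarity to the document in a shared embedding space (the paper's model also trains the encoder with a margin-based ranking objective, omitted here):

```python
import torch
import torch.nn.functional as F

def rank_candidates(doc_emb: torch.Tensor, cand_embs: torch.Tensor):
    """Rank candidate summaries by semantic similarity to the document.

    doc_emb:   (dim,) embedding of the source document
    cand_embs: (num_candidates, dim) embeddings of candidate summaries,
               e.g. from a shared (Siamese) encoder
    """
    # Cosine similarity in the shared semantic space.
    scores = F.cosine_similarity(cand_embs, doc_emb.unsqueeze(0), dim=-1)
    # Indices of candidates from best match to worst.
    return torch.argsort(scores, descending=True)
```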