Corpus ID: 235265924

Text Summarization with Latent Queries

@article{Xu2021TextSW,
  title={Text Summarization with Latent Queries},
  author={Yumo Xu and Mirella Lapata},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.00104}
}
The availability of large-scale datasets has driven the development of neural models that create generic summaries from single documents. When using a summarization system, however, users often have specific intents with various language realizations which, depending on the information need, can range from a single keyword to a long narrative composed of multiple questions. Existing summarization systems often either fail to support or act robustly on this query-focused…


Domain Adaptation with Pre-trained Transformers for Query-Focused Abstractive Text Summarization

This article applies a variety of techniques using pre-trained transformer-based summarization models including transfer learning, weakly supervised learning, and distant supervision to generate abstractive summaries for the Query-Focused Text Summarization task.

Efficient Few-Shot Fine-Tuning for Opinion Summarization

This work utilizes an efficient few-shot method based on adapters which can easily store in-domain knowledge and improves summary quality over standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp datasets, respectively.
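
For illustration, a minimal sketch of the kind of bottleneck adapter such few-shot methods insert into a frozen pre-trained summarizer; the hidden and bottleneck sizes, and the choice to train only these layers, are assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck layer inserted after a frozen transformer sub-layer.

    Only the adapter parameters are trained, so in-domain knowledge can be
    stored cheaply; the sizes below are illustrative assumptions.
    """
    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen model's representation.
        return x + self.up(self.act(self.down(x)))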

Controlled Text Reduction

This paper formalizes Controlled Text Reduction as a standalone task, whose input is a source text with marked spans of targeted content ("highlighting"), and advocates the potential of such models, both for modular fully-automatic summarization, as well as for semi-automated human-in-the-loop use cases.

Exploring Neural Models for Query-Focused Summarization

It is seen that increasing the input segment length used in training and inference for RelReg improves performance at 256 tokens but degrades it at 512 tokens, suggesting a balance between including additional context for ranking and allowing a greater number of shorter segments that may cover more diverse content.

Query-Focused Extractive Summarisation for Finding Ideal Answers to Biomedical and COVID-19 Questions

Macquarie University's participation in the BioASQ Synergy Task and BioASQ 9b Phase B shows that using BERT in a classification setup is a very strong baseline for the identification of ideal answers.
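
A rough sketch of such a classification setup: score each candidate sentence against the question with a sequence-pair classifier. The checkpoint name below is a placeholder and the model would need to be fine-tuned first; this is not the team's actual code.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; a fine-tuned model would be loaded in practice.
NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME, num_labels=2)

def score_sentences(question, sentences):
    """Return P(sentence belongs in the ideal answer) for each candidate."""
    inputs = tokenizer([question] * len(sentences), sentences,
                       truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[:, 1].tolist()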

HTKG: Deep Keyphrase Generation with Neural Hierarchical Topic Guidance

This paper proposes a novel hierarchical topic-guided variational neural sequence generation method for keyphrase generation, which consists of two major modules: a neural hierarchical topic model that learns the latent topic tree across the whole corpus of documents, and a Variational neural key phrase generation model to generate keyphrases under hierarchical topic guidance.

References

Showing 1-10 of 48 references

Coarse-to-Fine Query Focused Multi-Document Summarization

This work proposes a coarse-to-fine modeling framework which employs progressively more accurate modules for estimating whether text segments are relevant, likely to contain an answer, and central, and presents an instantiation of this framework with a trained evidence estimator.

Transforming Wikipedia Into Augmented Data for Query-Focused Summarization

This paper uses Wikipedia to automatically collect a large query-focused summarization dataset (named WikiRef) of more than 280,000 examples, which can serve as a means of data augmentation, and develops a BERT-based query-focused summarization model (Q-BERT) to extract sentences from the documents as summaries.

Towards Generating Query to Perform Query Focused Abstractive Summarization using Pre-trained Model

A query generation approach that considers the most similar words between documents and summaries for generating queries, and which outperforms the state-of-the-art models on three ROUGE scores.
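
The core idea lends itself to a small word-overlap sketch: treat document words that also appear in the reference summary as the pseudo-query. The tokenizer, stopword list, and cutoff below are assumptions for illustration.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "for", "that", "on"}

def generate_query(document: str, summary: str, k: int = 5) -> str:
    """Build a pseudo-query from words shared by a document and its summary."""
    tokenize = lambda text: [w for w in re.findall(r"[a-z]+", text.lower())
                             if w not in STOPWORDS]
    doc_counts = Counter(tokenize(document))
    shared = {w for w in tokenize(summary) if w in doc_counts}
    # Rank shared words by how often they occur in the document, keep top k.
    ranked = sorted(shared, key=lambda w: doc_counts[w], reverse=True)
    return " ".join(ranked[:k])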

Diversity driven attention model for query-based abstractive summarization

This work proposes a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions: a query attention model which learns to focus on different portions of the query at different time steps, and a new diversity-based attention model which aims to alleviate the problem of repeating phrases in the summary.

WSL-DS: Weakly Supervised Learning with Distant Supervision for Query Focused Multi-Document Abstractive Summarization

This paper uses datasets similar to the target dataset as the training data, where it leverages pre-trained sentence similarity models to generate the weak reference summary of each individual document in a document set from the multi-document gold reference summaries.

Query Focused Abstractive Summarization: Incorporating Query Relevance, Multi-Document Coverage, and Summary Length Constraints into seq2seq Models

The method (Relevance Sensitive Attention for QFS) is compared to extractive baselines and to various ways of combining abstractive models on the DUC QFS datasets, showing solid improvements in ROUGE performance.

Topic Concentration in Query Focused Summarization Datasets

This work defines a task-based method to quantify topic concentration in datasets, i.e., the ratio of sentences within the dataset that are relevant to the query, and observes that the DUC 2005, 2006 and 2007 datasets suffer from very high topic concentration.
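
The measure itself reduces to a simple ratio; a sketch under the assumption that the dataset is a collection of (query, sentences) clusters and that some relevance judgment is available:

def topic_concentration(clusters, is_relevant):
    """Fraction of source sentences judged relevant to their cluster's query.

    `clusters` is assumed to be an iterable of (query, sentences) pairs and
    `is_relevant(query, sentence)` a boolean relevance check; both are
    placeholders for the task-based method described in the paper.
    """
    relevant = total = 0
    for query, sentences in clusters:
        for sentence in sentences:
            total += 1
            relevant += bool(is_relevant(query, sentence))
    return relevant / total if total else 0.0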

Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization

An initial investigation into a novel adaptation method that exploits the maximal marginal relevance method to select representative sentences from multi-document input, and leverages an abstractive encoder-decoder model to fuse disparate sentences into an abstractive summary.
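
Maximal marginal relevance selection itself is standard; a minimal greedy sketch, where the similarity inputs and the trade-off weight are assumptions:

def mmr_select(query_sim, pair_sim, k, lam=0.7):
    """Greedily pick k sentence indices by maximal marginal relevance.

    query_sim[i]   similarity of sentence i to the query (or cluster centroid)
    pair_sim[i][j] similarity between sentences i and j
    lam            relevance vs. redundancy trade-off (0.7 is an assumption)
    """
    selected, remaining = [], list(range(len(query_sim)))
    while remaining and len(selected) < k:
        def mmr(i):
            redundancy = max((pair_sim[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected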

Improving Query Focused Summarization Using Look-Ahead Strategy

This paper proposes a look-ahead version of topic-sensitive LexRank that assumes the random surfer not only knows the query relevance of the sentence to which it jumps but can also look N steps ahead from that sentence to find the query relevance scores of upcoming sentences.
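
A loose sketch of the look-ahead idea (not the paper's exact formulation): each sentence's query relevance is boosted by the discounted relevance of sentences reachable within N hops of the similarity graph, before the scores feed into the LexRank-style random walk. The decay factor and hop count are assumptions.

import numpy as np

def lookahead_relevance(rel, adj, n_steps=2, decay=0.5):
    """Augment per-sentence query relevance with relevance seen N steps ahead.

    rel : query-relevance score for each sentence
    adj : row-normalized sentence-similarity (transition) matrix
    """
    rel = np.asarray(rel, dtype=float)
    hop = np.asarray(adj, dtype=float)
    scores = rel.copy()
    for step in range(1, n_steps + 1):
        # Relevance mass reachable in exactly `step` hops, discounted.
        scores += (decay ** step) * (hop @ rel)
        hop = hop @ np.asarray(adj, dtype=float)
    return scores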

Hierarchical Transformers for Multi-Document Summarization

A neural summarization model which can effectively process multiple input documents is developed, based on a Transformer architecture extended with the ability to encode documents in a hierarchical manner.