Corpus ID: 234094591

Hone as You Read: A Practical Type of Interactive Summarization

@article{Bohn2021HoneAY,
  title={Hone as You Read: A Practical Type of Interactive Summarization},
  author={Tanner A. Bohn and Charles X. Ling},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.02923}
}
We present HARE, a new task where reader feedback is used to optimize document summaries for personal interest during the normal flow of reading. This task is related to interactive summarization, where personalized summaries are produced following a long feedback stage where users may read the same sentences many times. However, this process severely interrupts the flow of reading, making it impractical for leisurely reading. We propose to gather minimally-invasive feedback during the reading… 
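As a loose illustration of the setup described in the abstract (a hypothetical sketch, not the paper's actual algorithm), the snippet below shows one way minimally-invasive feedback could steer sentence selection during reading: each shown sentence receives a keep/skip signal, and upcoming sentences are displayed only if they score well against the liked content seen so far. The `embed` and `get_feedback` callables are assumptions standing in for a sentence encoder and a reader-feedback source.

```python
# Hypothetical sketch of feedback-driven in-reading summarization (not the HARE
# algorithm itself): each shown sentence gets a binary keep/skip signal, and a
# similarity-based score decides whether later sentences are displayed.
from typing import Callable, List
import numpy as np

def _cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def hone_as_you_read(
    sentences: List[str],
    embed: Callable[[str], np.ndarray],   # assumed sentence encoder (hypothetical)
    get_feedback: Callable[[str], bool],  # True = reader found the sentence interesting
    threshold: float = 0.0,
) -> List[str]:
    """Show sentences whose score (similarity to liked content minus similarity
    to skipped content) clears a threshold, updating after each feedback signal."""
    shown, pos, neg = [], [], []
    for sent in sentences:
        vec = embed(sent)
        score = 0.0
        if pos:
            score += float(np.mean([_cos(vec, p) for p in pos]))
        if neg:
            score -= float(np.mean([_cos(vec, n) for n in neg]))
        if not (pos or neg) or score >= threshold:
            shown.append(sent)
            (pos if get_feedback(sent) else neg).append(vec)
    return shown
```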
1 Citation


Improving Reader Motivation with Machine Learning
This thesis focuses on the problem of increasing reading motivation with machine learning (ML). The act of reading is central to modern human life, and there is much to be gained by improving the…

References

SHOWING 1-10 OF 43 REFERENCES
Summarize What You Are Interested In: An Optimization Framework for Interactive Personalized Summarization
TLDR
This work investigates an important and challenging problem in summary generation, Interactive Personalized Summarization (IPS), which generates summaries in an interactive and personalized manner, and develops experimental systems to compare five rival algorithms on four different datasets.
Sherlock: A System for Interactive Summarization of Large Text Collections
TLDR
A new approximate summarization model is integrated into Sherlock that can guarantee interactive speeds even for large text collections to keep the user engaged in the process.
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization
TLDR
This work proposes SUPERT, which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques.
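The description above can be made concrete with a rough sketch (in the spirit of SUPERT, not its released implementation): build a pseudo reference from the leading source sentences, embed the tokens of both texts, and softly align each summary token to its closest pseudo-reference token. The `embed` callable is an assumed stand-in for a contextualized encoder.

```python
# Simplified illustration of embedding-based, pseudo-reference scoring: align each
# summary token to its most similar pseudo-reference token and average the
# similarities. Not the actual SUPERT code.
from typing import Callable, List
import numpy as np

def pseudo_reference_score(
    summary_tokens: List[str],
    source_sentences: List[str],
    embed: Callable[[List[str]], np.ndarray],  # assumed contextual token encoder
    num_lead_sentences: int = 10,
) -> float:
    """Pseudo reference = leading source sentences; score = mean best-match
    cosine similarity of each summary token against the pseudo reference."""
    pseudo_ref_tokens = " ".join(source_sentences[:num_lead_sentences]).split()
    S = embed(summary_tokens)        # (m, d) token embeddings
    R = embed(pseudo_ref_tokens)     # (n, d)
    S = S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-9)
    R = R / (np.linalg.norm(R, axis=1, keepdims=True) + 1e-9)
    sim = S @ R.T                    # pairwise cosine similarities, shape (m, n)
    return float(sim.max(axis=1).mean())  # soft alignment: best match per summary token
```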
Query-Based Abstractive Summarization Using Neural Networks
TLDR
It is shown that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.
Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization
TLDR
A novel abstractive model is proposed which is conditioned on the article’s topics and based entirely on convolutional neural networks, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.
Overview of DUC 2005
The focus of DUC 2005 was on developing new evaluation methods that take into account variation in content in human-authored summaries. Therefore, DUC 2005 had a single user-oriented…
The Feasibility of Embedding Based Automatic Evaluation for Single Document Summarization
TLDR
The experimental results show that the max value over each dimension of the summary ELMo word embeddings is a good representation that results in high correlation with human ratings, and that averaging the cosine similarities from all encoders the authors tested yields high correlation with manual scores in the reference-free setting.
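A minimal sketch of that max-over-dimensions representation, with random placeholder vectors standing in for real ELMo word embeddings:

```python
# Illustrative only: build a single summary vector by taking the max over each
# embedding dimension of the summary's word vectors, then compare two texts by
# cosine similarity. Real ELMo embeddings would replace the random placeholders.
import numpy as np

def max_pool_representation(word_embeddings: np.ndarray) -> np.ndarray:
    """word_embeddings: (num_words, dim) -> (dim,) max over each dimension."""
    return word_embeddings.max(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Placeholder example: random vectors stand in for 1024-dim ELMo word embeddings.
rng = np.random.default_rng(0)
summary_vecs = rng.normal(size=(12, 1024))     # 12 summary words
reference_vecs = rng.normal(size=(30, 1024))   # 30 reference words
score = cosine(max_pool_representation(summary_vecs),
               max_pool_representation(reference_vecs))
```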
Automatically Evaluating Content Selection in Summarization without Human Models
TLDR
This work capitalizes on the assumption that the distribution of words in the input and an informative summary of that input should be similar to each other, and ranks participating systems similarly to manual model-based pyramid evaluation and to manual human judgments of responsiveness.
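One common way to operationalize that assumption (sketched here under that reading, not copied from the cited paper) is to compare the unigram distributions of the input and the summary with Jensen-Shannon divergence, where lower divergence suggests better content selection:

```python
# Hedged sketch: score a summary by how close its unigram distribution is to the
# input's, using Jensen-Shannon divergence (lower divergence = better content match).
from collections import Counter
import math

def unigram_dist(text: str) -> dict:
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p: dict, q: dict) -> float:
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0) + q.get(w, 0)) for w in vocab}
    def kl(a, b):
        return sum(a.get(w, 0) * math.log2(a.get(w, 0) / b[w])
                   for w in vocab if a.get(w, 0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy usage with placeholder texts.
doc = "the cat sat on the mat while the dog slept on the rug"
summ = "the cat sat on the mat"
divergence = js_divergence(unigram_dist(doc), unigram_dist(summ))
```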
Query Focused Multi-Document Summarization with Distant Supervision
TLDR
This work proposes a coarse-to-fine modeling framework which introduces separate modules for estimating whether segments are relevant to the query, likely to contain an answer, and central, and demonstrates that this framework outperforms strong comparison systems on standard QFS benchmarks.
Overview of DUC 2006
TLDR
The DUC 2006 summarization task was to synthesize from a set of 25 documents a well-organized answer to a complex question, and the overall responsiveness metric showed that readability plays an important role in the perceived quality of the summaries.