NSTM: Real-Time Query-Driven News Overview Composition at Bloomberg

@article{Bambrick2020NSTMRQ,
  title={NSTM: Real-Time Query-Driven News Overview Composition at Bloomberg},
  author={Joshua Bambrick and Minjie Xu and Andy Almonte and Igor Malioutov and Guim Perarnau and Vittorio Selo and Iat Chong Chan},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.01117}
}
Millions of news articles from hundreds of thousands of sources around the globe appear in news aggregators every day. Consuming such a volume of news presents an almost insurmountable challenge. For example, a reader searching on Bloomberg’s system for news about the U.K. would find 10,000 articles on a typical day. Apple Inc., the world’s most journalistically covered company, garners around 1,800 news articles a day. We realized that a new kind of summarization engine was needed, one that… 
News Headline Grouping as a Challenging NLU Task
TLDR
A novel unsupervised Headline Generator Swap model is proposed for the task of HeadLine Grouping that achieves within 3 F-1 of the best supervised model, and high-performing models are analyzed with consistency tests, finding that models are not consistent in their predictions, revealing modeling limits of current architectures.
Cross-Register Projection for Headline Part of Speech Tagging
TLDR
This work automatically annotates news headlines with POS tags by projecting predicted tags from corresponding sentences in news bodies by training a multi-domain POS tagger on both long-form and headline text and shows that joint training on both registers improves over training on just one or naively concatenating training sets.
Contextualizing Trending Entities in News Stories
TLDR
It is found that the salience of a contextual entity and how coherent it is with respect to the news story are strong indicators of relevance in both unsupervised and supervised settings.
Motivating High Performance Serverless Workloads
TLDR
This work describes how each application can be cast in a serverless software architecture and how the application performance requirements translate into high performance requirements (high invocation rates, low and predictable latency) for the underlying serverless system implementation.

References

Scalable clustering of news search results
TLDR
A system that clusters the search results of a news search system in a fast and scalable manner, including offline clustering, incremental clustering, and real-time clustering, is presented.
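The incremental regime described above can be illustrated with a minimal sketch: each arriving document joins the nearest existing cluster if it is similar enough, otherwise it starts a new one. The bag-of-words representation, cosine similarity, and fixed threshold here are illustrative assumptions, not the paper's exact method.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

class IncrementalClusterer:
    """Assign each arriving document to the nearest centroid, or open a new cluster.

    `threshold` is a hypothetical tuning parameter, not taken from the paper.
    """
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.centroids = []  # one Counter of term counts per cluster

    def add(self, text):
        vec = Counter(text.lower().split())
        best, best_sim = None, 0.0
        for i, centroid in enumerate(self.centroids):
            sim = cosine(vec, centroid)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= self.threshold:
            self.centroids[best].update(vec)  # fold the document into the cluster
            return best
        self.centroids.append(vec)           # no close cluster: start a new one
        return len(self.centroids) - 1

clusterer = IncrementalClusterer()
clusterer.add("apple unveils new iphone")      # cluster 0
clusterer.add("apple iphone launch event")     # joins cluster 0
clusterer.add("uk parliament debates brexit")  # cluster 1
```

A single pass like this keeps per-document cost proportional to the number of live clusters, which is what makes the incremental regime viable at news-stream rates.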
Open Information Extraction from the Web
TLDR
Open IE (OIE), a new extraction paradigm where the system makes a single data-driven pass over its corpus and extracts a large set of relational tuples without requiring any human input, is introduced.
ROUGE: A Package for Automatic Evaluation of Summaries
TLDR
Four different ROUGE measures are introduced: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S, included in the ROUGE summarization evaluation package, along with their evaluations.
A Simple but Tough-to-Beat Baseline for Sentence Embeddings
A Framework for Clustering Massive Text and Categorical Data Streams
TLDR
This work presents an online approach for clustering massive text and categorical data streams with the use of a statistical summarization methodology and presents results illustrating the effectiveness of the technique.
Sentence Compression by Deletion with LSTMs
TLDR
It is demonstrated that even the most basic version of the LSTM system, given no syntactic information or desired compression length, performs surprisingly well: around 30% of the compressions from a large test set could be regenerated.
Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics
TLDR
The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprisingly well with human evaluations, based on various statistical metrics, while direct application of the BLEU evaluation procedure does not always give good results.
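The unigram co-occurrence statistic described above can be sketched as recall-oriented n-gram overlap, which is the idea behind ROUGE-N. This is a minimal illustration with whitespace tokenization; the official package adds stemming, stopword handling, and other options.

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Recall-oriented n-gram overlap: how many of the reference's
    n-grams (with multiplicity) also appear in the candidate."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    total = sum(ref.values())
    return overlap / total if total else 0.0

score = rouge_n("the cat sat on the mat", "the cat is on the mat")
# 5 of the reference's 6 unigrams are matched, so score = 5/6
```

With n=1 this is exactly the unigram co-occurrence measure whose correlation with human judgments the paper reports.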
Latent Dirichlet Allocation
Fast and Robust Compressive Summarization with Dual Decomposition and Multi-Task Learning
TLDR
This work presents a dual decomposition framework for multi-document summarization, using a model that jointly extracts and compresses sentences, and proposes a multi-task learning framework to take advantage of existing data for extractive summarization and sentence compression.
Sequence to Sequence Learning with Neural Networks
TLDR
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence, which made the optimization problem easier.