Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models
An in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters, finds that models with task-specific decoder parameters outperform models whose decoder parameters are fully shared across all tasks.
A Full-Text Learning to Rank Dataset for Medical Information Retrieval
A dataset for learning to rank in the medical domain, consisting of thousands of full-text queries linked to thousands of research articles, demonstrates that ranking models trained on this dataset outperform standard bag-of-words retrieval models by a large margin.
Examining the State-of-the-Art in News Timeline Summarization
This paper compares different timeline summarization (TLS) strategies using appropriate evaluation frameworks, and proposes a simple and effective combination of methods that improves over the state-of-the-art on all tested benchmarks.
A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal
This work presents a new dataset for MDS that is large both in the total number of document clusters and in the size of individual clusters, and provides a quantitative analysis of the dataset and empirical results for several state-of-the-art MDS techniques.
DynE: Dynamic Ensemble Decoding for Multi-Document Summarization
This work proposes a simple decoding methodology which ensembles the output of multiple instances of the same model on different inputs, and obtains state-of-the-art results on several multi-document summarization datasets.
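The idea of ensembling multiple instances of the same model over different inputs can be sketched as greedy decoding where, at each step, the next-token log-probabilities produced by each per-document decoder instance are averaged before picking a token. This is a minimal toy illustration, not the paper's implementation; the `step_logits_fns` interface (one callable per input document, mapping the shared prefix to next-token logits) is a hypothetical stand-in for real model calls.

```python
import numpy as np

def dyne_decode(step_logits_fns, max_len=20, bos=0, eos=1):
    """Greedy dynamic-ensemble decoding sketch: one decoder instance per
    input document shares a single output prefix; at each step their
    next-token log-probabilities are averaged, then argmax-ed."""
    tokens = [bos]
    for _ in range(max_len):
        # Each callable maps the shared prefix to logits for its own input.
        logit_stack = np.stack([fn(tokens) for fn in step_logits_fns])
        # Convert logits to log-probabilities (log-softmax per instance).
        logprobs = logit_stack - np.log(np.exp(logit_stack).sum(-1, keepdims=True))
        avg = logprobs.mean(axis=0)  # ensemble across input documents
        nxt = int(avg.argmax())
        tokens.append(nxt)
        if nxt == eos:
            break
    return tokens
```

Because the ensemble operates purely on output distributions, any single-document summarizer can be reused for multi-document input without retraining.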
Revisiting the Centroid-based Method: A Strong Baseline for Multi-Document Summarization
This paper applies the centroid-based model for extractive document summarization to candidate summaries instead of individual sentences and uses a simple greedy algorithm to find the best summary, achieving higher performance than the original model, on par with more complex state-of-the-art methods.
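The summary-level centroid idea described above can be sketched as follows: score candidate summaries (not single sentences) by their similarity to the centroid of the input, and greedily add whichever sentence most improves that score. This is a simplified illustration using raw bag-of-words counts and cosine similarity, not the paper's exact setup (which uses TF-IDF-style centroid vectors).

```python
import math
from collections import Counter

def bow(text):
    """Lowercase bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def greedy_centroid_summary(sentences, budget=2):
    """Greedily grow a summary whose bag-of-words vector is closest to
    the centroid of all input sentences. The key difference from the
    original centroid method: the *whole summary* is scored, not each
    sentence in isolation, so redundant sentences add little."""
    centroid = bow(" ".join(sentences))
    summary = []
    while len(summary) < budget:
        best, best_score = None, -1.0
        for s in sentences:
            if s in summary:
                continue
            cand = bow(" ".join(summary + [s]))
            score = cosine(cand, centroid)
            if score > best_score:
                best, best_score = s, score
        summary.append(best)
    return summary
```

Scoring whole candidate summaries penalizes redundancy automatically: a sentence that repeats words already in the summary barely moves the summary vector toward the centroid.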