Corpus ID: 218581058

TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection

@article{Voorhees2020TRECCOVIDCA,
  title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection},
  author={Ellen M. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and William R. Hersh and Kyle Lo and Kirk Roberts and Ian Soboroff and Lucy Lu Wang},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.04474}
}
TREC-COVID is a community evaluation designed to build a test collection that captures the information needs of biomedical researchers using the scientific literature during a pandemic. One of the key characteristics of pandemic search is the accelerated rate of change: the topics of interest evolve as the pandemic progresses and the scientific literature in the area explodes. The COVID-19 pandemic provides an opportunity to capture this progression as it happens. TREC-COVID, in creating a test… 
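A TREC-style test collection such as the one described here consists of topics, a document set, and relevance judgments (qrels) against which participant runs are scored. As a minimal sketch of how such a collection is used (official TREC scoring is done with the trec_eval tool; the file paths and helper names below are hypothetical, and the file formats follow standard TREC conventions), a run could be evaluated like this:

```python
# Sketch: scoring a retrieval run against TREC-style qrels.
# Formats follow TREC conventions; paths/names are illustrative only.

from collections import defaultdict

def load_qrels(path):
    """Parse a TREC qrels file: 'topic iteration docid relevance' per line."""
    qrels = defaultdict(dict)
    with open(path) as f:
        for line in f:
            topic, _, docid, rel = line.split()
            qrels[topic][docid] = int(rel)
    return qrels

def load_run(path):
    """Parse a TREC run file: 'topic Q0 docid rank score tag' per line."""
    run = defaultdict(list)
    with open(path) as f:
        for line in f:
            topic, _, docid, rank, score, _ = line.split()
            run[topic].append((int(rank), docid))
    for topic in run:
        run[topic].sort()  # order each topic's documents by rank
    return run

def precision_at_k(qrels, run, k=5):
    """Mean fraction of relevant documents in each topic's top-k results."""
    scores = []
    for topic, ranked in run.items():
        judged = qrels.get(topic, {})
        top = [docid for _, docid in ranked[:k]]
        scores.append(sum(judged.get(d, 0) > 0 for d in top) / k)
    return sum(scores) / len(scores) if scores else 0.0
```

Because TREC-COVID judged documents in rounds, any real use would also need to decide how to treat unjudged documents; the sketch above simply counts them as non-relevant, which is the conventional default.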


TREC-COVID: Building a Pandemic Retrieval Test Collection
Assessing how good a search engine is has been an active area of development for more than three decades. During the COVID-19 pandemic, however, the rate of change in what people are interested in, and …
Pandemic Literature Search: Finding Information on COVID-19
TLDR
This work investigates how to better rank information for pandemic information retrieval and proposes a novel end-to-end method for neural retrieval that could lead to a search system that aids scientists, clinicians, policymakers and others in finding reliable answers from the scientific literature.
Searching for scientific evidence in a pandemic: An overview of TREC-COVID
AWS CORD19-Search: A Scientific Literature Search Engine for COVID-19
TLDR
AWS CORD-19 Search (ACS) is presented, a public COVID-19-specific search engine powered by machine learning that provides a scalable solution for COVID-19 researchers and policy makers in their search and discovery of answers to high-priority scientific questions.
Impact of detecting clinical trial elements in exploration of COVID-19 literature
TLDR
This study finds that relational concept selection filters the retrieved collection in a way that decreases the proportion of unjudged documents and increases precision, meaning the user is likely to be exposed to a larger number of relevant documents.
Rapidly Deploying a Neural Search Engine for the COVID-19 Open Research Dataset
The Neural Covidex is a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset (CORD-19) curated by the Allen Institute for AI.
On the Quality of the TREC-COVID IR Test Collections
TLDR
The quality of the resulting TREC-COVID test collections is examined, and a critique of the state of the art in building reusable IR test collections is offered.
Repurposing TREC-COVID Annotations to Answer the Key Questions of CORD-19
TLDR
This work repurposes the relevance annotations from the TREC-COVID tasks to identify journal articles in CORD-19 that are relevant to the key questions posed by CORD-19, and presents the methodology used to construct the new dataset.
Question Answering Systems for Covid-19
TLDR
The survey describes the QA systems available for COVID-19: CovidQA, the CAiRE (Center for Artificial Intelligence Research) COVID system, the CO-Search semantic search engine, COVIDASK, and RECORD (Research Engine for COVID Open Research Dataset).
A Comparative Analysis of System Features Used in the TREC-COVID Information Retrieval Challenge
TLDR
It is observed that fine-tuning datasets with relevance judgments, MS-MARCO, and CORD-19 document vectors was associated with improved performance in Round 2 but not in Round 5, and term expansion and the use of the narrative field in the TREC-COVID topics were associated with decreased system performance in both rounds.

References

Overview of the TREC 2012 Medical Records Track
TLDR
Top-performing groups each used some sort of vocabulary normalization device specific to the medical domain, supporting the hypothesis that language use within electronic health records is sufficiently different from general use to warrant domain-specific processing.
Overview of the TREC 2014 Clinical Decision Support Track
TLDR
The focus of the 2014 track was the retrieval of biomedical articles relevant for answering generic clinical questions about medical records, using short case reports, such as those published in biomedical articles, as idealized representations of actual medical records.
TREC genomics special issue overview
TLDR
This special issue is devoted to the TREC Genomics Track, which ran from 2003 to 2007, and has expanded in recent years with the growth of new information needs.
The Philosophy of Information Retrieval Evaluation
TLDR
The fundamental assumptions and appropriate uses of the Cranfield paradigm, especially as they apply in the context of the evaluation conferences, are reviewed.
On Building Fair and Reusable Test Collections using Bandit Techniques
TLDR
Analysis demonstrates that the greedy approach common to most bandit methods can be unfair even to the runs participating in the collection-building process when the judgment budget is small relative to the (unknown) number of relevant documents.
Improving retrieval performance by relevance feedback
TLDR
Prescriptions are given for conducting text retrieval operations iteratively using relevance feedback, and evaluation data are included to demonstrate the effectiveness of the various methods.
CORD-19: The COVID-19 Open Research Dataset
TLDR
The mechanics of dataset construction are described, highlighting challenges and key design decisions, an overview of how CORD-19 has been used, and several shared tasks built around the dataset are described.
Overview of the TREC 2019 Precision Medicine Track.
Overview of the TREC 2017 Precision Medicine Track. In The Twenty-Sixth Text REtrieval Conference Proceedings (TREC 2017), NIST Special Publication, 2017.