Corpus ID: 35224858

BioASQ and PubAnnotation: Using linked annotations in biomedical question answering

Authors: Anastasios Nentidis, Zi Yang, Mariana L. Neves, Jin-Dong Kim, Anastasia Krithara, Georgios Paliouras, and I. Kakadiaris
Motivation: The motivation for this proposal is to extrinsically evaluate the resources available in PubAnnotation and to investigate the potential of this repository as an external component of systems with direct real-world biomedical applications, in particular biomedical question answering. Approach: We propose to adapt biomedical question answering systems to take advantage of the linked annotations available in PubAnnotation. Those systems can be used to answer biomedical…


Learning to Answer Biomedical Questions: OAQA at BioASQ 4B
The system extends that of Yang et al. (2015) and integrates additional biomedical and general-purpose NLP annotators, machine learning modules for search-result scoring, collective answer reranking, and yes/no answer prediction.
BioMedLAT Corpus: Annotation of the Lexical Answer Type for Biomedical Questions
Question answering (QA) systems need to provide exact answers for the questions that are posed to them. However, this can only be achieved through precise processing of the question. During…
An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition
Overall, BioASQ helped obtain a unified view of how techniques from text classification, semantic indexing, document and passage retrieval, question answering, and text summarization can be combined to allow biomedical experts to obtain concise, user-understandable answers to questions reflecting their real information needs.
Learning to Answer Biomedical Factoid & List Questions: OAQA at BioASQ 3B
This paper describes the CMU OAQA system evaluated in the BioASQ 3B Question Answering track. We first present a three-layered architecture, and then describe the components integrated for exact…
Results of the BioASQ Tasks of the Question Answering Lab at CLEF 2015
The aim of this paper is to give an overview of the data issued during the BioASQ track of the Question Answering Lab at CLEF 2014, and to present the systems that participated in the challenge and for which system descriptions were received.
Results of the 4th edition of BioASQ Challenge
The data used during the BioASQ challenge is presented, as well as the technologies at the core of the participants' frameworks, suggesting that advances over the state of the art were achieved through the BioASQ Challenge, but also that the benchmark itself is very challenging.