Richard F. E. Sutcliffe

This paper describes the Question Answering for Machine Reading (QA4MRE) task at the 2012 Cross Language Evaluation Forum. In the main task, systems answered multiple-choice questions on documents concerned with four different topics. There were also two pilot tasks, Processing Modality and Negation for Machine Reading, and Machine Reading on Biomedical…
Following the pilot Question Answering Track at CLEF 2003, a new evaluation exercise for multilingual QA systems took place in 2004. This paper reports on the novelties introduced in the new campaign and on participants’ results. Almost all the cross-language combinations between nine source languages and seven target languages were exploited to set up more…
This paper describes the first round of ResPubliQA, a Question Answering (QA) evaluation task over European legislation, proposed at the Cross Language Evaluation Forum (CLEF) 2009. The exercise consists of extracting a relevant paragraph of text that completely satisfies the information need expressed by a natural language question. The general goals of…
The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were exploited to enact 8 monolingual…
This paper describes the Question Answering for Machine Reading (QA4MRE) Main Task at the 2013 Cross Language Evaluation Forum. In the main task, systems answered multiple-choice questions on documents concerned with four different topics. There were also two pilot tasks, Machine Reading on Biomedical Texts about Alzheimer's disease, and Japanese Entrance…
Having been proposed for the fourth time, the QA at CLEF track has confirmed a still rising interest from the research community, recording a constant increase in both the number of participants and submissions. In 2006, two pilot tasks, WiQA and AVE, were proposed besides the main tasks, representing two promising experiments for the future of QA. Also in…
The fifth QA campaign at CLEF, the first having been held in 2006, was characterized by continuity with the past and at the same time by innovation. In fact, topics were introduced, under which a number of Question-Answer pairs could be grouped into clusters that also contained co-references between them. Moreover, the systems were given the possibility to…
In this paper we highlight a selection of features of scientific text which distinguish it from news stories. We argue that features such as structure, selective use of past tense, voice and stylistic conventions can affect question answering in the scientific domain. We demonstrate this through qualitative observations made while working on retrieving…
The paper offers an overview of the key issues raised during the seven years’ activity of the Multilingual Question Answering Track at the Cross Language Evaluation Forum (CLEF). The general aim of the Multilingual Question Answering Track has been to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents…
This was the second year of the C@merata task [16,1], which relates natural language processing to music information retrieval. Each participant builds a system which takes as input a query and a music score and produces as output one or more matching passages in the score. This year, questions were more difficult and scores were more complex. Participants…