In this paper we highlight a selection of features of scientific text which distinguish it from news stories. We argue that features such as structure, selective use of past tense, voice and stylistic conventions can affect question answering in the scientific domain. We demonstrate this through qualitative observations made while working on retrieving…
This article outlines the participation of the Documents and Linguistic Technology (DLT) Group in the Cross Language French-English Question Answering Task of the Cross Language Evaluation Forum (CLEF). Our aim was to make an initial study of cross language question answering (QA) by adapting the system built for monolingual English QA for the Text…
This article outlines our participation in the Question Answering Track of the Text REtrieval Conference organised by the National Institute of Standards and Technology. This was our second year in the track and we hoped to improve our performance relative to 2002. In the next section we outline the general strategy we adopted, the changes relative to last…
This stage is almost identical to last year. We start off by tagging the query for part-of-speech using XeLDA (2004). We then carry out shallow parsing, looking for various types of phrase. Each phrase is then translated using three different methods: two translation engines and one dictionary. The engines are Reverso (2004) and WorldLingo (2004)…
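The pooling of phrase translations from the two engines and the dictionary can be sketched as below. This is a minimal illustration only: the engine names (Reverso, WorldLingo) come from the abstract, but the `translate_*` functions are hypothetical stand-ins for what were calls to external services, and the lookup tables are invented examples.

```python
# Hypothetical sketch of the phrase-translation step: each shallow-parsed
# phrase is sent to three translation sources and the candidate
# translations are pooled, preserving order and removing duplicates.

def translate_reverso(phrase: str) -> str:
    # Stand-in for a call to the Reverso translation engine.
    lookup = {"premier ministre": "prime minister"}
    return lookup.get(phrase, phrase)

def translate_worldlingo(phrase: str) -> str:
    # Stand-in for a call to the WorldLingo translation engine.
    lookup = {"premier ministre": "prime minister"}
    return lookup.get(phrase, phrase)

def translate_dictionary(phrase: str) -> list[str]:
    # Stand-in for a bilingual dictionary lookup, which may return
    # several candidate translations per phrase.
    lookup = {"premier ministre": ["prime minister", "premier"]}
    return lookup.get(phrase, [phrase])

def translate_phrase(phrase: str) -> list[str]:
    """Pool candidate translations from all three sources, deduplicated."""
    candidates = [translate_reverso(phrase), translate_worldlingo(phrase)]
    candidates += translate_dictionary(phrase)
    seen: set[str] = set()
    pooled = []
    for c in candidates:
        if c not in seen:
            seen.add(c)
            pooled.append(c)
    return pooled

print(translate_phrase("premier ministre"))
```

Pooling rather than picking a single "best" translation lets the downstream retrieval query match documents phrased either way.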
The basic architecture of our factoid system is standard and comprises query type identification, query analysis and translation, retrieval query formulation, document retrieval, text file parsing, named entity recognition and answer entity selection. Factoid classification into 69 query types is carried out using keywords. Associated with each…
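The keyword-based classification step might look something like the following sketch. The abstract mentions 69 query types but does not list them, so the type names and trigger keywords here are purely illustrative assumptions.

```python
# Minimal sketch of keyword-based factoid classification: the first
# rule whose trigger keyword appears in the query determines its type.
# The three rules below are invented examples, not the original 69.

QUERY_TYPE_KEYWORDS = [
    ("when_date", ["when", "what year", "what date"]),
    ("where_location", ["where", "what country", "what city"]),
    ("who_person", ["who"]),
]

def classify_query(query: str) -> str:
    q = query.lower()
    for qtype, keywords in QUERY_TYPE_KEYWORDS:
        if any(kw in q for kw in keywords):
            return qtype
    return "unknown"

print(classify_query("When was CLEF founded?"))
```

Ordering the rules matters with a first-match scheme: more specific triggers should be checked before generic ones.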
Factoids were the first type of question to appear at TREC and they are still the most frequent, with 362 appearing in the current test collection. Each asks for a single piece of information such as a name or a date. The key to our strategy in processing factoids (in common with most other participants) is to predict the expected type of answer (e.g. a…