We describe a method which uses one or more intermediary languages in order to automatically generate translation dictionaries. Such a method could potentially be used to efficiently create translation dictionaries for language groups which have as yet had little interaction. For any given word in the source language, our method involves first translating …
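As a rough illustration of how such pivot-based lookup can work (a sketch only, with hypothetical dictionaries and function names rather than the paper's actual implementation): candidates reached through every intermediary language are intersected, which filters out spurious translations introduced by polysemous pivot words.

    from typing import Dict, List, Set

    def pivot_translations(word: str,
                           src_to_pivot: List[Dict[str, Set[str]]],
                           pivot_to_tgt: List[Dict[str, Set[str]]]) -> Set[str]:
        """Collect target-language candidates for `word` through each
        intermediary language, keeping only those reachable via all of them."""
        candidate_sets = []
        for s2p, p2t in zip(src_to_pivot, pivot_to_tgt):
            candidates: Set[str] = set()
            for pivot_word in s2p.get(word, set()):
                candidates |= p2t.get(pivot_word, set())
            candidate_sets.append(candidates)
        # Intersecting across pivots discards translations that only arise
        # from an ambiguous intermediary word.
        return set.intersection(*candidate_sets) if candidate_sets else set()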
This report describes the experiments of the University of Edinburgh and the University of Sydney at the TREC-2004 question answering evaluation exercise. Our system combines two approaches: one with deep linguistic analysis, using IR on the AQUAINT corpus applied to answer extraction from text passages, and one with shallow linguistic analysis and shallow …
We show how to adapt an existing monolingual open-domain QA system to perform in a cross-lingual environment, using off-the-shelf machine translation software. In our experiments we use French and German as source languages, and English as the target language. For answering factoid questions, our system performs with an accuracy of 16% (German to English) and …
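In outline, this adaptation amounts to a translate-then-answer pipeline. The sketch below assumes a translate callable wrapping the off-the-shelf MT software and an answer_english callable wrapping the existing monolingual QA system; both names are placeholders, not the system's actual interfaces.

    def answer_cross_lingual(question: str, translate, answer_english):
        """Translate the source-language (French or German) question into
        English, then hand it to the unmodified monolingual QA pipeline."""
        english_question = translate(question)   # off-the-shelf MT step
        return answer_english(english_question)  # existing English QA system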
This report describes the system developed by the University of Edinburgh and the University of Sydney for the TREC-2005 question answering evaluation exercise. The backbone of our question-answering platform is QED, a linguistically-principled QA system. We experimented with external sources of knowledge, such as Google and Wikipedia, to enhance the …
We present improvements and modifications of the QED open-domain question answering system developed for TREC-2003 to make it cross-lingual for participation in the Cross-Language Evaluation Forum (CLEF) Question Answering Track 2004, with French and German as source languages and English as the target language. We use rule-based question translation extended …
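To illustrate what rule-based question translation can look like in general (the rules and the fallback_mt parameter below are invented for this sketch and are not the rules used in the system): surface patterns for frequent question shapes are rewritten directly, with anything unmatched deferred to general-purpose MT.

    import re

    # Illustrative surface rewrite rules for common question shapes.
    # Captured slots (usually named entities) are carried over verbatim.
    RULES = [
        (re.compile(r"^Wer ist (.+)\?$"), r"Who is \1?"),              # German
        (re.compile(r"^Qui est (.+)\?$"), r"Who is \1?"),              # French
        (re.compile(r"^Wann wurde (.+) geboren\?$"), r"When was \1 born?"),
    ]

    def translate_question(question: str, fallback_mt) -> str:
        """Apply the first matching rewrite rule; otherwise defer to MT."""
        for pattern, template in RULES:
            if pattern.match(question):
                return pattern.sub(template, question)
        return fallback_mt(question)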
Factoid Question Answering has attracted much research interest in recent years. The performance of state-of-the-art factoid QA systems, in terms of the correctness of answers, appears to be approaching a reasonable level, as shown by the TREC QA exercises [2], making Question Answering nearly viable for practical use. However, one important issue that has …
The method of Topic Indexing and Retrieval for QA presented in this paper enables fast and efficient QA for questions with named-entity answers. This is achieved by identifying all possible named-entity answers in a corpus off-line and gathering all possible evidence for their direct retrieval as answer candidates using standard IR techniques. An evaluation …
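A rough sketch of the off-line/on-line split the abstract describes, assuming passages are plain text strings and taking extract_entities (a named-entity recogniser) and score (a standard IR similarity function) as unspecified helpers rather than part of the published method:

    from collections import defaultdict

    def build_entity_index(passages, extract_entities):
        """Off-line: attach every passage mentioning a named entity to that
        entity, forming its evidence 'document'."""
        index = defaultdict(list)
        for passage in passages:
            for entity in extract_entities(passage):
                index[entity].append(passage)
        return index

    def retrieve_candidates(question, index, score, k=10):
        """On-line: rank entities by IR similarity between the question and
        their accumulated evidence, returning the top k answer candidates."""
        ranked = sorted(index.items(),
                        key=lambda item: score(question, " ".join(item[1])),
                        reverse=True)
        return [entity for entity, _ in ranked[:k]]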
This paper presents methods for answering what we call Cross-passage Evidence Questions. These questions require multiple scattered passages, each bearing different and partial evidence for the answer. This poses special challenges to textual QA systems that employ information retrieval in the “conventional” way, because the ensuing Answer Extraction …
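One plausible way to handle such questions, sketched here under the assumption of hypothetical extract_candidates and evidence_score helpers (not the paper's own components), is to accumulate partial evidence for each candidate across all retrieved passages rather than expecting any single passage to justify the answer on its own.

    from collections import defaultdict

    def aggregate_cross_passage(passages, extract_candidates, evidence_score):
        """Sum partial evidence for each candidate over all passages, then
        return the best-supported candidate (or None if there are none)."""
        totals = defaultdict(float)
        for passage in passages:
            for candidate in extract_candidates(passage):
                totals[candidate] += evidence_score(candidate, passage)
        return max(totals, key=totals.get) if totals else None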