Fumito Masui

In this paper we describe an evaluation of a question answering task, the Question Answering Challenge 2 (QAC2). This evaluation project was first carried out at the NTCIR Workshop 3 in October 2002. One objective of the QAC was to develop practical QA systems in a general domain by focusing on research relating to user interaction and information extraction. Our …
In this paper we describe the Question Answering Challenge (QAC), a question answering task, and its first evaluation (QAC1). The project was carried out as a task of the NTCIR Workshop 3 in October 2002. One objective of the QAC was to develop practical QA systems in a general domain by focusing on research relating to user interaction and information …
In this paper we describe a question answering task, called the Question Answering Challenge (QAC), and its first evaluation (QAC1). This was carried out as a task of the NTCIR Workshop 3 in October 2002. In QAC, we aimed to encourage the development of practical QA systems in a general domain and to focus on research on user interaction and information extraction. …
This paper provides an overview of NTCIR-5 QAC3 (Question Answering Challenge 3). QAC is a series of challenges for evaluating question answering technologies in Japanese. Building on the success of the previous two workshops, QAC3 follows the same course as the earlier QACs, with its task limited to the one corresponding to QAC2 Subtask 3, aiming at the convergence of research …
In QAC-4, we defined a question answering task that accepts any type of question, focusing mainly on non-factoid questions. There were 8 participants and 14 runs from these participants. In the evaluation, four kinds of criteria were applied to a portion of the participants' answer sets. The evaluation results showed that some of the participant systems could focus on the area …
QACIAD (Question Answering Challenge for Information Access Dialogue) is an evaluation framework for measuring interactive question answering (QA) technologies. It assumes that users interactively collect information using a QA system for writing a report on a given topic and evaluates, among other things, the capabilities needed under such circumstances. …
We give an overview of Question Answering Challenge (QAC) 2 Subtask 3, a novel challenge for evaluating open-domain question answering technologies, at the NTCIR Workshop 4. In QAC2 Subtask 3, question answering systems are supposed to be used interactively to answer a series of related questions, whereas in the conventional setting, systems answer …
The Oki MET system [3][4] is based on term recognition rules over surface linguistic expressions and on parse trees generated by the parsing module of an MT system. The MET system first identifies term boundaries by Japanese character types. Named entities are then recognized using a Japanese suffix list and a word list for each NE type. After recognition of surface …
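The abstract above only names the two steps (splitting candidate terms at Japanese character-type boundaries, then labelling them via a suffix list and a per-type word list). The following Python fragment is a minimal, purely illustrative sketch of that idea, not the actual Oki MET implementation; the resource tables and example strings are hypothetical toy stand-ins.

    # Illustrative sketch of character-type segmentation plus list lookup.
    # Not the Oki MET system: the real rules also merge adjacent segments and
    # use parse-tree information, which this toy version omits.

    def char_type(ch):
        """Classify a character as kanji, hiragana, katakana, digit, latin, or other."""
        code = ord(ch)
        if 0x4E00 <= code <= 0x9FFF:
            return "kanji"
        if 0x3040 <= code <= 0x309F:
            return "hiragana"
        if 0x30A0 <= code <= 0x30FF:
            return "katakana"
        if ch.isdigit():
            return "digit"
        if ch.isascii() and ch.isalpha():
            return "latin"
        return "other"

    def segment_by_char_type(text):
        """Split text into runs of identical character type (candidate term boundaries)."""
        segments, current = [], ""
        for ch in text:
            if current and char_type(ch) != char_type(current[-1]):
                segments.append(current)
                current = ""
            current += ch
        if current:
            segments.append(current)
        return segments

    # Hypothetical stand-ins for the Japanese suffix list and per-type word list.
    SUFFIXES = {"株式会社": "ORGANIZATION", "都": "LOCATION", "市": "LOCATION", "県": "LOCATION"}
    WORDS = {"日本": "LOCATION"}

    def label_segments(segments):
        """Attach an NE type to each segment via word-list lookup or suffix match."""
        labelled = []
        for seg in segments:
            ne_type = WORDS.get(seg)
            if ne_type is None:
                for suffix, tag in SUFFIXES.items():
                    if seg.endswith(suffix):
                        ne_type = tag
                        break
            labelled.append((seg, ne_type))
        return labelled

    if __name__ == "__main__":
        text = "東京都に本社を置くオキ株式会社"
        print(label_segments(segment_by_char_type(text)))

Run on the toy sentence, the sketch prints each character-type segment with its guessed NE type (東京都 as LOCATION via the 都 suffix, 株式会社 as ORGANIZATION); all other segments are left unlabelled.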
A novel challenge for evaluating open-domain question answering technologies is proposed. In this challenge, question answering systems are supposed to be used interactively to answer a series of related questions, whereas in the conventional setting, systems answer isolated questions one by one. Such an interaction occurs in the case of gathering …