In TREC-10, we participated in the web track (only the ad-hoc task) and the QA track (only the main task). In the QA track, our QA system (SiteQ) has a general architecture with three processing steps: question processing, passage selection, and answer processing. The key technique is LSPs (Lexico-Semantic Patterns), which are composed of linguistic entries and …
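As a rough illustration of that three-step architecture, the sketch below wires question processing, passage selection, and answer processing together. The function names, the keyword-overlap scoring, and the capitalized-token answer extraction are stand-ins invented for this sketch; SiteQ's actual LSP-based processing is not shown in the truncated abstract above.

def process_question(question: str) -> dict:
    """Very rough question analysis: content keywords plus a guessed answer type."""
    tokens = [t.lower().strip("?,.") for t in question.split()]
    answer_type = "PERSON" if tokens and tokens[0] == "who" else "OTHER"
    keywords = [t for t in tokens if t not in {"who", "what", "is", "the", "a"}]
    return {"keywords": keywords, "answer_type": answer_type}

def select_passages(analysis: dict, passages: list[str], top_k: int = 2) -> list[str]:
    """Rank passages by simple keyword overlap with the question."""
    def overlap(passage: str) -> int:
        words = passage.lower().strip(".").split()
        return sum(words.count(k) for k in analysis["keywords"])
    return sorted(passages, key=overlap, reverse=True)[:top_k]

def process_answers(analysis: dict, passages: list[str]) -> str:
    """Pick an answer candidate: the first capitalized token in the best passage
    (a crude stand-in for pattern-based answer extraction)."""
    for passage in passages:
        for token in passage.split():
            if token[0].isupper():
                return token.strip(",.")
    return "UNKNOWN"

if __name__ == "__main__":
    docs = ["Sejong the Great created the Korean alphabet.",
            "The alphabet has 24 basic letters."]
    analysis = process_question("Who created the Korean alphabet?")
    best = select_passages(analysis, docs)
    print(process_answers(analysis, best))   # prints a crude candidate: "Sejong"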
To resolve some of the lexical disagreement problems between queries and FAQs, we propose a reliable FAQ retrieval system using query log clustering. At indexing time, the proposed system clusters the logs of users' queries into predefined FAQ categories. To increase the precision and recall of clustering, the proposed system adopts a new similarity …
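A minimal sketch of the clustering step, assuming plain cosine similarity over term-frequency vectors as a stand-in for the paper's own similarity measure (which the truncated abstract does not spell out); the FAQ categories, queries, and threshold below are invented for illustration.

from collections import Counter
import math

def tf_vector(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster_query_logs(query_logs, faq_categories, threshold=0.2):
    """Assign each logged query to the most similar FAQ category,
    or to 'unassigned' if no category is similar enough."""
    category_vectors = {name: tf_vector(desc) for name, desc in faq_categories.items()}
    clusters = {name: [] for name in faq_categories}
    clusters["unassigned"] = []
    for query in query_logs:
        qv = tf_vector(query)
        best, best_sim = max(
            ((name, cosine(qv, cv)) for name, cv in category_vectors.items()),
            key=lambda pair: pair[1],
        )
        clusters[best if best_sim >= threshold else "unassigned"].append(query)
    return clusters

if __name__ == "__main__":
    faqs = {
        "password": "how do I reset or change my account password",
        "billing": "billing payment invoice and refund questions",
    }
    logs = ["forgot my password", "refund for last invoice", "weather today"]
    print(cluster_query_logs(logs, faqs))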
In wireless sensor networks, when a sensor node detects events in the surrounding environment, the sensing period for learning detailed information is likely to be short. However, the short sensing cycle increases the data traffic of the sensor nodes along a routing path. Since the high traffic load causes a data queue overflow in the sensor nodes, important …
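The bounded-queue toy below only illustrates the overflow claim: when readings arrive faster than a relay node can forward them, packets beyond the queue capacity are dropped. All periods, capacities, and durations are made up; this is not the paper's protocol.

from collections import deque

def simulate(sensing_period_ms: int, forward_period_ms: int,
             queue_capacity: int, duration_ms: int) -> int:
    """Return how many packets are dropped at the relay node."""
    queue = deque()
    dropped = 0
    for t in range(duration_ms):
        if t % sensing_period_ms == 0:            # a new reading arrives
            if len(queue) < queue_capacity:
                queue.append(t)
            else:
                dropped += 1                       # queue overflow: data lost
        if t % forward_period_ms == 0 and queue:   # relay forwards one packet
            queue.popleft()
    return dropped

if __name__ == "__main__":
    # Short sensing period (10 ms) vs. slower forwarding (25 ms): overflow.
    print("dropped:", simulate(10, 25, queue_capacity=8, duration_ms=2000))
    # Longer sensing period (50 ms): the queue keeps up, nothing is dropped.
    print("dropped:", simulate(50, 25, queue_capacity=8, duration_ms=2000))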
Anaphora in multi-modal dialogues have different aspects from anaphora in language-only dialogues: they often refer to items signified by a gesture or by visual means. In this paper, we define two kinds of anaphora, screen anaphora and referring anaphora, and propose two general methods to resolve them. One is a simple mapping …
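One possible reading of the "simple mapping" method, sketched under assumptions: anaphoric expressions are mapped onto the items currently displayed on screen, or onto the item being pointed at. The expression table and screen-state format are hypothetical, not taken from the paper.

ORDINALS = {"first": 0, "second": 1, "third": 2}

def resolve_screen_anaphor(expression: str, screen_items: list[str],
                           pointed_index: int | None = None) -> str | None:
    """Map an anaphoric expression to a screen item, or None if unresolved."""
    expr = expression.lower()
    if pointed_index is not None and expr in {"this", "this one", "that one"}:
        return screen_items[pointed_index]        # gesture-backed anaphor
    for word, index in ORDINALS.items():
        if word in expr and index < len(screen_items):
            return screen_items[index]            # "the second one" etc.
    if expr in {"it", "that"} and screen_items:
        return screen_items[-1]                   # fall back to most salient item
    return None

if __name__ == "__main__":
    items = ["Hotel Shilla", "Lotte Hotel", "Hotel President"]
    print(resolve_screen_anaphor("the second one", items))            # Lotte Hotel
    print(resolve_screen_anaphor("this one", items, pointed_index=2)) # Hotel President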
A speech act is a linguistic action intended by a speaker. Speech act classification is an essential part of a dialogue understanding system because the speech act of an utterance is closely tied to the user's intention in that utterance. We propose a neural network model for Korean speech act classification. In addition, we propose a method that extracts …
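As a stand-in for the proposed neural model, the toy below trains a single-layer softmax classifier over bag-of-words features. The actual network architecture, feature extraction, and Korean data are not reproduced here; the utterances and act labels are invented.

import numpy as np

UTTERANCES = [
    ("could you book a room", "request"),
    ("please reserve a table", "request"),
    ("what time does it open", "ask"),
    ("when is the checkout", "ask"),
    ("thank you very much", "thank"),
    ("thanks a lot", "thank"),
]

ACTS = sorted({act for _, act in UTTERANCES})
VOCAB = sorted({w for text, _ in UTTERANCES for w in text.split()})

def featurize(text: str) -> np.ndarray:
    vec = np.zeros(len(VOCAB))
    for w in text.split():
        if w in VOCAB:
            vec[VOCAB.index(w)] += 1.0
    return vec

X = np.stack([featurize(t) for t, _ in UTTERANCES])
y = np.array([ACTS.index(a) for _, a in UTTERANCES])

W = np.zeros((len(VOCAB), len(ACTS)))
for _ in range(300):                        # plain batch gradient descent
    logits = X @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0      # gradient of cross-entropy w.r.t. logits
    W -= 0.5 * X.T @ probs / len(y)

def classify(text: str) -> str:
    return ACTS[int(np.argmax(featurize(text) @ W))]

if __name__ == "__main__":
    print(classify("please book a room"))   # expected: request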
We propose a Korean question-answering (QA) system that uses a predictive answer indexer. The predictive answer indexer first extracts all answer candidates from a document at indexing time. Then it scores the adjacent content words that are closely related to each answer candidate. Next, it stores the weighted content words with each …
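A small sketch of the predictive-indexing idea under simplifying assumptions: capitalized or numeric tokens serve as answer candidates, nearby content words are stored with 1/distance weights at indexing time, and candidates are ranked against the question words at retrieval time. The candidate detector, weighting scheme, and stopword list are stand-ins, not the paper's method.

from collections import defaultdict

STOPWORDS = {"the", "a", "an", "of", "in", "was", "is", "by", "who", "when"}

def build_answer_index(document: str, window: int = 4) -> dict:
    """Index each answer candidate with distance-weighted nearby content words."""
    tokens = document.replace(".", " ").split()
    index = defaultdict(lambda: defaultdict(float))
    i = 0
    while i < len(tokens):
        if (tokens[i][0].isupper() and tokens[i].lower() not in STOPWORDS) or tokens[i].isdigit():
            j = i
            while j + 1 < len(tokens) and tokens[j + 1][0].isupper():
                j += 1                              # merge adjacent capitalized tokens
            candidate = " ".join(tokens[i:j + 1])
            lo, hi = max(0, i - window), min(len(tokens), j + 1 + window)
            for k in range(lo, hi):
                word = tokens[k].lower()
                if not (i <= k <= j) and word not in STOPWORDS:
                    dist = min(abs(k - i), abs(k - j))
                    index[candidate][word] += 1.0 / dist   # closer words weigh more
            i = j + 1
        else:
            i += 1
    return index

def answer(question: str, index: dict) -> str:
    """Rank stored candidates by how strongly their context matches the question."""
    q_words = {w.strip("?").lower() for w in question.split()} - STOPWORDS
    def score(candidate: str) -> float:
        return sum(w for word, w in index[candidate].items() if word in q_words)
    return max(index, key=score)

if __name__ == "__main__":
    doc = "The telephone was invented by Alexander Graham Bell in 1876."
    idx = build_answer_index(doc)
    print(answer("Who invented the telephone?", idx))   # prints "Alexander Graham Bell"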