Corpus ID: 24268030

KSU Team's Dialogue System at the NTCIR-13 Short Text Conversation Task 2

Yoichi Ishibashi
In this paper, the methods and results of team KSU for the STC-2 task at NTCIR-13 are described. We implemented both retrieval-based methods and a generation-based method. In the retrieval-based methods, a comment text highly similar to the given utterance is retrieved from Yahoo! News comments data, and the reply to that comment is returned as the response to the input. Two such methods were implemented, differing in the information used for retrieval. It was confirmed that the…
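The retrieval step described above can be sketched as follows. This is a minimal illustration only: it uses bag-of-words cosine similarity over a toy in-memory list of (comment, reply) pairs, whereas the KSU system retrieves from Yahoo! News comment data with its own similarity measures, which are not reproduced here.

```python
# Minimal sketch of a retrieval-based responder: find the stored comment
# most similar to the input utterance, and return the reply paired with it.
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between two texts under a bag-of-words model."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def respond(utterance: str, comment_reply_pairs):
    # Pick the (comment, reply) pair whose comment best matches the utterance.
    best = max(comment_reply_pairs, key=lambda p: cosine_sim(utterance, p[0]))
    return best[1]

pairs = [
    ("the weather is nice today", "yes, perfect for a walk"),
    ("this new phone looks great", "I want one too"),
]
print(respond("the weather looks nice", pairs))  # → yes, perfect for a walk
```

The two KSU retrieval methods differ in the information used for matching; in this sketch that corresponds to swapping out `cosine_sim` for another scoring function.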

Overview of the NTCIR-12 Short Text Conversation Task
The task definition, evaluation measures, test collections, and evaluation results of all teams are reviewed; the main difference between the two subtasks lies in the sources and languages of the test collections.
Analysis of Similarity Measures between Short Text for the NTCIR-12 Short Text Conversation Task
This study compares state-of-the-art methods for estimating text similarity to investigate their performance on short text, specifically under the scenario of short-text conversation, and implements a conversation system using a million tweets crawled from Twitter.
Sequence to Sequence Learning with Neural Networks
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about the sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and target sentences, making the optimization problem easier.
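The source-reversal trick summarized above is a one-line preprocessing step; a minimal illustration (not the cited paper's implementation) is:

```python
# Reverse the source sentence before feeding it to the encoder, so the first
# source word ends up adjacent to the first target word during training.
src = ["je", "suis", "étudiant"]
reversed_src = list(reversed(src))
# The encoder consumes ["étudiant", "suis", "je"];
# the decoder still emits the target sentence in its normal order.
print(reversed_src)
```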
Neural Machine Translation with Latent Semantic of Image and Text
A neural machine translation model is proposed that introduces a continuous latent variable capturing underlying semantics extracted from texts and images; it outperforms the baseline on an English–German translation task.
Zero-resource machine translation by multimodal encoder–decoder network with multimedia pivot
This work proposes an approach to building a neural machine translation system with no supervised resources (i.e., no parallel corpora), using multimodal embedded representations over texts and images with multimedia as the "pivot", and finds that an end-to-end model that simultaneously optimized both the rank loss in the multimodal encoders and the cross-entropy loss in the decoders performed best.
A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation
This work explores an interlingua-inspired solution that jointly learns to encode X and Z into a common representation and to decode Y from that common representation.
Neural Machine Translation by Jointly Learning to Align and Translate
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder–decoder architecture, and it is proposed to extend it by allowing the model to automatically (soft-)search for the parts of a source sentence that are relevant to predicting a target word, without having to form these parts as hard segments explicitly.
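The soft-search described above can be illustrated with a toy attention computation. This sketch uses a plain dot-product score over assumed 2-dimensional encoder states; it is not the exact scoring function of the cited paper, only the general weight-and-average mechanism.

```python
# Toy soft attention: score each encoder state against a decoder query,
# normalize the scores with softmax, and form a weighted-sum context vector.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, encoder_states):
    # Dot-product score for each source position.
    scores = [sum(q * h for q, h in zip(query, state)) for state in encoder_states]
    weights = softmax(scores)  # attention distribution over source positions
    # Context vector: attention-weighted average of the encoder states.
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(len(query))]
    return weights, context

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attend([1.0, 0.0], states)
```

Positions whose states align with the query receive higher weights, so the decoder "soft-searches" the source instead of relying on a single fixed-length summary vector.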
Doubly-Attentive Decoder for Multi-modal Neural Machine Translation
We introduce a multi-modal neural machine translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks.
Bidirectional recurrent neural networks
It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
Imagination Improves Multimodal Translation
This work decomposes multimodal translation into two sub-tasks, learning to translate and learning visually grounded representations, and finds improvements when the translation model is trained on the external News Commentary parallel text dataset.