Corpus ID: 70239846

A Bi-Encoder LSTM Model For Learning Unstructured Dialogs

@article{Shekhar2021ABL,
  title={A Bi-Encoder LSTM Model For Learning Unstructured Dialogs},
  author={Diwanshu Shekhar and Pooran Singh Negi and Mohammad H. Mahoor},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.12269}
}
Creating a data-driven model trained on a large dataset of unstructured dialogs is a crucial step in developing retrieval-based chatbot systems. […] We also show results of experiments performed using several similarity functions, model hyper-parameters, and word embeddings on the proposed architecture.
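As a rough illustration of the bi-encoder idea described above, the sketch below encodes a context and a candidate response with separate LSTMs and scores the pair with a bilinear similarity. The layer sizes, the choice of similarity function, and the class name are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal bi-encoder LSTM sketch (hypothetical sizes and bilinear similarity,
# not the paper's exact configuration).
import torch
import torch.nn as nn

class BiEncoderLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.context_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.response_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.M = nn.Parameter(torch.eye(hidden_dim))  # bilinear similarity: c^T M r

    def forward(self, context_ids, response_ids):
        # The final hidden state of each encoder serves as the utterance vector.
        _, (c_h, _) = self.context_encoder(self.embedding(context_ids))
        _, (r_h, _) = self.response_encoder(self.embedding(response_ids))
        c, r = c_h[-1], r_h[-1]                 # (batch, hidden_dim)
        score = (c @ self.M * r).sum(dim=-1)    # (batch,)
        return torch.sigmoid(score)             # probability that the response fits the context

model = BiEncoderLSTM(vocab_size=10_000)
ctx = torch.randint(0, 10_000, (4, 50))   # 4 contexts, 50 token ids each
rsp = torch.randint(0, 10_000, (4, 20))   # 4 candidate responses
print(model(ctx, rsp).shape)              # torch.Size([4])
```

In a training setup of this kind, each context would be paired with its true response and sampled negatives under a binary cross-entropy loss; a cosine or dot-product similarity could stand in for the bilinear form.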

References

Showing 1-10 of 55 references

The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems

This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data.

A Dataset for Research on Short-Text Conversations

This paper introduces a dataset of short-text conversations based on real-world instances from Sina Weibo, which provides a rich collection of instances for research on finding natural and relevant short responses to a given short text and is useful for both training and testing conversation models.

Improved Deep Learning Baselines for Ubuntu Corpus Dialogs

An in-house implementation of previously reported models is used to perform an independent evaluation, and an ensemble created by averaging the predictions of multiple models achieves a state-of-the-art result for next-utterance ranking on the Ubuntu Dialogue Corpus.
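As a toy illustration of that ensembling step, the snippet below averages the per-candidate probabilities of several models and ranks the candidates by the averaged score; the scores themselves are invented for illustration.

```python
# Toy ensemble by averaging: each row holds one model's probabilities for the
# same candidate responses (the numbers are invented for illustration).
import numpy as np

model_scores = np.array([
    [0.20, 0.70, 0.10, 0.90, 0.50],   # model A
    [0.30, 0.60, 0.20, 0.80, 0.40],   # model B
    [0.10, 0.80, 0.30, 0.90, 0.60],   # model C
])
ensemble = model_scores.mean(axis=0)   # averaged prediction per candidate
ranking = np.argsort(-ensemble)        # candidate indices, best first
print(ranking)                         # [3 1 4 0 2]
```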

Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus

In this paper, we construct and train end-to-end neural network-based dialogue systems using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues with over 7 million utterances and 100 million words.

DocChat: An Information Retrieval Approach for Chatbot Engines Using Unstructured Documents

This paper presents DocChat, a novel information retrieval approach for chatbot engines that can leverage unstructured documents, instead of Q-R pairs, to respond to utterances.

Filter, Rank, and Transfer the Knowledge: Learning to Chat

A three-phase ranking approach is used for predicting suitable responses to a query in a conversation: candidate sentences are first filtered, then efficiently ranked, and finally re-ranked more precisely to select the most suitable response. A hypothetical sketch of such a pipeline follows below.
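The word-overlap scorers in this sketch are cheap stand-ins for the actual filtering, ranking, and re-ranking models the paper describes; the candidate pool and query are made up.

```python
# Hypothetical filter -> rank -> re-rank pipeline; the word-overlap scorers are
# simple stand-ins for the cheaper and more precise models the paper describes.
def filter_candidates(query, candidates, keep=100):
    """Phase 1: cheap lexical filter keeping candidates that share words with the query."""
    q = set(query.lower().split())
    return [c for c in candidates if q & set(c.lower().split())][:keep]

def rank(query, candidates, keep=10):
    """Phase 2: efficient ranking by word-overlap ratio."""
    q = set(query.lower().split())
    key = lambda c: len(q & set(c.lower().split())) / (len(c.split()) + 1)
    return sorted(candidates, key=key, reverse=True)[:keep]

def rerank(query, candidates):
    """Phase 3: a more precise (and more expensive) model would re-score the shortlist here."""
    return candidates  # placeholder: keep the phase-2 order

pool = ["try restarting the network service", "cats are great", "check your network cable"]
query = "my network is down"
print(rerank(query, rank(query, filter_candidates(query, pool)))[0])
# -> "check your network cable"
```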

Sequence to Sequence Learning with Neural Networks

This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence, which made the optimization problem easier.
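A minimal sketch of that source-reversal preprocessing is shown below; the example sentence and whitespace tokenization are placeholders.

```python
# Sketch of the source-reversal trick: feeding the source in reverse places its
# first words next to the decoder's first outputs, shortening those dependencies.
def reverse_source(tokens):
    return list(reversed(tokens))

src = "the cat sat on the mat".split()   # placeholder source sentence
print(reverse_source(src))               # ['mat', 'the', 'on', 'sat', 'cat', 'the']
```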

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

A statistical language generator based on a semantically controlled Long Short-Term Memory (LSTM) structure is proposed; it can learn from unaligned data by jointly optimising sentence planning and surface realisation with a simple cross-entropy training criterion, and language variation can be achieved by sampling from output candidates.

A unified architecture for natural language processing: deep neural networks with multitask learning

We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words, and the likelihood that the sentence makes sense (grammatically and semantically) using a language model.

A Survey of Available Corpora for Building Data-Driven Dialogue Systems

A wide survey of publicly available datasets suitable for data-driven learning of dialogue systems is carried out; important characteristics of these datasets are discussed, along with how they can be used to learn diverse dialogue strategies.
...