• Corpus ID: 2017135

Dialog-based Language Learning

@inproceedings{Weston2016DialogbasedLL,
  title={Dialog-based Language Learning},
  author={Jason Weston},
  booktitle={NIPS},
  year={2016}
}
  • J. Weston
  • Published in NIPS, 20 April 2016
  • Computer Science
A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision does not reflect how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based…

Citations

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn natural language processing tasks without any explicit supervision when trained on WebText, a new dataset of millions of webpages, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.

Listen, Interact and Talk: Learning to Speak via Interaction

This paper presents an interactive setting for grounded natural language learning in which an agent acquires language by interacting with a teacher and learning from feedback, improving its language skills while taking part in the conversation.

Training Language Models with Language Feedback

This work proposes learning from natural language feedback, which conveys more information per human evaluation, and fine-tunes a GPT-3 model to roughly human-level summarization ability using a three-step learning algorithm.
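The three-step algorithm mentioned above can be outlined concretely. A minimal sketch in Python, assuming the steps are: (1) generate candidate refinements of a draft output conditioned on the written feedback, (2) keep the candidate most consistent with the feedback, and (3) fine-tune on the selections; every helper here (generate_refinements, similarity, fine_tune) is a hypothetical stand-in, not the paper's code.

```python
def learn_from_feedback(model, examples, generate_refinements,
                        similarity, fine_tune, n_candidates=5):
    """Hedged sketch of a three-step learn-from-language-feedback loop.

    examples: iterable of (input, draft_output, human_feedback) triples.
    All callables are assumed interfaces, not a real library API.
    """
    training_pairs = []
    for inp, draft, feedback in examples:
        # Step 1: sample several refinements conditioned on the feedback.
        candidates = generate_refinements(model, inp, draft, feedback,
                                          n=n_candidates)
        # Step 2: keep the refinement that best matches the feedback.
        best = max(candidates, key=lambda c: similarity(c, feedback))
        training_pairs.append((inp, best))
    # Step 3: supervised fine-tuning on the selected refinements.
    return fine_tune(model, training_pairs)
```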

Dialog-to-Action: Conversational Question Answering Over a Large-Scale Knowledge Base

An approach is presented that maps utterances in conversation to logical forms, which are executed on a large-scale knowledge base; this semantic-parsing-based approach outperforms a memory-network-based encoder-decoder model by a large margin.

Extracting Dialog Structure and Latent Beliefs from Dialog Corpus

The method identifies the latent beliefs in conversations and uses them, together with the extracted finite state machine, to tailor the chatbot’s responses appropriately, which can lead to a better conversational experience.

Generative Deep Neural Networks for Dialogue: A Short Review

Recently proposed models based on generative encoder-decoder neural network architectures are reviewed, and it is shown that these models are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.

Latent Intention Dialogue Models

The experimental evaluation of the proposed Latent Intention Dialogue Model shows that it outperforms published benchmarks on both corpus-based and human evaluation, demonstrating the effectiveness of discrete latent variable models for learning goal-oriented dialogues.

Learning and Knowledge Transfer with Memory Networks for Machine Comprehension

A novel curriculum-inspired training procedure for Memory Networks is proposed to improve machine comprehension performance with relatively small volumes of training data, and a loss function incorporating the asymmetric nature of knowledge transfer is suggested.

Unseen Filler Generalization In Attention-based Natural Language Reasoning Models

This paper argues, through experimental analysis, that several existing attention-based models have a hard time generalizing to named entities not seen in the training data, and proposes Unseen Filler Generalization (UFG) as a task, along with two new datasets, to evaluate the filler generalization capability of natural language reasoning models.

Learning from Dialogue after Deployment: Feed Yourself, Chatbot!

On the PersonaChat chit-chat dataset with over 131k training examples, it is found that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.
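A hedged sketch of the self-feeding loop described above: during deployment, the bot estimates user satisfaction from the user's reply and mints new training data accordingly. The function names (predict_satisfaction, ask_for_feedback) are illustrative assumptions, not the paper's API.

```python
def self_feeding_turn(context, user_reply, dialogue_data, feedback_data,
                      predict_satisfaction, ask_for_feedback,
                      threshold=0.5):
    """One deployment turn of a self-feeding chatbot (illustrative)."""
    if predict_satisfaction(context, user_reply) >= threshold:
        # Satisfied user: their reply is a plausible next utterance, so
        # keep it as a new training example for the dialogue task.
        dialogue_data.append((context, user_reply))
    else:
        # Dissatisfied user: ask what the bot should have said, and
        # store that explanation as a feedback training example.
        feedback = ask_for_feedback(context)
        feedback_data.append((context, feedback))
```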
...

References

SHOWING 1-10 OF 38 REFERENCES

Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems

This work proposes a suite of new tasks that test the ability of models to answer factual questions, provide personalization, carry short conversations combining the two, and finally to perform on natural dialogs from Reddit.

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classify these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.

Learning from natural instructions

This work suggests viewing the process of learning a decision function as a natural language lesson interpretation problem, as opposed to learning from labeled examples.

Predicting Tasks in Goal-Oriented Spoken Dialog Systems using Semantic Knowledge Bases

This work defines task prediction as a classification problem rather than as parsing, and uses semantic contexts to improve classification accuracy, which makes a dialog agent more robust to user input and reduces the number of turns required to detect intended tasks.

Learning Knowledge Graphs for Question Answering through Conversational Dialog

This work is the first to acquire knowledge for question-answering from open, natural language dialogs without a fixed ontology or domain model that predetermines what users can say.

End-To-End Memory Networks

A neural network is introduced with a recurrent attention model over a possibly large external memory; it is trained end-to-end and hence requires significantly less supervision during training, making it more generally applicable in realistic settings.
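To make the mechanism concrete, here is a minimal single-hop sketch of the end-to-end memory network attention step in Python/NumPy; the matrix names follow the paper's notation (A, B, C embeddings, final projection W), but the bag-of-words input layout is an illustrative simplification.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memn2n_hop(query, memories, A, B, C, W):
    """One memory hop: attend over stored sentences, predict an answer.

    query:    (d_words,) bag-of-words vector for the question
    memories: (n, d_words) bag-of-words vectors, one per stored sentence
    A, C:     (d_words, d) input/output memory embedding matrices
    B:        (d_words, d) query embedding matrix
    W:        (d, vocab) final answer projection
    """
    u = query @ B                # embedded query
    m = memories @ A             # input memory representations
    c = memories @ C             # output memory representations
    p = softmax(m @ u)           # soft attention weights over memories
    o = p @ c                    # weighted sum of output memories
    return softmax((o + u) @ W)  # distribution over answer words
```

Multiple hops stack this step, feeding o + u in as the next hop's query; because every operation is differentiable, the whole pipeline trains end-to-end from (question, answer) pairs alone.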

Memory Networks

This work describes a new class of learning models called memory networks, which reason with inference components combined with a long-term memory component; they learn how to use these jointly.

Sequence Level Training with Recurrent Neural Networks

This work proposes a novel sequence-level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE, and outperforms several strong baselines for greedy generation.
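Directly optimizing a non-differentiable metric like BLEU is typically done with a REINFORCE-style loss, which is the core of this line of work. A minimal sketch in Python/PyTorch, assuming a sequence has already been sampled from the model and scored; the reward and baseline values are supplied by hypothetical helpers.

```python
import torch

def sequence_level_loss(logits, sampled_ids, reward, baseline):
    """REINFORCE-style sequence-level loss (illustrative sketch).

    logits:      (T, vocab) model scores at each step of the sample
    sampled_ids: (T,) token ids actually sampled from the model
    reward:      scalar test-time metric score (e.g. BLEU) of the sample
    baseline:    scalar estimate of expected reward (variance reduction)
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    steps = torch.arange(sampled_ids.size(0))
    chosen = log_probs[steps, sampled_ids]      # log p of sampled tokens
    # Maximize (reward - baseline) * log p(sample); negate for a loss.
    return -(reward - baseline) * chosen.sum()
```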

Reinforcement Learning for Adaptive Dialogue Systems - A Data-driven Methodology for Dialogue Management and Natural Language Generation

A new methodology for developing spoken dialogue systems is described in detail, and methods for learning from the data, for building simulation environments for training and testing systems, and for evaluating the results are explored.

Reward Shaping with Recurrent Neural Networks for Speeding up On-Line Policy Learning in Spoken Dialogue Systems

Three recurrent neural network approaches are examined for providing reward shaping information, in addition to the primary (task-oriented) environmental feedback, in both simulated and real user scenarios, to increase policy learning speed.
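The standard scheme underlying such work is potential-based reward shaping, where the shaped reward adds the discounted change in a potential function over states. A minimal sketch, assuming the recurrent network's score of a dialogue state serves as the potential; phi here is a hypothetical callable.

```python
def shaped_reward(r_env, state, next_state, phi, gamma=0.99):
    """Potential-based reward shaping (illustrative sketch).

    r_env: primary (task-oriented) environmental reward for this turn.
    phi:   assumed state-potential function, e.g. an RNN's state score.
    """
    return r_env + gamma * phi(next_state) - phi(state)
```

Shaping of this form leaves the optimal policy unchanged while densifying the reward signal, which is why it can speed up on-line policy learning.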