Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

@inproceedings{Wen2015SemanticallyCL,
  title={Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems},
  author={Tsung-Hsien Wen and Milica Gašić and Nikola Mrkšić and Pei-Hao Su and David Vandyke and Steve J. Young},
  booktitle={EMNLP},
  year={2015}
}
© 2015 Association for Computational Linguistics. [...]

Key Method: The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on […]
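As a rough illustration of the key method above, the following is a minimal PyTorch sketch of an SC-LSTM-style cell: a standard LSTM cell extended with a dialogue-act (DA) vector that a sigmoid reading gate consumes step by step, with the remaining DA content injected into the memory cell. Layer names, sizes, and the constant alpha are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class SCLSTMCell(nn.Module):
    """Sketch of a semantically conditioned LSTM cell (assumed layout)."""

    def __init__(self, input_size, hidden_size, da_size, alpha=0.5):
        super().__init__()
        # One fused projection for the four standard LSTM gates (i, f, o, g).
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        # Reading gate r_t = sigmoid(W_wr w_t + alpha * W_hr h_{t-1}).
        self.w_r = nn.Linear(input_size, da_size, bias=False)
        self.h_r = nn.Linear(hidden_size, da_size, bias=False)
        # Projects the retained DA vector into the memory cell.
        self.d_c = nn.Linear(da_size, hidden_size, bias=False)
        self.alpha = alpha

    def forward(self, w_t, state):
        h_prev, c_prev, d_prev = state
        i, f, o, g = self.gates(torch.cat([w_t, h_prev], dim=-1)).chunk(4, dim=-1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        # DA cell: the reading gate gradually consumes slots from d_t,
        # so each piece of the dialogue act is expressed once.
        r = torch.sigmoid(self.w_r(w_t) + self.alpha * self.h_r(h_prev))
        d = r * d_prev
        # Standard cell update plus the semantic conditioning term tanh(W_dc d_t).
        c = f * c_prev + i * g + torch.tanh(self.d_c(d))
        h = o * torch.tanh(c)
        return h, (h, c, d)

Initialising d with the DA representation of the target utterance and training with plain cross entropy is what lets sentence planning and surface realisation be optimised jointly; sampling several output candidates then yields the language variation mentioned above.

Citations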
Recurrent neural network language generation for spoken dialogue systems
TLDR
RNNLG is presented, a Recurrent Neural Network (RNN)-based statistical natural language generator that can learn to generate utterances directly from dialogue act – utterance pairs without any predefined syntaxes or semantic alignments.
Context-aware Natural Language Generation for Spoken Dialogue Systems
TLDR
A Context-Aware LSTM model for NLG is proposed, which is completely data-driven, without manually designed templates or rules, and obtains state-of-the-art performance.
RNN Based Language Generation Models for a Hindi Dialogue System
TLDR
Recurrent Neural Network Language Generation (RNNLG) framework-based models are presented, along with an analysis of how they capture intended meaning in terms of content planning and surface realization on a proposed unaligned Hindi dataset.
Multi-domain Neural Network Language Generation for Spoken Dialogue Systems
TLDR
This paper proposes a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps, and shows that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains.
Augmenting Natural Language Generation with external memory modules in Spoken Dialogue Systems
Semantically Conditioned LSTM (SC-LSTM) is one of the state-of-the-art models for Natural Language Generation in Spoken Dialogue Systems. Though it has a Dialogue Act (DA) cell which enables […]
Multi-task Learning for Natural Language Generation in Task-Oriented Dialogue
TLDR
A novel multi-task learning framework, NLG-LM, for natural language generation that explicitly targets for naturalness in generated responses via an unconditioned language model, which can significantly improve the learning of style and variation in human language.
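A hedged sketch of the multi-task idea summarised above: one shared decoder trained with both a DA-conditioned generation loss and an unconditioned language-model loss over the same utterances. The decoder interface and the weight lam are placeholders for illustration, not details from the NLG-LM paper.

import torch.nn.functional as F

def multitask_loss(decoder, da_vec, utterance_ids, lam=0.5):
    # Conditioned pass: predict each next token given the dialogue act.
    logits_cond = decoder(utterance_ids[:, :-1], condition=da_vec)
    nlg_loss = F.cross_entropy(
        logits_cond.flatten(0, 1), utterance_ids[:, 1:].flatten())
    # Unconditioned pass: the same decoder runs as a plain language model,
    # pushing it toward fluent, natural word sequences.
    logits_lm = decoder(utterance_ids[:, :-1], condition=None)
    lm_loss = F.cross_entropy(
        logits_lm.flatten(0, 1), utterance_ids[:, 1:].flatten())
    return nlg_loss + lam * lm_loss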
Semantic Refinement GRU-Based Neural Language Generation for Spoken Dialogue Systems
TLDR
A new approach to NLG is presented by using recurrent neural networks (RNN), in which a gating mechanism is applied before RNN computation, which allows the proposed model to generate appropriate sentences.
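The gating mechanism applied before RNN computation can be pictured roughly as follows: the dialogue-act vector produces a sigmoid gate that refines each input embedding before an off-the-shelf GRU runs. Names and sizes are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SemanticRefinementGRU(nn.Module):
    def __init__(self, embed_size, hidden_size, da_size):
        super().__init__()
        self.gate = nn.Linear(da_size, embed_size)  # produces the refinement gate
        self.gru = nn.GRU(embed_size, hidden_size, batch_first=True)

    def forward(self, embeddings, da_vec):
        # One gate value per embedding dimension, broadcast over time steps.
        r = torch.sigmoid(self.gate(da_vec)).unsqueeze(1)  # (B, 1, E)
        refined = r * embeddings                            # gate the inputs first
        outputs, _ = self.gru(refined)                      # then run the GRU
        return outputs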
Natural Language Generation for Spoken Dialogue System using RNN Encoder-Decoder Networks
TLDR
A Recurrent Neural Network based Encoder-Decoder architecture is presented, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances.
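A generic sketch of the attention step this summary describes, assuming additive (Bahdanau-style) scoring: the decoder state scores the encoded input elements, and their weighted sum drives the next LSTM step. The previous-token embedding usually concatenated to the decoder input is omitted for brevity.

import torch
import torch.nn as nn

class AttentionDecoderStep(nn.Module):
    def __init__(self, enc_size, hidden_size):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(enc_size + hidden_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, 1))
        self.cell = nn.LSTMCell(enc_size, hidden_size)

    def forward(self, enc_elems, h, c):
        # enc_elems: (B, N, E) encoded semantic elements; h, c: (B, H).
        n = enc_elems.size(1)
        h_rep = h.unsqueeze(1).expand(-1, n, -1)
        weights = self.score(torch.cat([enc_elems, h_rep], -1)).softmax(dim=1)
        context = (weights * enc_elems).sum(dim=1)  # select and aggregate
        return self.cell(context, (h, c))           # new (h, c) for this step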
Investigating Linguistic Pattern Ordering In Hierarchical Natural Language Generation
TLDR
The experiments show that the proposed method significantly outperforms the traditional seq2seq model with a smaller model size, and the design of the hierarchical attentional decoder can be applied to various NLG systems.
An Improved LSTM Structure for Natural Language Processing
  • Lirong Yao, Yazhuo Guan
  • Computer Science
    2018 IEEE International Conference of Safety Produce Informatization (IICSPI)
  • 2018
TLDR
An improved NLP method based on a long short-term memory (LSTM) structure is presented, in which parameters are randomly discarded when they are passed backwards in the recursive projection layer; results indicate that the method is better suited to NLP with limited computing resources and large amounts of data.
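One plausible reading of "parameters are randomly discarded when they are passed backwards in the recursive projection layer" is DropConnect-style masking of the projection weights during training, sketched below; this is an interpretation of the summary, not the authors' code.

import torch
import torch.nn as nn

class DropConnectProjection(nn.Module):
    def __init__(self, hidden_size, proj_size, drop_p=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(proj_size, hidden_size) * 0.02)
        self.drop_p = drop_p

    def forward(self, h):
        w = self.weight
        if self.training:
            # Zero out a random subset of projection weights each step,
            # rescaling the survivors to keep the expected output unchanged.
            mask = (torch.rand_like(w) >= self.drop_p).float()
            w = w * mask / (1.0 - self.drop_p)
        return h @ w.t()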

References

Showing 1-10 of 61 references
Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking
TLDR
Results of an evaluation by human judges indicate that the new statistical language generator based on a joint recurrent and convolutional neural network structure produces not only high quality but linguistically varied utterances which are preferred compared to n-gram and rule-based systems.
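The overgenerate-and-rerank loop this describes can be sketched in a few lines; generator.sample and cnn_scorer are hypothetical interfaces standing in for the RNN generator and the convolutional reranker.

def generate_with_reranking(generator, cnn_scorer, da_vec, n_candidates=20):
    # Sample several candidate utterances, then keep the one the
    # convolutional scorer judges the best match for the dialogue act.
    candidates = [generator.sample(da_vec) for _ in range(n_candidates)]
    scores = [cnn_scorer(c, da_vec) for c in candidates]
    best = max(range(n_candidates), key=lambda i: scores[i])
    return candidates[best]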
Trainable approaches to surface natural language generation and their application to conversational dialog systems
TLDR
This work studies how decisions about word ordering and word choice in surface natural language generation can be learned automatically from annotated data, finding the highest-probability word sequence that is consistent with the rules and conditions of the grammar.
Training a sentence planner for spoken dialogue using boosting
TLDR
SPoT, a trainable sentence planner, is presented together with a new methodology for automatically training SPoT on the basis of feedback provided by human judges; the results show that SPoT performs better than the rule-based systems and the baselines, and as well as the hand-crafted system.
Stochastic Language Generation in Dialogue using Factored Language Models
TLDR
Bagel is presented, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs).
Natural Language Generation as Planning Under Uncertainty for Spoken Dialogue Systems
TLDR
A new model for Natural Language Generation (NLG) in Spoken Dialogue Systems is presented and evaluated, based on statistical planning, given noisy feedback from the current generation context, which significantly outperforms all the prior approaches.
Individual and Domain Adaptation in Sentence Planning for Dialogue
TLDR
This paper presents and evaluates a trainable sentence planner for providing restaurant information in the MATCH dialogue system, and provides the first demonstration of individual preferences for sentence planning operations, affecting the content order, discourse structure and sentence structure of system responses.
Phrase-Based Statistical Language Generation Using Graphical Models and Active Learning
TLDR
Bagel is presented, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators, and can generate natural and informative utterances from unseen inputs in the information presentation domain.
A Statistical NLG Framework for Aggregated Planning and Realization
TLDR
It is argued that the statistical approach to NLG reduces the need for complicated knowledge-based architectures and readily adapts to different domains with reduced development time.
Controlling User Perceptions of Linguistic Style: Trainable Generation of Personality Traits
TLDR
Personage is described, a highly parameterizable language generator whose parameters are based on psychological findings about the linguistic reflexes of personality, and a novel SNLG method which uses parameter estimation models trained on personality-annotated data to predict the generation decisions required to convey any combination of scalar values along the five main dimensions of personality.
Stochastic Language Generation for Spoken Dialogue Systems
TLDR
This paper proposes a new corpus-based approach to natural language generation, specifically designed for spoken dialogue systems, as an alternative to template-based and rule-based NLG approaches.