Can Neural Generators for Dialogue Learn Sentence Planning and Discourse Structuring?

@article{Reed2018CanNG,
  title={Can Neural Generators for Dialogue Learn Sentence Planning and Discourse Structuring?},
  author={Lena I. Reed and Shereen Oraby and Marilyn A. Walker},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.03015}
}
Responses in task-oriented dialogue systems often realize multiple propositions whose ultimate form depends on the use of sentence planning and discourse structuring operations. […] We systematically create large training corpora that exhibit particular sentence planning operations and then test neural models to see what they learn. We compare models without explicit latent variables for sentence planning with ones that provide explicit supervision during training. We show that only the models with…

Constrained Decoding for Neural NLG from Compositional Representations in Task-Oriented Dialogue

This paper proposes using tree-structured semantic representations, like those used in traditional rule-based NLG systems, for better discourse-level structuring and sentence-level planning, and introduces a challenging dataset using this representation for the weather domain.

Learning from Mistakes: Combining Ontologies via Self-Training for Dialogue Generation

This work explores, for the first time, whether it is possible to train an NLG for a new larger ontology using existing training sets for the restaurant domain, where each set is based on a different ontology.

A Tree-to-Sequence Model for Neural NLG in Task-Oriented Dialog

A tree-to-sequence model is proposed that uses a tree-LSTM encoder to leverage the tree structures in the input MR and further enhances decoding with a structure-enhanced attention mechanism; the model is more data-efficient and generalizes better to hard scenarios.

Proceedings of the 1st Workshop on Discourse Structure in Neural NLG (DSNNLG 2019)


Fine-Grained Control of Sentence Segmentation and Entity Positioning in Neural NLG

This paper introduces fine-grained control of sentence planning in neural data-to-text generation models at two levels: realization of input entities in desired sentences, and realization of input entities at desired positions within individual sentences.

Identifying Untrustworthy Samples: Data Filtering for Open-domain Dialogues with Bayesian Optimization

This paper presents a data filtering method for open-domain dialogues, which identifies untrustworthy samples in training data with a quality measure that linearly combines seven dialogue attributes, and proposes a training framework that integrates maximum likelihood estimation (MLE) with a negative training method (NEG).

Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG

YelpNLG is presented, a corpus of 300,000 rich, parallel meaning representations and highly stylistically varied reference texts spanning different restaurant attributes, and a novel methodology that can be scalably reused to generate NLG datasets for other domains is described.

AggGen: Ordering and Aggregating while Generating

Experiments on the WebNLG and E2E challenge data show that by using fact-based alignments the approach is more interpretable, expressive, robust to noise, and easier to control, while retaining the advantages of end-to-end systems in terms of fluency.

PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation

This work presents PAIR, a content-controlled text generation framework with planning and iterative refinement built on the large pre-trained model BART, and proposes a refinement algorithm that gradually improves generation quality within the sequence-to-sequence framework.

Maximizing Stylistic Control and Semantic Accuracy in NLG: Personality Variation and Discourse Contrast

A huge performance improvement in both stylistic control and semantic accuracy over the state of the art on two stylistic benchmark tasks, generating language that exhibits variation in personality, and generating discourse contrast.

References

Showing 1-10 of 60 references

To Plan or not to Plan? Discourse Planning in Slot-Value Informed Sequence to Sequence Models for Language Generation

This work investigates sequence-to-sequence (seq2seq) models in which slot values are included as part of the input sequence and the output surface form, and examines whether a separate sentence planning module that decides on the grouping of slot-value mentions as input to the seq2seq model results in more natural sentences than a seq2seq model that aims to jointly learn the plan and the surface realization.
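The contrast between joint planning-and-realization and a separate planning step can be made concrete with a small example. The following is a hypothetical sketch (the MR, slot names, <SENT> delimiter, and grouping are illustrative, not taken from the paper) of how a flat slot-value input differs from one pre-grouped by a sentence planner:

```python
# Hypothetical illustration of the two input conditions compared above:
# feeding a flat slot-value MR versus pre-grouping slots into sentence plans.
# The MR, slot names, and grouping below are invented examples, not from the paper.
mr = [("name", "Green Man"), ("food", "Italian"),
      ("priceRange", "cheap"), ("area", "riverside")]

# Flat linearization: the seq2seq model must learn grouping and realization jointly.
flat_input = " ".join(f"{slot}[{value}]" for slot, value in mr)
# -> "name[Green Man] food[Italian] priceRange[cheap] area[riverside]"

# With a separate sentence planner: slots are grouped into sentences up front,
# so the generator only has to learn the surface realization of each group.
sentence_plan = [mr[:2], mr[2:]]
planned_input = " <SENT> ".join(
    " ".join(f"{slot}[{value}]" for slot, value in group) for group in sentence_plan
)
# -> "name[Green Man] food[Italian] <SENT> priceRange[cheap] area[riverside]"

print(flat_input)
print(planned_input)
```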

Training a sentence planner for spoken dialogue using boosting

Individual and Domain Adaptation in Sentence Planning for Dialogue

This paper presents and evaluates a trainable sentence planner for providing restaurant information in the MATCH dialogue system, and provides the first demonstration of individual preferences for sentence planning operations, affecting the content order, discourse structure and sentence structure of system responses.

Investigating Linguistic Pattern Ordering In Hierarchical Natural Language Generation

The experiments show that the proposed method significantly outperforms the traditional seq2seq model with a smaller model size, and the design of the hierarchical attentional decoder can be applied to various NLG systems.

Neural MultiVoice Models for Expressing Novel Personalities in Dialog

It is shown that a model trained to achieve a single stylistic personality target can produce outputs that combine stylistic targets, and that, contrary to the authors' predictions, the learned models do not simply interpolate model parameters, but rather produce styles that are distinct from, and novel with respect to, the personalities they were trained on.

Planning Text for Advisory Dialogues: Capturing Intentional and Rhetorical Information

It is argued that, to handle explanation dialogues successfully, a discourse model must include information about the intended effect of individual parts of the text on the hearer, as well as how the parts relate to one another rhetorically.

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

A statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure that can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates.
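For readers unfamiliar with the semantic gating mechanism summarized above, here is a minimal sketch of the idea, assuming a PyTorch-style implementation; class and parameter names (SCLSTMCell, da_size, etc.) are illustrative and not taken from the paper's code:

```python
# Minimal sketch of the semantically controlled LSTM cell idea:
# a standard LSTM cell augmented with a dialogue-act (DA) vector that is
# gradually "consumed" as slots are realized, and fed into the cell update.
import torch
import torch.nn as nn

class SCLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, da_size):
        super().__init__()
        # Standard LSTM gates (input, forget, output, candidate) in one projection.
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        # Reading gate: decides how much of the DA vector to retain this step.
        self.read_gate = nn.Linear(input_size + hidden_size, da_size)
        # Projects the remaining DA vector into the cell state.
        self.da_to_cell = nn.Linear(da_size, hidden_size, bias=False)

    def forward(self, x, h, c, d):
        z = torch.cat([x, h], dim=-1)
        i, f, o, g = self.gates(z).chunk(4, dim=-1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        # Reading gate switches off slots as they get realized in the output.
        r = torch.sigmoid(self.read_gate(z))
        d = r * d                                      # d_t = r_t * d_{t-1}
        # Cell update includes a contribution from the remaining semantics.
        c = f * c + i * g + torch.tanh(self.da_to_cell(d))
        h = o * torch.tanh(c)
        return h, c, d   # d is returned so the next step sees what is left to say
```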

Polite Dialogue Generation Without Parallel Data

Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.

Trainable Sentence Planning for Complex Information Presentations in Spoken Dialog Systems

It is shown that trainable sentence planning can produce output comparable to that of MATCH's template-based generator even for quite complex information presentations.
...