Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training

@article{Zhou2021LearningFP,
  title={Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training},
  author={Wangchunshu Zhou and Qifei Li and Chenle Li},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.15171}
}
In this paper, we propose the Inverse Adversarial Training (IAT) algorithm for training neural dialogue systems to avoid generic responses and to model dialogue history better. In contrast to standard adversarial training algorithms, IAT encourages the model to be sensitive to perturbations in the dialogue history and therefore to learn from those perturbations. By giving higher rewards to responses whose output probability drops more significantly when the dialogue history is perturbed, the model is…
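A minimal sketch of the reward described above, assuming a sequence model that exposes a (hypothetical) `score` method returning the log-probability of a response given a history; this illustrates the idea only and is not the authors' implementation:

```python
import torch

def inverse_adversarial_reward(model, history_ids, perturbed_history_ids, response_ids):
    """Illustrative IAT-style reward (assumed form, not the authors' code):
    a response earns a higher reward when its likelihood drops more sharply
    after the dialogue history is perturbed, i.e. when the model actually
    relied on the history to produce it."""
    with torch.no_grad():
        # Log-probability of the response given the original history.
        logp_orig = model.score(history_ids, response_ids)
        # Log-probability of the same response given a perturbed history
        # (e.g. shuffled or truncated turns).
        logp_pert = model.score(perturbed_history_ids, response_ids)
    # Generic responses ("I don't know") score similarly under both
    # histories (reward near zero); context-sensitive responses see a
    # large drop (reward > 0).
    return logp_orig - logp_pert
```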

Citations

Diversifying Neural Dialogue Generation via Negative Distillation

This paper proposes a novel negative training paradigm, called negative distillation, that keeps the model away from undesirable generic responses while avoiding the problems of earlier negative training methods, and shows that the method significantly outperforms those previous methods.

An Empirical Study on the Overlapping Problem of Open-Domain Dialogue Datasets

This work observes the overlapping problem in DailyDialog and OpenSubtitles, two popular open-domain dialogue benchmark datasets, and shows that such overlapping can be exploited to obtain fake state-of-the-art performance.

Factual and Informative Review Generation for Explainable Recommendation

This work proposes to augment the generator with a personalized retriever whose output serves as external knowledge for the generator; the resulting model generates explanations that more reliably entail existing reviews, are more diverse, and are rated as more informative by human evaluators.

Sequential Topic Selection Model with Latent Variable for Topic-Grounded Dialogue

This paper proposes a novel approach, named SGTA, that exploits topic transitions across all conversations in a subtle way to better model post-to-response topic transitions and guide response generation in the current conversation.

Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention

This work proposes a novel attention regularization loss that controls the sharpness of the attention distribution; the loss is transparent to model structures, can be implemented within 20 lines of Python code, and is proved to be mathematically interpretable as learning a Bayesian approximation of posterior attention.
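The loss itself is not reproduced on this page; a plausible form, assumed here, is a penalty on the entropy of each attention distribution (lower entropy means sharper attention):

```python
import torch

def attention_sharpness_loss(attn_weights, eps=1e-9):
    """Assumed form of an attention-sharpness regularizer: penalize the
    entropy of each attention distribution so that attention concentrates
    on fewer positions. `attn_weights` has shape (..., seq_len) and each
    row sums to 1."""
    entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return entropy.mean()
```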

Recent Advances in Neural Text Generation: A Task-Agnostic Survey

A task-agnostic survey of recent advances in neural text generation is presented, with the work grouped under four headings: data construction, neural frameworks, training and inference strategies, and evaluation metrics.

References

Adversarial Learning for Neural Dialogue Generation

This work applies adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances, and investigates models for adversarial evaluation that use success in fooling an adversary as a dialogue evaluation metric while avoiding a number of potential pitfalls.
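A minimal sketch of the generator's policy-gradient step in this kind of adversarial setup, where the discriminator's belief that a sampled response is human-written serves as the reward; the sampling and scoring APIs below are hypothetical:

```python
import torch

def adversarial_generator_step(generator, discriminator, context_ids):
    """Sketch of a REINFORCE update for adversarial dialogue generation:
    the discriminator's probability that a sampled response is
    human-generated is used as the generator's reward."""
    # Hypothetical API: sample a response and its summed token log-probability.
    response_ids, logp = generator.sample_with_logprob(context_ids)
    with torch.no_grad():
        # Hypothetical API: probability the (context, response) pair is human.
        reward = discriminator.prob_human(context_ids, response_ids)
    # Policy-gradient loss: maximize expected reward.
    loss = -(reward * logp).mean()
    loss.backward()
    return loss.item()
```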

DAL: Dual Adversarial Learning for Dialogue Generation

Experimental results demonstrate that DAL effectively improves both the diversity and the overall quality of the generated responses, outperforming state-of-the-art methods on both automatic metrics and human evaluations.

Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders

This work presents a novel framework based on conditional variational autoencoders that captures discourse-level diversity in the encoder, using latent variables to learn a distribution over potential conversational intents and generating diverse responses with only greedy decoders.
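For reference, the standard conditional-VAE objective underlying this framework, with response x, dialogue context c, and latent intent z (the paper also adds auxiliary objectives, such as a bag-of-words loss, omitted here):

```latex
\mathcal{L}(\theta, \phi; x, c) =
  \mathbb{E}_{q_\phi(z \mid x, c)}\big[\log p_\theta(x \mid z, c)\big]
  - \mathrm{KL}\big(q_\phi(z \mid x, c) \,\|\, p_\theta(z \mid c)\big)
```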

Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization

This paper proposes Adversarial Information Maximization (AIM), an adversarial learning framework that addresses both informativeness and diversity and explicitly optimizes a variational lower bound on the pairwise mutual information between query and response.
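The mutual information term is typically lower-bounded variationally with a learned backward model q_ψ(s | t) over sources given targets (a Barber–Agakov-style bound); whether AIM uses exactly this form is an assumption here:

```latex
I(S; T) \;\ge\; H(S) + \mathbb{E}_{p(s, t)}\big[\log q_\psi(s \mid t)\big]
```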

Generating More Interesting Responses in Neural Conversation Models with Distributional Constraints

This work proposes a simple yet effective approach for incorporating side information in the form of distributional constraints over the generated responses, yielding responses that are much less generic without sacrificing plausibility.

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study

This paper takes an empirical approach to understanding how neural generative models use the available dialog history by studying the sensitivity of the models to artificially introduced unnatural changes or perturbations to their context at test time.
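Two representative perturbations of the kind studied, sketched below; the paper's full set includes several more utterance- and word-level corruptions, and the drop rate here is an arbitrary illustration:

```python
import random

def shuffle_utterances(history, rng=random):
    """Utterance-level perturbation: randomly reorder the turns in the
    dialogue history while leaving each turn's text intact."""
    perturbed = list(history)
    rng.shuffle(perturbed)
    return perturbed

def drop_words(utterance_tokens, p=0.3, rng=random):
    """Word-level perturbation: drop each token independently with
    probability p."""
    kept = [tok for tok in utterance_tokens if rng.random() > p]
    return kept or utterance_tokens  # never return an empty utterance
```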

Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models

The recently proposed hierarchical recurrent encoder-decoder neural network is extended to the dialogue domain, and it is demonstrated that this model is competitive with state-of-the-art neural language models and back-off n-gram models.

Improving Neural Conversational Models with Entropy-Based Data Filtering

This work presents a method for filtering dialog datasets by removing generic utterances from the training data with a simple entropy-based approach that requires no human supervision, and shows that training on datasets filtered this way yields better conversational quality, as chatbots learn to output more diverse responses.
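A hedged sketch of one way to realize such filtering, based on my reading of the approach rather than the authors' code: a target utterance that follows many different source contexts has a high-entropy context distribution and is likely generic.

```python
import math
from collections import Counter, defaultdict

def target_entropy(pairs):
    """For each unique target utterance, compute the entropy of the
    distribution of source contexts it responds to. High entropy marks
    utterances that follow almost anything, i.e. generic responses."""
    sources_per_target = defaultdict(Counter)
    for source, target in pairs:
        sources_per_target[target][source] += 1
    entropies = {}
    for target, counts in sources_per_target.items():
        total = sum(counts.values())
        entropies[target] = -sum(
            (c / total) * math.log(c / total) for c in counts.values()
        )
    return entropies

def filter_pairs(pairs, threshold):
    """Keep only (source, target) pairs whose target is below the
    entropy threshold, dropping the most generic responses."""
    ent = target_entropy(pairs)
    return [(s, t) for s, t in pairs if ent[t] <= threshold]
```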

Diversity-Promoting GAN: A Cross-Entropy Based Generative Adversarial Network for Diversified Text Generation

A novel language-model-based discriminator is proposed that can better distinguish novel text from repeated text without the saturation problem of existing classifier-based discriminators.
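The underlying device, as I read it, is to use the discriminator language model's cross-entropy on generated text as the novelty signal; the API name below is an assumption:

```python
import torch

def novelty_reward(lm, token_ids):
    """Assumed form of a language-model-based discriminator reward: the
    mean per-token negative log-likelihood under the discriminator LM.
    Repeated, generic text gets low reward; novel text gets high reward,
    and the signal does not saturate the way a classifier's probability
    output does."""
    with torch.no_grad():
        # Hypothetical API: per-token log-probabilities of the sequence.
        logp = lm.token_logprobs(token_ids)
    return (-logp).mean()
```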

Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models

Experimental results show that the proposed model correlates better with human preference than previous automated evaluation approaches, and the authors propose applying it as a performance indicator during training for better hyperparameter tuning and early stopping.