Corpus ID: 76665205

Consistent Dialogue Generation with Self-supervised Feature Learning

@article{Zhang2019ConsistentDG,
  title={Consistent Dialogue Generation with Self-supervised Feature Learning},
  author={Yizhe Zhang and Xiang Gao and Sungjin Lee and Chris Brockett and Michel Galley and Jianfeng Gao and William B. Dolan},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.05759}
}
Generating responses that are consistent with the dialogue context is one of the central challenges in building engaging conversational agents. Unlike past work that requires external supervision such as user identities, which are often unavailable or classified as sensitive information, our approach trains topic and persona feature extractors in a self-supervised way by exploiting the natural structure of dialogue data.
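
As a rough illustration of the self-supervised idea, the sketch below (PyTorch) trains an utterance-level feature extractor with a same-speaker objective whose 0/1 labels come for free from turn order; the encoder, bilinear scorer, and all names are illustrative choices, not the paper's actual architecture.

```python
# Minimal sketch of self-supervised persona/topic feature learning.
# The only supervision is the natural turn structure of a dialogue:
# alternating turns belong to alternating speakers, so "same speaker
# or not" labels are available without any user identities.
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Encodes a padded batch of token ids into fixed-size feature vectors."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(self.embed(tokens))  # h: (1, batch, dim)
        return h.squeeze(0)                  # (batch, dim)

encoder = UtteranceEncoder(vocab_size=10_000)
scorer = nn.Bilinear(128, 128, 1)   # scores feature compatibility
loss_fn = nn.BCEWithLogitsLoss()

def self_supervised_step(utt_a, utt_b, same_speaker):
    """utt_a, utt_b: (batch, seq) token ids; same_speaker: (batch,) 0/1
    floats derived purely from turn parity, not from user identities."""
    feat_a, feat_b = encoder(utt_a), encoder(utt_b)
    logits = scorer(feat_a, feat_b).squeeze(-1)
    return loss_fn(logits, same_speaker)
```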

Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues

This paper proposes learning a context-response matching model with auxiliary self-supervised tasks designed for dialogue data based on pre-trained language models (PLMs), jointly training the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
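
A minimal sketch of the joint objective, assuming a weighted sum of the primary matching loss and the auxiliary losses; `model`, `aux_tasks`, and the task examples in the comment are hypothetical stand-ins, not the paper's actual components.

```python
# Hedged sketch of multi-task training: the PLM-based response selection
# loss is combined with auxiliary self-supervised losses that share the
# same encoder. Task names in the comment are illustrative only.
def multi_task_loss(batch, model, aux_tasks, aux_weight=0.5):
    # Primary task: does this response match this context?
    loss = model.matching_loss(batch["context"], batch["response"], batch["label"])
    # Auxiliary self-supervised tasks are trained jointly on the same batch
    # (e.g., next-utterance prediction or utterance restoration).
    for task in aux_tasks:
        loss = loss + aux_weight * task.loss(model.encoder, batch)
    return loss
```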

DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation

It is shown that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems.

Will I Sound like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness

Inspired by social cognition and pragmatics, existing dialogue agents are endowed with public self-consciousness on the fly through an imaginary listener, which keeps the agents from uttering contradictions and improves the consistency of existing dialogue models.

Towards Building an Intelligent Chatbot for Customer Service: Learning to Respond at the Appropriate Time

A multi-turn response triggering model (MRTM) is proposed that leverages the semantic matching relationships between the context and the response to train a semantic matching model, obtaining the weights of the co-occurring utterances in the context through an asymmetrical self-attention mechanism.

Dual Task Framework for Improving Persona-grounded Dialogue Dataset

This paper augments relevant personas to improve the dialogue dataset/agent by leveraging the primal-dual structure of the two tasks of predicting dialogue responses and personas based on each other; the method is orthogonally applicable to any dialogue model.

Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

A new end-to-end approach to contentful neural conversation that jointly models response generation and on-demand machine reading is presented, allowing for more focused integration of external knowledge than has been possible in prior approaches.

Challenges in Building Intelligent Open-domain Dialog Systems

This article reviews the recent work on neural approaches that are devoted to addressing three challenges in developing intelligent open-domain dialog systems: semantics, consistency, and interactiveness.

I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling

The DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues are introduced; the best contradiction detection model correlates well with human judgments and is used both to automatically evaluate and to improve the consistency of state-of-the-art generative chatbots.

Structuring Latent Spaces for Stylized Response Generation

StyleFusion is proposed, which bridges conversation modeling and non-parallel style transfer by sharing a structured latent space; this allows the system to generate stylized yet relevant responses by sampling in the neighborhood of the conversation model's prediction, and to continuously control the style level.
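
The controllable sampling step can be sketched as below, assuming an isotropic Gaussian neighborhood around the conversation model's predicted latent code; the names and the specific noise model are illustrative, not StyleFusion's actual interface.

```python
# Illustrative neighborhood sampling in a shared structured latent space.
import numpy as np

def stylized_sample(z_pred: np.ndarray, style_level: float, rng=None):
    """Sample a latent code near the predicted response code z_pred.
    style_level scales the neighborhood radius: 0 reproduces the plain
    prediction, larger values drift toward stylized neighbors. The
    returned code would then be decoded into a response."""
    rng = rng or np.random.default_rng()
    return z_pred + style_level * rng.standard_normal(z_pred.shape)
```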

Conversational Semantic Role Labeling

Experiments show that while traditional SRL systems perform poorly for analyzing dialogues, modeling dialogue histories and participants greatly helps the performance, indicating that adapting SRL to conversations is very promising for universal dialogue understanding.

References

Showing 1-10 of 52 references

DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation

It is shown that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems.

Conversational Contextual Cues: The Case of Personalization and History for Response Ranking

This work evaluates its models on the task of predicting the next response in a conversation, and finds that modeling both context and participants improves prediction accuracy.

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

A neural network-based generative architecture with stochastic latent variables that span a variable number of time steps is proposed; it improves upon recently proposed models, and the latent variables facilitate both the generation of meaningful, long, and diverse responses and the maintenance of dialogue state.

Assigning Personality/Profile to a Chatting Machine for Coherent Conversation Generation

Manual and automatic evaluation shows that the model can deliver more coherent, natural, and diversified responses that are also coherent with the assigned profile.

Long Text Generation via Adversarial Training with Leaked Information

The discriminative net is allowed to leak its own high-level extracted features to the generative net to provide further guidance; without any supervision, LeakGAN is able to implicitly learn sentence structures solely through the interaction between the Manager and the Worker.

Generating Informative Responses with Controlled Sentence Function

The model utilizes a continuous latent variable to capture various word patterns that realize the expected sentence function, and introduces a type controller to handle the compatibility between controlling the sentence function and generating informative content.

A Diversity-Promoting Objective Function for Neural Conversation Models

This work proposes using Maximum Mutual Information (MMI) as the objective function in neural models, and demonstrates that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.
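
A sketch of MMI reranking in its anti-language-model form, score(T) = log p(T|S) − λ log p(T), which penalizes generic responses; the two callables are hypothetical hooks into a seq2seq model and a language model.

```python
# Rerank beam-search candidates by log p(T|S) - lambda * log p(T):
# subtracting the unconditional likelihood demotes bland, high-frequency
# responses such as "I don't know".
def mmi_rerank(hypotheses, log_p_t_given_s, log_p_t, lam=0.5):
    """hypotheses: candidate responses; log_p_*: callables returning
    log-probabilities for one candidate."""
    return sorted(hypotheses,
                  key=lambda t: log_p_t_given_s(t) - lam * log_p_t(t),
                  reverse=True)
```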

Topic Aware Neural Response Generation

A topic-aware sequence-to-sequence (TA-Seq2Seq) model is proposed that utilizes topics to simulate the prior human knowledge that guides people to form informative and interesting responses in conversation; it leverages topic information in generation through a joint attention mechanism and a biased generation probability.
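
The biased generation probability can be sketched roughly as follows, assuming extra probability mass is added to topic words before renormalizing; the exact mixing that TA-Seq2Seq uses may differ.

```python
# Boost the decoder's probabilities for topic words, then renormalize.
import numpy as np

def biased_generation_prob(p_vocab: np.ndarray, topic_word_ids: np.ndarray,
                           topic_bias: np.ndarray) -> np.ndarray:
    """p_vocab: (V,) decoder softmax output; topic_bias: non-negative
    scores (e.g., from topic attention) for the unique topic word ids."""
    p = p_vocab.copy()
    p[topic_word_ids] += topic_bias  # extra mass on topic words
    return p / p.sum()               # renormalize to a distribution
```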

Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation

Two novel models, DI-VAE and DI-VST, are presented that improve VAEs and can discover interpretable semantics via either autoencoding or context prediction, enhancing encoder-decoder models with interpretable generation.
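
Discrete latent codes of this kind are typically kept differentiable with the Gumbel-Softmax trick, sketched below; this shows only the standard trick, not the papers' full objectives (PyTorch also ships it as `torch.nn.functional.gumbel_softmax`).

```python
# Differentiable sampling of a (soft) one-hot discrete latent code.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0):
    """logits: (batch, K) over K discrete latent codes; lower tau gives
    samples closer to one-hot."""
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / tau, dim=-1)
```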

Steering Output Style and Topic in Neural Response Generation

This work decomposes the neural generation process into empirically easier sub-problems: a faithfulness model and a decoding method based on selective sampling, together with training and sampling algorithms that bias the generation process toward a specific language-style restriction or a topic restriction.
...