Probing Commonsense Explanation in Dialogue Response Generation

Pei Zhou, Pegah Jandaghi, Bill Yuchen Lin, Justin Cho, Jay Pujara, Xiang Ren
Humans use commonsense reasoning (CSR) implicitly to produce natural and coherent responses in conversation. Aiming to close the gap between current response generation (RG) models and human communication abilities, we want to understand why RG models respond as they do by probing RG models' understanding of the commonsense reasoning that elicits proper responses. We formalize the problem by framing commonsense as a latent variable in the RG task and using explanations for responses as textual…

ConceptNet infused DialoGPT for Underlying Commonsense Understanding and Reasoning in Dialogue Response Generation

A "two-way learning" method is proposed to capture the bidirectional relationship between commonsense (CS) knowledge and sentence pairs, so that the model can both generate a sentence given CS triplets and generate the underlying CS knowledge given a sentence.

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

Experimental results show that with injected commonsense knowledge, the SICK framework generates more informative and consistent summaries than existing methods.

ProsocialDialog: A Prosocial Backbone for Conversational Agents

This work introduces ProsocialDialog, the first large-scale multi-turn dialogue dataset for teaching conversational agents to respond to problematic content following social norms. It also introduces Canary, a dialogue safety detection module capable of generating rules-of-thumb (RoTs) given conversational context, and Prost, a socially informed dialogue agent.

Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning

A novel commonsense reasoning metric, Non-Replacement Confidence (NRC), is proposed, which operates on PLMs pre-trained with the Replaced Token Detection (RTD) objective from ELECTRA; results show that pre-endowed commonsense knowledge, especially in RTD-based PLMs, is essential for downstream reasoning.

Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics

It is discovered that different classes of DST models have clear strengths and weaknesses: generation models are more promising for handling language variety, span-based classification models are less robust to unseen entities, and each model class has distinct patterns of failure.

Coalescing Global and Local Information for Procedural Text Understanding

A new model, Coalescing Global and Local Information, is proposed, which builds entity- and timestep-aware input representations (local input) considering the whole context (global input), jointly models entity states with a structured prediction objective (global output), and simultaneously optimizes for both precision and recall.

Commonsense-Focused Dialogues for Response Generation: An Empirical Study

This paper auto-extracts commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet, a commonsense knowledge graph, and proposes an approach for automatic evaluation of commonsense that relies on features derived from ConceptNet and pre-trained language and dialogue models, showing reasonable correlation with human evaluation of responses' commonsense quality.

Commonsense Knowledge Aware Conversation Generation with Graph Attention

This is the first attempt to use large-scale commonsense knowledge in conversation generation; unlike existing models that use knowledge triples (entities) separately and independently, this model treats each knowledge graph as a whole, encoding the more structured, connected semantic information in the graphs.

Wizard of Wikipedia: Knowledge-Powered Conversational Agents

The best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while a new benchmark allows for measuring further improvements in this important research direction.

RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge

RiddleSense, a new multiple-choice question answering task, is presented, which comes with the first large dataset (5.7k examples) for answering riddle-style commonsense questions; a large gap remains between the best supervised model and human performance.

Differentiable Open-Ended Commonsense Reasoning

DrFact, an efficient Differentiable model for multi-hop Reasoning over knowledge Facts, is proposed, which outperforms strong baseline methods by a large margin on open-ended commonsense reasoning (OpenCSR).

Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs

A new conversation generation model, ConceptFlow, leverages commonsense knowledge graphs to explicitly model conversation flows; it outperforms previous knowledge-aware conversation models and GPT-2 based models while using 70% fewer parameters, confirming the advantage of explicitly modeling conversation structure.

Social IQA: Commonsense Reasoning about Social Interactions

It is established that Social IQa, the first large-scale benchmark for commonsense reasoning about social situations, is challenging for existing question-answering models based on pretrained language models, which trail human performance by more than 20%.

MuTual: A Dataset for Multi-Turn Dialogue Reasoning

MuTual is introduced, a novel dataset for Multi-Turn dialogue Reasoning, consisting of 8,860 manually annotated dialogues based on Chinese student English listening comprehension exams, which shows that there is ample room for improving reasoning ability.

Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset

This work proposes a new benchmark for empathetic dialogue generation and EmpatheticDialogues, a novel dataset of 25k conversations grounded in emotional situations, and presents empirical comparisons of dialogue model adaptations for empathetic responding, leveraging existing models or datasets without requiring lengthy re-training of the full model.

Conversational Neuro-Symbolic Commonsense Reasoning

An interactive conversational framework built on a neuro-symbolic system is presented, which conversationally elicits commonsense knowledge from humans to complete its reasoning chains, together with a neuro-symbolic theorem prover that extracts multi-hop reasoning chains for this problem.