Probing Commonsense Explanation in Dialogue Response Generation

@inproceedings{Zhou2021ProbingCE,
  title={Probing Commonsense Explanation in Dialogue Response Generation},
  author={Pei Zhou and Pegah Jandaghi and Bill Yuchen Lin and Justin Cho and Jay Pujara and Xiang Ren},
  booktitle={EMNLP},
  year={2021}
}
Humans use commonsense reasoning (CSR) implicitly to produce natural and coherent responses in conversations. Aiming to close the gap between current response generation (RG) models and human communication abilities, we want to understand why RG models respond as they do by probing RG models' understanding of the commonsense reasoning that elicits proper responses. We formalize the problem by framing commonsense as a latent variable in the RG task and using explanations for responses as textual…
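
Concretely, the latent-variable framing in the abstract can be read as marginalizing over unobserved commonsense explanations. In our own notation (a sketch, not necessarily the paper's symbols), with dialogue context c, response r, and latent explanation z:

  p(r \mid c) = \sum_{z} p(r \mid c, z)\, p(z \mid c)

where p(z | c) hypothesizes a commonsense explanation from the context and p(r | c, z) generates the response conditioned on it; probing then asks whether an RG model's behavior is consistent with a plausible z.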

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

This work proposes leveraging a unique characteristic of dialogues, the sharing of commonsense knowledge across participants, to resolve the difficulties in summarizing them and to generate more informative and consistent summaries than existing methods.

ProsocialDialog: A Prosocial Backbone for Conversational Agents

This work introduces ProsocialDialog, the first large-scale multi-turn dialogue dataset for teaching conversational agents to respond to problematic content following social norms, along with Canary, a dialogue safety detection module capable of generating rules of thumb (RoTs) given conversational context, and Prost, a socially informed dialogue agent.

Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning

This work proposes Non-Replacement Confidence (NRC), a novel commonsense reasoning metric that scores PLMs according to the Replaced Token Detection (RTD) pre-training objective from ELECTRA, and shows that pre-endowed commonsense knowledge, especially in RTD-based PLMs, is essential for downstream reasoning.
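
As a rough illustration of confidence-based scoring with an RTD discriminator (a minimal sketch under our own assumptions; NRC's exact token aggregation may differ from this averaging):

import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# RTD discriminator: for each token it emits a logit where higher
# means "this token looks replaced".
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
model.eval()

def non_replacement_confidence(sentence: str) -> float:
    """Average probability that each token is original (not replaced)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)  # (seq_len,)
    # P(original) = 1 - sigmoid(logit) = sigmoid(-logit).
    p_original = torch.sigmoid(-logits)
    mask = inputs["attention_mask"].squeeze(0).bool()
    return p_original[mask].mean().item()

print(non_replacement_confidence("Lemons taste sour."))   # plausible: higher score
print(non_replacement_confidence("Lemons taste furry."))  # implausible: lower score

Unlike perplexity, this score comes from the discriminator's judgment of whether tokens fit their context, which is the intuition behind evaluating confidence instead of perplexity.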

Coalescing Global and Local Information for Procedural Text Understanding

This work proposes a new model that builds entity- and timestep-aware input representations (local input) while considering the whole context (global input) and jointly models entity states with a structured prediction objective (global output), optimizing for both precision and recall.

ConceptNet infused DialoGPT for Underlying Commonsense Understanding and Reasoning in Dialogue Response Generation

Pre-trained conversational models still fail to capture the implicit commonsense (CS) knowledge hidden in dialogue interactions, even though they were pre-trained on enormous datasets.

References

Commonsense-Focused Dialogues for Response Generation: An Empirical Study

This paper auto-extracts commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet, a commonsense knowledge graph, and proposes an approach for the automatic evaluation of commonsense that relies on features derived from ConceptNet and pre-trained language and dialogue models, showing reasonable correlation with human judgments of responses' commonsense quality.
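
A minimal sketch of ConceptNet-based dialogue filtering in the spirit described above (the toy triple set and the string-matching heuristic are our simplification; the paper's extraction pipeline is more involved):

import itertools

# Toy stand-in for ConceptNet edges; a real pipeline would query the full graph.
CONCEPTNET_PAIRS = {("rain", "umbrella"), ("coffee", "awake"), ("exam", "study")}
VOCAB = {c for pair in CONCEPTNET_PAIRS for c in pair}

def mentioned_concepts(utterance: str) -> set[str]:
    """Naive concept spotting: lowercase tokens intersected with the vocab."""
    return {t.strip(".,!?") for t in utterance.lower().split()} & VOCAB

def is_commonsensical_turn(context: str, response: str) -> bool:
    """Flag a turn pair if a concept in the context is linked by a
    ConceptNet edge to a concept in the response."""
    ctx, resp = mentioned_concepts(context), mentioned_concepts(response)
    return any((a, b) in CONCEPTNET_PAIRS or (b, a) in CONCEPTNET_PAIRS
               for a, b in itertools.product(ctx, resp))

print(is_commonsensical_turn("It started to rain on my way home.",
                             "Did you bring an umbrella?"))  # True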

Commonsense Knowledge Aware Conversation Generation with Graph Attention

This is the first attempt to use large-scale commonsense knowledge in conversation generation; unlike existing models that use knowledge triples (entities) separately and independently, this model treats each knowledge graph as a whole, encoding more structured, connected semantic information from the graphs.

Wizard of Wikipedia: Knowledge-Powered Conversational Agents

The best-performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while a new benchmark allows for measuring further improvements in this important research direction.

RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge

RiddleSense, a new multiple-choice question answering task, is presented, which comes with the first large dataset (5.7k examples) for answering riddle-style commonsense questions; the authors point out that there is a large gap between the best supervised model and human performance.

Differentiable Open-Ended Commonsense Reasoning

DrFact, an efficient differentiable model for multi-hop reasoning over knowledge facts, is proposed; it outperforms strong baseline methods by a large margin on a benchmark constructed to evaluate open-ended commonsense reasoning (OpenCSR) methods.
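
The differentiable-hop idea can be illustrated with a toy sketch (our simplification; DrFact's actual operators work over a large sparse fact index with learned hop functions):

import numpy as np

# Toy fact-to-fact links: facts i and j are connected if they share a concept.
F = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

def hop(p: np.ndarray) -> np.ndarray:
    """One differentiable reasoning hop: move probability mass from the
    currently attended facts to their neighbours, then renormalize."""
    q = p @ F
    return q / q.sum()

p = np.array([1., 0., 0.])  # start with all attention on fact 0
print(hop(hop(p)))          # mass after two hops: [0.5 0.  0.5]

Because each hop is a matrix product, the whole multi-hop chain stays differentiable and can be trained end to end.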

Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs

ConceptFlow, a new conversation generation model, leverages commonsense knowledge graphs to explicitly model conversation flows; experiments demonstrate ConceptFlow's effectiveness over previous knowledge-aware conversation models and GPT-2-based models while using 70% fewer parameters, confirming the advantage of explicitly modeling conversation structure.
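
The "guided traverse" idea can be pictured as expanding a concept set outward from the dialogue context over a commonsense graph (a toy sketch; ConceptFlow additionally scores the hops with learned attention, which is omitted here):

from collections import defaultdict

# Toy commonsense graph: concept -> neighboring concepts.
GRAPH = defaultdict(set, {
    "chess": {"game", "strategy"},
    "game": {"play", "win"},
    "strategy": {"plan"},
})

def traverse(context_concepts: set[str], hops: int = 2) -> list[set[str]]:
    """Return the frontier of concepts reached at each hop from the context,
    approximating the zero-, one-, and two-hop concept sets used for grounding."""
    frontiers, seen = [set(context_concepts)], set(context_concepts)
    for _ in range(hops):
        nxt = {n for c in frontiers[-1] for n in GRAPH[c]} - seen
        frontiers.append(nxt)
        seen |= nxt
    return frontiers

print(traverse({"chess"}))
# e.g. [{'chess'}, {'game', 'strategy'}, {'plan', 'play', 'win'}]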

Social IQA: Commonsense Reasoning about Social Interactions

Social IQa, the first large-scale benchmark for commonsense reasoning about social situations, is shown to be challenging for existing question-answering models based on pretrained language models, which trail human performance by a gap of more than 20%.

MuTual: A Dataset for Multi-Turn Dialogue Reasoning

MuTual, a novel dataset for multi-turn dialogue reasoning, is introduced, consisting of 8,860 manually annotated dialogues based on Chinese students' English listening comprehension exams; results show there is ample room for improving reasoning ability.

Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset

This work proposes a new benchmark for empathetic dialogue generation and EmpatheticDialogues, a novel dataset of 25k conversations grounded in emotional situations, and presents empirical comparisons of dialogue model adaptations for empathetic responding that leverage existing models or datasets without requiring lengthy retraining of the full model.

Conversational Neuro-Symbolic Commonsense Reasoning

An interactive conversational framework built on the authors' neuro-symbolic system is presented, which conversationally evokes commonsense knowledge from humans to complete its reasoning chains, along with a neuro-symbolic theorem prover that extracts multi-hop reasoning chains for this problem.