Corpus ID: 221339146

Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations

Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Christopher D. Manning
We present Chirpy Cardinal, an open-domain dialogue agent, as a research platform for the 2019 Alexa Prize competition. Building an open-domain socialbot that talks to real people is challenging: such a system must meet multiple user expectations, including broad world knowledge, conversational style, and emotional connection. Our socialbot engages users on their terms, prioritizing their interests, feelings, and autonomy. As a result, our socialbot provides a responsive, personalized user…

Alquist 4.0: Towards Social Intelligence Using Generative Models and Dialogue Personalization

The principles and inner workings of the individual components of Alquist, an open-domain dialogue system developed within the Alexa Prize Socialbot Grand Challenge 4, are presented, along with the experiments conducted to evaluate them.

Modeling Performance in Open-Domain Dialogue with PARADISE

A PARADISE model is developed for predicting the performance of Athena, a dialogue system that has participated in thousands of conversations with real users, while competing as a finalist in the Alexa Prize.

Neural, Neural Everywhere: Controlled Generation Meets Scaffolded, Structured Dialogue

This paper presents the second iteration of Chirpy Cardinal, an open-domain dialogue agent developed for the Alexa Prize SGC4 competition, and introduces a variety of methods for controllable neural generation, ranging from prefix-based neural decoding over a symbolic scaffolding to pure neural modules.

Further Advances in Open Domain Dialog Systems in the Third Alexa Prize Socialbot Grand Challenge

This paper outlines the advances developed by the university teams as well as the Alexa Prize team to move closer to the Grand Challenge objective, addressing several key open-ended problems such as conversational speech recognition, open domain natural language understanding, commonsense reasoning, statistical dialog management and dialog evaluation.

Proto: A Neural Cocktail for Generating Appealing Conversations

This paper dissects and analyzes the different components and conversation strategies implemented by the socialbot, which enable it to generate colloquial, empathetic, engaging, self-rectifying, factually correct, and on-topic responses, helping it achieve consistent scores throughout the competition.

Multimodal Conversational AI: A Survey of Datasets and Approaches

This paper motivates, defines, and mathematically formulates the multimodal conversational research objective, and provides a taxonomy of research required to solve the objective: multi-modality representation, fusion, alignment, translation, and co-learning.

Jurassic is (almost) All You Need: Few-Shot Meaning-to-Text Generation for Open-Domain Dialogue

These are the first results demonstrating that few-shot semantic prompt-based learning can create NLGs that generalize to new domains, and produce high-quality, semantically-controlled, conversational responses directly from meaning representations.

CASPR: A Commonsense Reasoning-based Conversational Socialbot

This work reports on the design and development of the CASPR system, a socialbot designed to compete in the Amazon Alexa Socialbot Challenge 4.0, and presents the philosophy behind CASPR’s design as well as details of its implementation.

Athena 2.0: Contextualized Dialogue Management for an Alexa Prize SocialBot

Athena 2.0 is an Alexa Prize SocialBot that has been a finalist in the last two Alexa Prize Grand Challenges and its novel dialogue management strategy allows it to dynamically construct dialogues and responses from component modules, leading to novel conversations with every interaction.

Guiding the Release of Safer E2E Conversational AI through Value Sensitive Design

A survey of work on end-to-end neural conversational agents highlights tensions between values, potential positive impact, and potential harms, and provides a framework to support practitioners in deciding whether and how to release these models, following the tenets of value-sensitive design.

On Evaluating and Comparing Open Domain Dialog Systems

This paper proposes a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics which correlate well with human judgement, and believes that this work is a step towards an automatic evaluation process for conversational AIs.

Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset

This work proposes a new benchmark for empathetic dialogue generation and EmpatheticDialogues, a novel dataset of 25k conversations grounded in emotional situations, and presents empirical comparisons of dialogue model adaptations for empathetic responding, leveraging existing models or datasets without requiring lengthy re-training of the full model.

Advancing the State of the Art in Open Domain Dialog Systems through the Alexa Prize

Several key open-ended problems such as conversational speech recognition, open domain natural language understanding, commonsense reasoning, statistical dialog management, and dialog evaluation are addressed.

What makes a good conversation? How controllable attributes affect human judgments

This work examines two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chit-chat dialogue: repetition, specificity, response-relatedness and question-asking, and shows that by controlling combinations of these variables their models obtain clear improvements in human quality judgments.

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study

This paper takes an empirical approach to understanding how neural generative models use the available dialog history by studying the sensitivity of the models to artificially introduced unnatural changes or perturbations to their context at test time.

MIDAS: A Dialog Act Annotation Scheme for Open Domain Human-Machine Spoken Conversations

A dialog act annotation scheme, MIDAS (Machine Interaction Dialog Act Scheme), targeted at open-domain human-machine conversations, is designed to assist machines in improving their ability to understand human partners.

TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents

TransferTransfo, a new approach to generative data-driven dialogue systems (e.g., chatbots), is introduced; it combines a transfer-learning-based training scheme with a high-capacity Transformer model and shows strong improvements over current state-of-the-art end-to-end conversational models.

Beyond User Self-Reported Likert Scale Ratings: A Comparison Model for Automatic Dialog Evaluation

This work proposes CMADE (Comparison Model for Automatic Dialog Evaluation), which automatically cleans self-reported user ratings as it trains on them: it first uses a self-supervised method to learn better dialog feature representations, then uses KNN and Shapley values to remove confusing samples.

Dialogue act modeling for automatic tagging and recognition of conversational speech

A probabilistic integration of speech recognition with dialogue modeling is developed, to improve both speech recognition and dialogue act classification accuracy.

#MeToo Alexa: How Conversational Systems Respond to Sexual Harassment

This article establishes how current state-of-the-art conversational systems react to inappropriate requests from the user, such as bullying and sexual harassment, by collecting and analysing the novel #MeTooAlexa corpus.