• Corpus ID: 49216040

Interview with the Robot: Question-Guided Collaboration in a Storytelling System

@inproceedings{WickeVeale,
  title={Interview with the Robot: Question-Guided Collaboration in a Storytelling System},
  author={Philipp Wicke and Tony Veale},
  booktitle={International Conference on Computational Creativity},
}
An automated storytelling system that presents its stories as text on a screen is limited in its engagement with readers. Our robotic writer/presenter has two alternate modes of story-generation: a straight “telling” mode and an interview-oriented back-and-forth that extracts personal experiences from the user as raw material for new stories. We explore the practical issues of implementing both modes on a NAO humanoid robot that integrates gestural capabilities into an existing story-telling…


Are You Not Entertained? Computational Storytelling with Non-Verbal Interaction

We describe the design and implementation of a multi-modal storytelling system in which multiple robots narrate and act out an AI-generated story whose plot can be dynamically altered via non-verbal interaction.

Wheels Within Wheels: A Causal Treatment of Image Schemas in An Embodied Storytelling System

This paper outlines a novel approach to grounding a computational storytelling system in an embodied robotic agent capable of making physical gestures, and argues that image schemas make the ideal glue for linking the causal structures of plot generation to the gestures of bodily expressiveness.

The Role of Gestures and Movement in Computational, Embodied Storytelling

This research proposes the use of symbolic gestures to augment artificial, robotic storytelling in order to explicate its hidden conceptual structures.

Show, Don't (Just) Tell: Embodiment and Spatial Metaphor in Computational Story-Telling

This paper focuses on the embodied realization of computer-generated stories with a mix of physical devices, specifically Amazon’s Echo/Alexa and the NAO anthropomorphic robot, and on the interlocking roles of spatial metaphor and pantomime in turning a narrative artifact into a coherent performance.

The Show Must Go On: On the Use of Embodiment, Space and Gesture in Computational Storytelling

It is shown that audiences are sensitive to the coherent use of space in embodied story-telling, and appreciate the schematic use of spatial movements as much as more culturally specific pantomime gestures.

Creative Action at a Distance: A Conceptual Framework for Embodied Performance With Robotic Actors

This theory and hypothesis article presents a framework for performance and interpretation within robotic storytelling, and hypothesises that emotionally-grounded choices can inform acts of metaphor and blending, to elevate a scripted performance into a creative one.

Socially Assistive Robots as Storytellers that Elicit Empathy

Empathy is the ability to share someone else’s feelings or experiences; it influences how people interact and relate. Socially assistive robots (SAR) are a promising means of conveying and eliciting empathy…

Modeling User Empathy Elicited by a Robot Storyteller

This paper presents the first approach to modeling user empathy elicited during interactions with a robotic agent, and contributes insights regarding modeling approaches and visual features for automated empathy detection.

Metaphor, Blending and Irony in Action: Creative Performance as Interpretation and Emotionally-Grounded Choice

Metaphor is a powerful tool in the performer’s tool box, not least because it can operate at several levels at once. As our linguistic metaphors deliver rhetorical flourishes, conceptual metaphors…

Duets Ex Machina: On The Performative Aspects of "Double Acts" in Computational Creativity

This work considers the pairing of two CC systems in the same thematic area, a speech-based story-teller and an embodied storyteller (using a NAO robot), working together to compensate for each other’s weaknesses while creating something of comedic value that neither has on its own.

Expressive Gestures Displayed by a Humanoid Robot during a Storytelling Application

Our purpose is to have the humanoid robot NAO read a story aloud through expressive verbal and nonverbal behaviors. These behaviors are linked to the story being told and to the emotions to be…

A Survey on Storytelling with Robots

This paper surveys work on storytelling with robots, covering robots as learning companions/pets, robot programming, interaction design techniques, technology introduction and pedagogy (robots as learning materials), and robots as teaching assistants.

Multimodal conversational interaction with a humanoid robot

This work implemented WikiTalk, an existing spoken dialogue system for open-domain conversations, on Nao, and greatly extended the robot's interaction capabilities by enabling Nao to talk about an unlimited range of topics.

A Ballad of the Mexicas: Automated Lyrical Narrative Writing

This paper introduces MABLE (MexicA’s BaLlad machinE), built on the plot-generation system MEXICA; MABLE is the first computational system to write narrative-based lyrics.

Design and implementation of an expressive gesture model for a humanoid robot

This article presents the ongoing work on a gesture model generating co-verbal gestures for the robot while taking into account the limits of movement space and joint speed.

MEXICA: A computer model of a cognitive account of creative writing

This article describes the engagement-reflection account of writing and the general characteristics of MEXICA, and reports an evaluation of the program.

Narrative Planning: Balancing Plot and Character

A novel refinement search planning algorithm - the Intent-based Partial Order Causal Link (IPOCL) planner - is described that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals.

Transformative Character Arcs For Use in Compelling Stories

This paper presents The Flux Capacitor, a generator of transformative character arcs that are both intuitive and dramatically interesting, and shows that such arcs can be computationally modeled as dynamic blends that unfold along a narrative trajectory.

Integration of gestures and speech in human-robot interaction

It is found that open arm gestures, head movements and gaze following could significantly enhance Nao's ability to be expressive and appear lively, and to engage human users in conversational interactions.

Grounding the Meaning of Words through Vision and Interactive Gameplay

I Spy is an effective approach for teaching robots to model new concepts using representations composed of visual attributes; a model evaluation showed that the system correctly understood the visual representations of its learned concepts with an average accuracy of 65%.