Geppetto: Enabling Semantic Design of Expressive Robot Behaviors

@inproceedings{Desai2019GeppettoES,
  title={Geppetto: Enabling Semantic Design of Expressive Robot Behaviors},
  author={Ruta Desai and Fraser Anderson and Justin Matejka and Stelian Coros and James McCann and George W. Fitzmaurice and Tovi Grossman},
  booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
  year={2019}
}
Expressive robots are useful in many contexts, from industrial to entertainment applications.
Key Result: A user study finds the system helps users develop desirable robot behaviors more quickly than manual parameter editing.
Teaching Robots to Span the Space of Functional Expressive Motion
TLDR
A method that interactively learns to map trajectories to a latent space of Valence-Arousal-Dominance (VAD) and can respond emotively to user-generated natural language by mapping it to a target VAD.
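To make the mapping above concrete, here is a minimal, hypothetical sketch of the retrieval step: given a target Valence-Arousal-Dominance (VAD) point, which a separate language model would produce from user text, pick the stored trajectory whose learned VAD embedding is closest. The class name, the nearest-neighbor index, and the example VAD values are illustrative placeholders, not the paper's model.

# Hypothetical sketch: retrieve an expressive trajectory whose learned
# VAD embedding is closest to a target VAD derived from user language.
import numpy as np

class TrajectoryVADIndex:
    def __init__(self, trajectories, vad_embeddings):
        # trajectories: list of motion trajectories (any representation)
        # vad_embeddings: (N, 3) array of learned VAD coordinates, one per trajectory
        self.trajectories = trajectories
        self.vad = np.asarray(vad_embeddings, dtype=float)

    def closest_to(self, target_vad):
        # Return the stored trajectory whose VAD embedding is nearest
        # (Euclidean distance) to the requested target VAD point.
        target = np.asarray(target_vad, dtype=float)
        idx = int(np.argmin(np.linalg.norm(self.vad - target, axis=1)))
        return self.trajectories[idx]

# Usage: "excited" might map (via a separate language model) to high
# valence/arousal and moderate dominance, e.g. (0.8, 0.9, 0.5).
index = TrajectoryVADIndex(
    ["wave_fast", "slump", "nod"],
    [(0.8, 0.9, 0.5), (-0.7, -0.6, -0.4), (0.2, 0.1, 0.0)],
)
print(index.closest_to((0.7, 0.8, 0.4)))  # -> "wave_fast"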
What is it like to be a bot? Variable perspective embodied telepresence for crowdsourcing robot movements
TLDR
This work proposes an embodied telepresence system for remotely crowdsourcing emotive robot movement samples that can serve as ML training data and finds that users strongly preferred the third-person perspective and that the ML-generated movements are largely comparable to the user-crafted movements.
Designing Deep Reinforcement Learning for Human Parameter Exploration
TLDR
This article proposes to investigate artificial agents that use deep reinforcement learning to explore parameter spaces in partnership with users for sound design, and describes a series of user-centred studies to probe the creative benefits of these agents and to adapt their design to exploration.
MoveAE: Modifying Affective Robot Movements Using Classifying Variational Autoencoders
TLDR
It is shown that editing in the latent space can modify the emotive quality of the movements while preserving recognizability and legibility in many cases, which supports neural networks as viable tools for creating and modifying expressive robot behaviors.
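A hedged illustration of the latent-space editing idea described above: encode a movement, shift its latent code along an assumed emotion direction, and decode. The encoder, decoder, and direction here are toy stand-ins, not the paper's classifying variational autoencoder.

# Hypothetical sketch of latent-space editing: encode a movement, shift its
# latent code along an assumed "emotion" direction, and decode the result.
import numpy as np

def edit_movement(encode, decode, movement, emotion_direction, strength=1.0):
    """Shift the movement's latent code toward a target emotive quality."""
    z = encode(movement)                          # latent code of the original movement
    z_edited = z + strength * emotion_direction   # move along, e.g., a "happier" axis
    return decode(z_edited)                       # decoded, edited movement

# Toy stand-ins: a 2-D "latent space" that is just the movement itself.
encode = lambda m: np.asarray(m, dtype=float)
decode = lambda z: z
happier = np.array([0.0, 1.0])   # assumed direction separating sad/happy codes
print(edit_movement(encode, decode, [0.3, -0.2], happier, strength=0.5))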
Expressivity in Interaction: a Framework for Design
TLDR
An integrated framework on how to design for expressivity in interaction is contributed, including design considerations such as freedom of interaction, action-perception loops, multimodality, subtlety, ambiguity, skill development, and temporal form.
Romeo: A Design Tool for Embedding Transformable Parts in 3D Models to Robotically Augment Default Functionalities
TLDR
Romeo is a design tool for creating transformables to robotically augment objects' default functionalities and allows users to transform an object into a robotic arm by expressing at a high level what type of task is expected.
Design Adjectives: A Framework for Interactive Model-Guided Exploration of Parameterized Design Spaces
TLDR
This paper provides a domain-agnostic implementation of the design adjectives framework based on Gaussian process regression, which is able to rapidly learn user intent from only a few examples, making the system suitable for iterative design workflows.
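The sentence above names the core mechanism: a Gaussian process regressor fit to a few user-rated examples, then used to score and rank candidate designs. The snippet below is a generic sketch of that loop under assumed data and an RBF kernel (scikit-learn), not the paper's implementation.

# Generic sketch: fit a Gaussian process to a handful of user ratings of
# design parameter vectors (how well each matches an adjective such as
# "playful"), then rank new candidates by predicted score.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A few rated examples: design parameters -> user rating for "playful" (0..1)
X_rated = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
y_rated = np.array([0.9, 0.1, 0.6])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_rated, y_rated)

# Score unseen candidate designs and present the highest-scoring ones first.
candidates = np.random.default_rng(0).random((50, 2))
scores, stds = gp.predict(candidates, return_std=True)
best = candidates[np.argsort(-scores)[:3]]
print(best)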
Scene-Aware Behavior Synthesis for Virtual Pets in Mixed Reality
TLDR
This work proposes a novel approach to synthesize virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality.
TaleBrush: Sketching Stories with Generative Pretrained Language Models
TLDR
TaleBrush is introduced, a generative story ideation tool that uses line sketching interactions with a GPT-based language model for control and sensemaking of a protagonist’s fortune in co-created stories, along with a reflection on how sketching interactions can facilitate the iterative human-AI co-creation process.
Sequential gallery for interactive visual design optimization
TLDR
Results suggest that novices can effectively complete search tasks with Sequential Gallery in a photo-enhancement scenario, and an experiment with synthetic functions shows that the sequential plane search can find satisfactory solutions in fewer iterations than baselines.

References

Showing 1-10 of 81 references
Expressive Robot Motion Timing
TLDR
A strong correlation between the models and real user data is found, suggesting that robots can leverage these models to autonomously optimize the timing of their motion to be expressive.
Laban head-motions convey robot state: A call for robot body language
TLDR
The Laban Efforts, a system from dance and acting training in use for over 50 years, are adapted, and robot motion patterns are found to convey complex expressions to people.
Expressing thought: Improving robot readability with animation principles
TLDR
Support is found for the hypothesis that perceptions of robots are influenced by robots showing forethought, by the task outcome (success or failure), and by goal-oriented reactions to those task outcomes.
Exploring the affect of abstract motion in social human-robot interaction
TLDR
The design approach, the creation of an abstract robotic motion platform that is nearly formless and affordance-less, and the evaluation of the affect that abstract motion had on more than thirty participants who interacted with the authors' robotic platform in a series of studies are discussed.
Anthropomorphism of Artificial Agents: A Comparative Survey of Expressive Design and Motion of Virtual Characters and Social Robots
TLDR
This paper presents a comparative survey of design choices and motion generation techniques used in the computer animation community and in the robotics community when creating social agents and addresses the central question of anthropomorphism of artificial agents.
Deep Reinforcement Learning from Human Preferences
TLDR
This work explores goals defined in terms of (non-expert) human preferences between pairs of trajectory segments in order to effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion.
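The reward-learning step described above can be summarized with a Bradley-Terry style preference loss: the segment the human preferred should receive higher total predicted reward. The linear reward model and random data below are illustrative assumptions, not the paper's network or environments.

# Compact sketch: fit a reward model so the human-preferred segment gets
# higher total predicted reward, via a Bradley-Terry style cross-entropy loss.
import numpy as np

def preference_loss(w, segment_a, segment_b, human_prefers_a):
    """Cross-entropy loss on P(a preferred over b) under a linear reward model."""
    r_a = np.sum(segment_a @ w)             # total predicted reward of segment a
    r_b = np.sum(segment_b @ w)             # total predicted reward of segment b
    p_a = 1.0 / (1.0 + np.exp(r_b - r_a))   # Bradley-Terry preference probability
    return -np.log(p_a) if human_prefers_a else -np.log(1.0 - p_a)

rng = np.random.default_rng(1)
w = rng.normal(size=4)                                   # reward weights over state features
seg_a, seg_b = rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
print(preference_loss(w, seg_a, seg_b, human_prefers_a=True))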
Steering Behaviors For Autonomous Characters
TLDR
This paper presents solutions for one requirement of autonomous characters in animation and games: the ability to navigate around their world in a life-like and improvisational manner, achieved by dividing motion behavior into three levels.
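As an illustration of this family of techniques, here is the classic "seek" behavior from the steering level, written as a small standalone function; the parameter values are generic defaults, not taken from the paper.

# Classic "seek" steering behavior: steer toward a target by comparing the
# desired velocity (full speed toward the target) with the current velocity.
import numpy as np

def seek(position, velocity, target, max_speed=1.0, max_force=0.1):
    desired = target - position
    distance = np.linalg.norm(desired)
    if distance > 0:
        desired = desired / distance * max_speed   # desired velocity toward the target
    steering = desired - velocity                  # steering force = desired minus current velocity
    norm = np.linalg.norm(steering)
    if norm > max_force:                           # clamp to the character's turning ability
        steering = steering / norm * max_force
    return steering

print(seek(np.array([0.0, 0.0]), np.array([0.2, 0.0]), np.array([5.0, 5.0])))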
Communication of Intent in Assistive Free Flyers
TLDR
A formalism for representing AFF flight paths as a series of motion primitives is proposed and two studies examining the effects of modifying the trajectories and velocities of these flight primitives based on natural motion principles found that modified flight motions might allow AFFs to more effectively communicate intent.
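One plausible, purely illustrative way to encode the formalism described above: a flight path as an ordered list of motion primitives whose velocities can be modified to signal intent. The class and field names below are assumptions, not the paper's notation.

# Illustrative encoding of a flight path as a sequence of motion primitives
# with modifiable speed; slowing an approach is one way to make intent legible.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FlightPrimitive:
    kind: str                               # e.g. "approach", "hover", "depart"
    waypoint: Tuple[float, float, float]    # target position of this primitive
    speed: float                            # velocity modifier used to communicate intent

def flight_path(intent: str) -> List[FlightPrimitive]:
    # Example: a more "legible" approach slows down near the person.
    slow = 0.3 if intent == "legible" else 1.0
    return [
        FlightPrimitive("approach", (2.0, 0.0, 1.5), speed=1.0 * slow),
        FlightPrimitive("hover", (1.0, 0.0, 1.2), speed=0.0),
        FlightPrimitive("depart", (4.0, 2.0, 2.0), speed=1.0),
    ]

print(flight_path("legible"))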
Robot learning through social media crowdsourcing
Victor Emeli, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012
TLDR
This paper investigates the use of social media crowdsourcing to allow a robot to access the vast information gathering resources available on Twitter.
Mirror Puppeteering: Animating Toy Robots in Front of a Webcam
TLDR
In a user study, participants found the Mirror Puppeteering method more enjoyable, usable, easy to learn, and successful than traditional animation methods.