Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer

Suneel Belkhale, Ethan K. Gordon, Yuxiao Chen, Siddhartha S. Srinivasa, Tapomayukh Bhattacharjee, and Dorsa Sadigh. 2022 International Conference on Robotics and Automation (ICRA).
Robot-assisted feeding in household environments is challenging because it requires robots to generate trajectories that effectively bring food items of varying shapes and sizes into the mouth while making sure the user is comfortable. Our key insight is that in order to solve this challenge, robots must balance the efficiency of feeding a food item with the comfort of each individual bite. We formalize comfort and efficiency as heuristics to incorporate in motion planning. We present an… 
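The efficiency/comfort balance described in the abstract can be illustrated with a toy heuristic cost over candidate trajectories; the path-length and smoothness terms below are illustrative stand-ins, not the paper's actual formulation:

```python
import numpy as np

def trajectory_cost(traj, w_comfort=0.5):
    """Score a candidate bite-transfer trajectory (hypothetical heuristics).

    traj: (T, 3) array of end-effector positions over time.
    Efficiency is approximated by path length; comfort by a
    smoothness penalty on accelerations.
    """
    # Efficiency heuristic: shorter paths bring food to the mouth faster.
    steps = np.diff(traj, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()

    # Comfort heuristic: penalize jerky motion (large accelerations).
    accel = np.diff(steps, axis=0)
    jerkiness = np.linalg.norm(accel, axis=1).sum()

    # Weighted trade-off between the two objectives.
    return (1 - w_comfort) * path_length + w_comfort * jerkiness

def pick_trajectory(candidates, w_comfort=0.5):
    """Choose the candidate trajectory with the lowest combined cost."""
    return min(candidates, key=lambda t: trajectory_cost(t, w_comfort))
```

Raising `w_comfort` shifts the planner toward smoother, gentler motions at the expense of longer paths, which is the trade-off the abstract formalizes.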

In-Mouth Robotic Bite Transfer with Visual and Haptic Sensing

An essential capability of this platform is demonstrated: safe, comfortable, and effective transfer of a bite-sized food item from a utensil directly to the inside of a person’s mouth.

Learning Visuo-Haptic Skewering Strategies for Robot-Assisted Feeding

This work proposes a zero-shot framework to sense visuo-haptic properties of a previously unseen item and reactively skewer it, all within a single interaction, and demonstrates that the multimodal policy outperforms baselines which do not exploit both visual and haptic cues or do not reactively plan.

Human-Robot Commensality: Bite Timing Prediction for Robot-Assisted Feeding in Groups

Data-driven models are presented to predict when a robot should feed during social dining scenarios, showing that bite timing strategies that account for the delicate balance of social cues can lead to seamless interactions during robot-assisted feeding in a group setting.

Transfer Depends on Acquisition: Analyzing Manipulation Strategies for Robotic Feeding

The results show that an intelligent, food-item-dependent skewering strategy improves the bite acquisition success rate, and that the choice of skewering location and fork orientation significantly affects the ease of bite transfer.

Learning User-Preferred Mappings for Intuitive Robot Control

The simulated and experimental results suggest that learning the mapping between inputs and robot actions improves objective and subjective performance when compared to manually defined alignments or learned alignments without intuitive priors.

Shared Autonomy with Learned Latent Actions

This work adopts learned latent actions for shared autonomy by proposing a new model structure that changes the meaning of the human's input based on the robot's confidence of the goal, and develops a training procedure to learn a controller that is able to move between goals even in the presence of shared autonomy.
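The idea of reinterpreting the same low-dimensional input based on the robot's goal confidence can be sketched as follows; the function, the max-probability confidence measure, and the linear blending are illustrative assumptions, not the paper's learned model:

```python
import numpy as np

def blended_action(z, pos, goals, belief):
    """Illustrative sketch of confidence-dependent latent-action decoding.

    z: scalar input from a low-DoF interface (e.g., one joystick axis).
    pos: (D,) current end-effector position.
    goals: (G, D) candidate goal positions; belief: (G,) goal probabilities.
    """
    belief = np.asarray(belief, dtype=float)
    belief = belief / belief.sum()        # normalize the goal belief
    goals = np.asarray(goals, dtype=float)
    confidence = belief.max()             # crude confidence in the top goal
    expected_goal = belief @ goals        # belief-weighted goal position
    # The same input z produces a stronger motion toward the expected goal
    # when the robot is confident, and a more conservative motion otherwise.
    return confidence * z * (expected_goal - np.asarray(pos, dtype=float))
```

With a sharply peaked belief the user's input drives the arm toward the likely goal; with a flat belief the identical input yields a smaller, more cautious motion.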

Controlling Assistive Robots with Learned Latent Actions

A teleoperation algorithm for assistive robots that learns latent actions from task demonstrations is designed; the controllability, consistency, and scaling properties that user-friendly latent actions should have are formulated; and how well different low-dimensional embeddings capture these properties is evaluated.

Adaptive Robot-Assisted Feeding: An Online Learning Framework for Acquiring Previously Unseen Food Items

This work demonstrates empirically on a robot-assisted feeding system that, even starting with a model trained on thousands of skewering attempts on dissimilar, previously seen food items, ε-greedy and LinUCB algorithms can quickly converge to the most successful manipulation strategy.
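As a rough illustration of the ε-greedy component (the class, strategy names, and binary reward model here are hypothetical, not the authors' system), an online learner over discrete skewering strategies might look like:

```python
import random
from collections import defaultdict

class EpsilonGreedyAcquisition:
    """Minimal ε-greedy bandit over discrete skewering strategies.

    Each arm is a manipulation strategy; the reward is 1 for a
    successful acquisition and 0 otherwise. The running means could
    be warm-started from a model pretrained on seen food items.
    """
    def __init__(self, strategies, epsilon=0.1):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.counts = defaultdict(int)
        self.values = defaultdict(float)  # running mean reward per arm

    def select(self):
        # Explore with probability ε, otherwise exploit the best arm so far.
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.values[s])

    def update(self, strategy, reward):
        # Incremental mean update after observing acquisition success/failure.
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n
```

LinUCB replaces the per-arm running mean with a linear reward model over context features (e.g., visual features of the food item), which is what lets the learner generalize to unseen items.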

Robot-Assisted Feeding: Generalizing Skewering Strategies across Food Items on a Realistic Plate

A bite acquisition framework is presented that takes the image of a full plate as input, uses RetinaNet to create bounding boxes around food items in the image, and applies the skewering-position-action network (SPANet) to choose a target food item and a corresponding action so that the bite acquisition success rate is maximized.

Enabling Robot Teammates to Learn Latent States of Human Collaborators

The challenge of modeling humans when designing robot collaborators is described, algorithmic solutions to this problem are summarized, and a human-robot collaboration scenario designed to evaluate the approach is presented.

Towards Robotic Feeding: Role of Haptics in Fork-Based Food Manipulation

A set of classifiers for compliance-based food categorization from haptic and motion signals is proposed and compared with fixed position-control policies via a robot to highlight the importance of adapting the policy to the compliance of a food item.

Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks

This work uses self-supervision to learn a compact and multimodal representation of sensory inputs, which can then be used to improve the sample efficiency of the policy learning of deep reinforcement learning algorithms.

Autonomy infused teleoperation with application to brain computer interface controlled manipulation

The results indicate that shared assistance mitigates perceived user difficulty in using a seven-degree of freedom robotic arm as a prosthetic and enables successful performance on previously infeasible tasks.