Corpus ID: 221103959

Sample-Efficient Training of Robotic Guide Using Human Path Prediction Network.

Hee-Seung Moon, Jiwon Seo · arXiv: Robotics
Training a robot that engages with people is challenging, because involving people in a training process that requires numerous data samples is expensive. This paper proposes a human path prediction network (HPPN) and an evolution-strategy-based robot training method that uses virtual human movements generated by the HPPN, which compensates for this sample-inefficiency problem. We applied the proposed method to the training of a robotic guide for visually impaired people, which was designed… 
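The core idea — estimating policy improvements from rollouts against a learned human-path predictor instead of real human trials — can be sketched with a toy evolution strategy. Everything below (the linear stand-in for the HPPN, the one-parameter policy, the reward) is a hypothetical simplification for illustration, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def hppn_predict(robot_path):
    # Stand-in for the trained human path prediction network (HPPN):
    # a fixed map from the robot's path to a virtual human trajectory.
    # Toy dynamics, invented for this sketch.
    return 0.9 * robot_path + 0.1

def fitness(theta):
    # Score a one-parameter robot "policy" (a path offset) by how
    # closely the predicted virtual human tracks a desired path.
    robot_path = np.linspace(0.0, 1.0, 20) + theta[0]
    human_path = hppn_predict(robot_path)
    desired = np.linspace(0.0, 1.0, 20)
    return -np.mean((human_path - desired) ** 2)

# Evolution strategy: perturb the policy parameters, evaluate each
# perturbation on virtual (HPPN-generated) rollouts, and step along
# the reward-weighted average of the perturbations.
theta = np.zeros(1)
sigma, lr, n_samples = 0.1, 0.05, 32
for _ in range(200):
    eps = rng.standard_normal((n_samples, theta.size))
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    adv = rewards - rewards.mean()  # baseline-subtracted returns
    theta = theta + lr / (n_samples * sigma) * (adv[:, None] * eps).sum(axis=0)
```

Because every fitness evaluation queries the predictor rather than a person, the ES loop can consume thousands of rollouts at no human cost — the sample-efficiency argument the abstract makes.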
Fast User Adaptation for Human Motion Prediction in Physical Human–Robot Interaction
A model structure and a meta-learning algorithm specialized to enable fast user adaptation in predicting human movements in cooperative situations with robots are proposed; the approach outperforms existing meta-learning and non-meta-learning baselines in predicting the movements of unseen users.
Optimal Action-based or User Prediction-based Haptic Guidance: Can You Do Even Better?
A combined HG (CombHG) is proposed that achieves optimal performance by complementing each HG type and reducing the disagreement between the user intention and the HG, without reducing the objective and subjective scores.
Machine Learning-Based GPS Multipath Detection Method Using Dual Antennas
A machine learning model that classifies GPS signal reception conditions was trained on several GPS measurements selected as features; it achieved a classification accuracy of 82%–96% when the test data set was collected at the same locations as the training data set.
Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks
A deep learning method based on recurrent neural networks is applied to predicting the trajectory of a human who follows a haptic robotic guide without using sight, which is valuable for assistive robots that aid the visually impaired.
A sensorimotor reinforcement learning framework for physical Human-Robot Interaction
A data-efficient reinforcement learning framework is presented that enables a robot to learn how to collaborate with a human partner in an unsupervised manner, with optimal action selection under uncertainty and equal role sharing between the partners.
Observation of Human Response to a Robotic Guide Using a Variational Autoencoder
This paper proposes a robotic-guide system equipped with a haptic device that can deliver kinesthetic feedback to and receive kinesthetic reactions from a follower. In addition, a feature-extraction…
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates
It is demonstrated that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots.
Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation
It is shown that, through an asynchronous deep reinforcement learning method, a mapless motion planner can be trained end-to-end without any manually designed features or prior demonstrations.
Robot gains social intelligence through multimodal deep reinforcement learning
A Multimodal Deep Q-Network is proposed to enable a robot to learn human-like interaction skills through trial and error; a robot was able to learn basic interaction skills successfully after 14 days of interacting with people.
Sim-to-Real: Learning Agile Locomotion For Quadruped Robots
This system can learn quadruped locomotion from scratch using simple reward signals, and users can provide an open-loop reference to guide the learning process when more control over the learned gait is needed.
Domain randomization for transferring deep neural networks from simulation to the real world
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
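The technique described above — randomizing simulator properties every episode so that a controller trained only in simulation also works under varied real-world dynamics — can be illustrated with a toy controller. The specific parameters and dynamics below are invented for this sketch, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sim_params():
    # Draw a fresh simulator configuration each episode. Which
    # properties to randomize (friction, mass, sensor noise) and
    # their ranges are illustrative choices.
    return {
        "friction": rng.uniform(0.5, 1.5),
        "mass": rng.uniform(0.8, 1.2),
        "sensor_noise": rng.uniform(0.0, 0.05),
    }

def simulate(action, p):
    # Toy dynamics: achieved velocity depends on the randomized
    # parameters, plus randomized sensor noise.
    v = action * p["friction"] / p["mass"]
    return v + rng.normal(0.0, p["sensor_noise"])

# Fit a single scalar gain that reaches a target velocity of 1.0
# across many randomized simulators rather than one fixed simulator,
# so the learned gain does not overfit a single parameter setting.
gain, lr = 1.0, 0.1
for _ in range(2000):
    p = sample_sim_params()
    v = simulate(gain, p)
    # Stochastic gradient of (v - 1)^2 w.r.t. gain, approximating
    # dv/dgain by v/gain (the factor 2 is folded into lr).
    gain -= lr * (v - 1.0) * (v / gain)
```

Training against the distribution of simulators, rather than one nominal simulator, is exactly the transfer mechanism the abstract credits for sim-to-real success.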