MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze

@article{Kratzer2021MoGazeAD,
  title={MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze},
  author={Philipp Kratzer and Simon Bihlmaier and Niteesh Balachandra Midlagajni and Rohit Prakash and Marc Toussaint and Jim Mainprice},
  journal={IEEE Robotics and Automation Letters},
  year={2021},
  volume={6},
  pages={367-373}
}
Abstract

As robots become more present in open human environments, it will become crucial for robotic systems to understand and predict human motion. Such capabilities depend heavily on the quality and availability of motion capture data. However, existing datasets of full-body motion rarely include 1) long sequences of manipulation tasks, 2) the 3D model of the workspace geometry, and 3) eye-gaze, which are all important when a robot needs to predict the movements of humans in close proximity. Hence…
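The abstract describes three time-aligned streams: full-body motion, a 3D model of the workspace, and eye-gaze. As a minimal sketch of how one might consume such a recording, the Python snippet below loads pose and gaze arrays from an HDF5 file with h5py and computes a simple per-frame gaze statistic. The file name and dataset keys ("joint_positions", "gaze_direction") are illustrative placeholders, not the dataset's published schema; consult the MoGaze documentation for the actual layout.

# Minimal sketch: loading a MoGaze-style recording with h5py.
# File name and dataset keys are hypothetical placeholders, not the
# actual MoGaze schema.
import h5py
import numpy as np

with h5py.File("recording.h5", "r") as f:
    poses = f["joint_positions"][:]   # assumed shape: (n_frames, n_joints * 3)
    gaze = f["gaze_direction"][:]     # assumed shape: (n_frames, 3) unit vectors

assert len(poses) == len(gaze), "streams are assumed frame-aligned"

# Example use: per-frame angle between gaze and a fixed forward axis.
forward = np.array([1.0, 0.0, 0.0])
cosines = gaze @ forward / np.linalg.norm(gaze, axis=1)
angles_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
print(f"mean gaze deviation: {angles_deg.mean():.1f} deg")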
Citations

Ensemble of LSTMs and feature selection for human action prediction
TLDR
An ensemble of long short-term memory (LSTM) networks is proposed for human action prediction; the results suggest that the LSTM model only slightly outperforms the gaze baseline in single-object picking accuracy but achieves clearly better accuracy in macro-object prediction. (A minimal illustrative sketch of an LSTM action classifier follows this list of citing papers.)
FACT: A Full-body Ad-hoc Collaboration Testbed for Modeling Complex Teamwork
TLDR
FACT is intended as an initial resource supporting a more holistic investigation of human-robot collaboration: an openly accessible testbed through which researchers can obtain an expansive view of the individual and team-based behaviors involved in complex, co-located teamwork.
Motion Planning in Dynamic Environments Using Context-Aware Human Trajectory Prediction
TLDR
GPU-accelerated predicted composite distance fields are demonstrated to significantly reduce computation time compared to calculating distance fields from scratch; the method is integrated with a complete motion-planning and perception framework that accounts for the predicted motion of humans in dynamic environments.
Guest Editorial: Introduction to the Special Issue on Long-Term Human Motion Prediction
TLDR
The recent trend shows that the community aims to develop end-to-end approaches that use prediction as the glue between perception and planning/control units, and that novel deep-learning architectures allow better interleaving of these units.
Hierarchical Human-Motion Prediction and Logic-Geometric Programming for Minimal Interference Human-Robot Tasks
TLDR
This paper devises a hierarchical motion-prediction approach by combining Inverse Reinforcement Learning with short-term motion prediction using a Recurrent Neural Network, and proposes a dynamic version of the TAMP algorithm Logic-Geometric Programming (LGP).
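
Several of the citing papers above use recurrent networks for action prediction. As a concrete illustration of the kind of model named in the "Ensemble of LSTMs" entry, here is a hedged sketch of an LSTM sequence classifier in PyTorch with a naive logit-averaging ensemble. All sizes, feature counts, and the ensembling scheme are assumptions for illustration, not the paper's actual configuration.

# Minimal sketch of an LSTM action classifier with a simple ensemble.
# Architecture details (sizes, feature counts) are illustrative
# assumptions, not the cited paper's configuration.
import torch
import torch.nn as nn

class ActionLSTM(nn.Module):
    def __init__(self, n_features=9, hidden=64, n_actions=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)         # h: (num_layers, batch, hidden)
        return self.head(h[-1])          # logits: (batch, n_actions)

# "Ensemble" in the simplest sense: average the logits of several
# independently trained models.
models = [ActionLSTM() for _ in range(3)]
x = torch.randn(2, 30, 9)                # 2 sequences, 30 frames, 9 features
logits = torch.stack([m(x) for m in models]).mean(dim=0)
pred = logits.argmax(dim=1)              # predicted action index per sequence
print(pred.shape)                        # torch.Size([2])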

References

Showing 1–10 of 27 references
MoVi: A large multi-purpose human motion and video dataset
TLDR
This multimodal dataset contains 9 hours of optical motion-capture data, 17 hours of video from 4 different points of view recorded by stationary and hand-held cameras, and 6.6 hours of inertial measurement unit (IMU) data, recorded from 60 female and 30 male actors performing a collection of 21 everyday actions and sports movements.
Anticipating Human Intention for Full-Body Motion Prediction in Object Grasping and Placing Tasks
TLDR
This work proposes dedicated function networks for graspability and placeability affordances, uses a dedicated RNN for short-term motion prediction, and shows by comparison to ground-truth data that it achieves performance for full-body motion prediction similar to that obtained using oracle grasp and place locations.
The KIT whole-body human motion database
We present a large-scale whole-body human motion database consisting of captured raw motion data as well as the corresponding post-processed motions. This database serves as a key element for a wide…
Unifying Representations and Large-Scale Whole-Body Motion Databases for Studying Human Motion
TLDR
A large-scale database of whole-body human motion is presented, together with methods and tools that allow a unifying representation of captured human motion, efficient search in the database, and the transfer of subject-specific motions to robots with different embodiments.
AMASS: Archive of Motion Capture As Surface Shapes
TLDR
AMASS, a large and varied database of human motion, is introduced; it unifies 15 different optical marker-based mocap datasets by representing them within a common framework and parameterization, making them readily useful for animation, visualization, and generating training data for deep learning.
Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments
We introduce a new dataset, Human3.6M, of 3.6 million accurate 3D human poses, acquired by recording the performance of 5 female and 6 male subjects under 4 different viewpoints, for training…
Action Anticipation: Reading the Intentions of Humans and Robots
TLDR
The results show that it is possible to model (nonverbal) signals exchanged by humans during interaction, and show how to incorporate such a mechanism into robotic systems with the twin goals of “reading” human action intentions and acting in a way that is legible to humans.
Recovering Accurate 3D Human Pose in the Wild Using IMUs and a Moving Camera
TLDR
This work proposes a method that combines a single hand-held camera with a set of Inertial Measurement Units (IMUs) attached to the body limbs to estimate accurate 3D poses in the wild; it obtains an accuracy of 26 mm, accurate enough to serve as a benchmark for image-based 3D pose estimation in the wild.
JRDB: A Dataset and Benchmark for Visual Perception for Navigation in Human Environments
We present JRDB, a novel dataset collected from our social mobile manipulator JackRabbot. The dataset includes 64 minutes of multimodal sensor data including stereo cylindrical 360° RGB video…
Anticipating Human Activities Using Object Affordances for Reactive Robotic Response
TLDR
This work represents each possible future using an anticipatory temporal conditional random field (ATCRF) that models rich spatial-temporal relations through object affordances; each ATCRF is treated as a particle, and the distribution over potential futures is represented by a set of particles.