A Dataset of Human Manipulation Actions

Abstract

We present a dataset of human manipulation activities that includes both visual data (RGB-D video and six-degree-of-freedom (6-DOF) object pose estimates) and acoustic data. Our vision is that robots need to merge information from multiple perceptual modalities to operate robustly and autonomously in unstructured environments.
