Human intent forecasting using intrinsic kinematic constraints
The performance of human-robot collaboration tasks can be improved by incorporating predictions of the human collaborator's movement intentions. These predictions allow a collaborative robot to both provide appropriate assistance and plan its own motion so it does not interfere with the human. In the specific case of human reach intent prediction, prior work has divided the task into two subproblems: recognition of human activities and prediction of reach intent. In this work, we propose a joint model that simultaneously recognizes human activities and predicts reach intent from skeletal pose. Since future reach intent is tightly linked to the action a person is performing at present, we hypothesize that this joint model will outperform past approaches on both the recognition and prediction tasks. In addition, our approach incorporates a simple human kinematic model, which allows us to generate features that compactly capture the reachability of objects in the environment and the motion cost of reaching them. Experiments on the CAD-120 benchmark dataset show that both the joint modeling approach and the human kinematic features improve F1 scores over the previous state of the art.
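To make the kinematic features concrete, the following is a minimal sketch (not from the paper) of how object reachability and motion cost might be derived from a simple kinematic model. All names, the fixed-arm-length reachability test, and the distance-based cost proxy are assumptions for illustration; the paper's actual feature definitions may differ.

```python
import numpy as np

def kinematic_features(shoulder, hand, objects, arm_length):
    """Hypothetical feature extractor based on a simple kinematic model.

    For each object position, emit:
      - a binary reachability flag: is the object within `arm_length`
        of the shoulder joint?
      - a motion-cost proxy: current hand-to-object distance,
        normalized by arm length.
    """
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    features = []
    for obj in objects:
        obj = np.asarray(obj, dtype=float)
        # Reachable if within a sphere of radius arm_length around the shoulder.
        reachable = float(np.linalg.norm(obj - shoulder) <= arm_length)
        # Normalized displacement the hand would need to cover.
        cost = np.linalg.norm(obj - hand) / arm_length
        features.append((reachable, cost))
    return features
```

A model consuming these features gets a compact, per-object encoding of "can this object be reached, and how much motion would it take", independent of the absolute scene layout.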