Viewpoint Invariant Action Recognition using RGB-D Videos

Abstract

In video-based action recognition, viewpoint variations pose a major challenge because the same action can appear different when observed from different views. We address this problem by exploiting the complementary RGB and Depth information available from RGB-D cameras. The proposed technique capitalizes on the spatiotemporal information in the two data streams to extract action features that are largely insensitive to viewpoint variations. We use the RGB data to compute dense trajectories, which are translated into viewpoint-insensitive deep features under a non-linear knowledge transfer model. Similarly, the Depth stream is used to extract CNN-based view-invariant features, over which a Fourier Temporal Pyramid is computed to incorporate the temporal information. The heterogeneous features from the two streams are combined and used as a dictionary to predict the labels of the test samples. To that end, we propose a sparse-dense collaborative representation classification scheme that strikes a balance between the discriminative abilities of the dense and the sparse representations of the samples over the extracted heterogeneous dictionary. To establish the effectiveness of our approach, we benchmark it on three standard datasets and compare its performance with twelve existing methods. Experiments show that the proposed approach achieves up to 7.7% improvement in accuracy over its nearest competitor.
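For intuition, the following minimal sketch shows one way such a sparse-dense balance can be realized: an elastic-net coding step (mixing l1 and l2 penalties) followed by the standard class-wise residual decision rule used by collaborative representation classifiers. This is an assumption-laden illustration rather than the paper's exact objective; the function name sdcrc_classify and the alpha/l1_ratio parameters are hypothetical.

import numpy as np
from sklearn.linear_model import ElasticNet

def sdcrc_classify(D, labels, y, alpha=0.01, l1_ratio=0.5):
    """Sparse-dense collaborative representation classification (sketch).

    D        : (d, n) dictionary whose columns are unit-norm training features
    labels   : (n,) class label of each dictionary atom
    y        : (d,) test feature vector
    alpha, l1_ratio : hypothetical elastic-net weights trading off the
                      sparse (l1) and dense (l2) terms of the code.
    """
    # Code the test sample over the whole dictionary with a mixed
    # l1 + l2 penalty, so the representation is neither fully sparse
    # nor fully dense.
    coder = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                       fit_intercept=False, max_iter=5000)
    coder.fit(D, y)
    code = coder.coef_

    # Assign the class whose atoms reconstruct y with the smallest
    # residual (the usual CRC/SRC decision rule).
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ code[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy usage with random unit-norm atoms from four classes.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 40))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(4), 10)
y = D[:, 3] + 0.05 * rng.standard_normal(64)   # noisy copy of a class-0 atom
print(sdcrc_classify(D, labels, y))            # expected output: 0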


Cite this paper

@article{Liu2017ViewpointIA,
  title   = {Viewpoint Invariant Action Recognition using RGB-D Videos},
  author  = {Jian Liu and Naveed Akhtar and Ajmal S. Mian},
  journal = {CoRR},
  year    = {2017},
  volume  = {abs/1709.05087}
}