Deep learning for human activity recognition: A resource efficient implementation on low-power devices

Abstract

Human activity recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than for processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate and real-time classification on low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and sensor acquisition rate, we design a feature generation process that is applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. The accuracy of the proposed approach is evaluated against current state-of-the-art methods using both laboratory and real-world activity datasets. A systematic analysis of the feature generation parameters and a comparison of activity recognition computation times on mobile devices and sensor nodes are also presented.
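The abstract describes feature generation as sums of temporal convolutions applied to the spectral domain of the inertial data. The sketch below illustrates one plausible reading of that pipeline in NumPy; the magnitude pre-processing step, the filter shapes, and the use of random stand-in kernels are assumptions for illustration, not the paper's actual implementation or learned filters.

```python
import numpy as np

def spectral_features(window, filters):
    """Hypothetical sketch of spectral-domain feature generation.

    window  : (n_samples, 3) tri-axial accelerometer window
    filters : (n_filters, k) stand-in 1-D convolution kernels
              (in the paper these would be learned)
    """
    # The per-sample signal magnitude is invariant to sensor orientation.
    magnitude = np.linalg.norm(window, axis=1)
    # Move to the spectral domain via the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(magnitude))
    # Sum of temporal convolutions of the transformed input.
    responses = [np.convolve(spectrum, f, mode="valid") for f in filters]
    return np.sum(responses, axis=0)

# Example: a 4 s window at 32 Hz with 8 random stand-in filters of length 5.
rng = np.random.default_rng(0)
feats = spectral_features(rng.standard_normal((128, 3)),
                          rng.standard_normal((8, 5)))
```

Operating on the spectrum rather than the raw samples is what gives some robustness to acquisition-rate changes: the dominant frequency content of an activity shifts far less than its raw time-domain samples when the sampling rate varies.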

DOI: 10.1109/BSN.2016.7516235


Cite this paper

@inproceedings{Rav2016DeepLF,
  title     = {Deep learning for human activity recognition: A resource efficient implementation on low-power devices},
  author    = {Daniele Rav{\`i} and Charence Wong and Benny P. L. Lo and Guang-Zhong Yang},
  booktitle = {2016 IEEE 13th International Conference on Wearable and Implantable Body Sensor Networks (BSN)},
  year      = {2016},
  pages     = {71-76}
}