Self-supervised Learning for Human Activity Recognition Using 700,000 Person-days of Wearable Data

Han Yuan, S. Chan, Andrew P. Creagh, C. Tong, David A. Clifton and Aiden Doherty
Advances in deep learning for human activity recognition have been relatively limited due to the lack of large labelled datasets. In this study, we leverage self-supervised learning techniques on the UK Biobank activity tracker dataset, the largest of its kind to date, containing more than 700,000 person-days of unlabelled wearable sensor data. Our resulting activity recognition model consistently outperformed strong baselines across seven benchmark datasets, with an F1 relative improvement of 2…


SelfHAR: Improving Human Activity Recognition through Self-training with Unlabeled Data
SelfHAR is a semi-supervised model that learns to leverage unlabeled mobile sensing datasets to complement small labeled datasets. It achieves state-of-the-art performance across a diverse set of HAR datasets and sheds light on how pre-training tasks may affect downstream performance.
Self-supervised Wearable-based Activity Recognition by Learning to Forecast Motion
The results show that the self-supervised approach outperforms existing supervised and self-supervised methods, setting new state-of-the-art results.
Multi-task Self-Supervised Learning for Human Activity Detection
A novel self-supervised technique is proposed for feature learning from sensory data that does not require access to any form of semantic labels (i.e., activity classes). It achieves performance superior to or comparable with fully-supervised networks trained directly on activity labels, and performs significantly better than unsupervised learning with autoencoders.
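The core idea of transformation-based self-supervision is to apply known signal transformations and train a network to recognise which one was applied, so labels come for free. A minimal sketch of that data preparation step, with a hypothetical set of transforms (the actual transformations used in the paper may differ):

```python
import numpy as np

def make_transformation_pretext(window, seed=0):
    """Generate (signal, pretext_label) pairs for transformation
    discrimination. `window` is a (timesteps, channels) sensor array.
    The four transforms below are illustrative stand-ins."""
    rng = np.random.default_rng(seed)
    transforms = {
        0: lambda x: x,                                          # identity
        1: lambda x: x + rng.normal(scale=0.05, size=x.shape),   # jitter
        2: lambda x: x * rng.uniform(0.7, 1.3),                  # scaling
        3: lambda x: x[::-1],                                    # time reversal
    }
    # Each transformed copy is labelled with the id of its transform.
    return [(fn(window), label) for label, fn in transforms.items()]

# Toy usage: one window of 20 timesteps of tri-axial accelerometer data.
samples = make_transformation_pretext(np.zeros((20, 3)))
```

A classifier trained on these pretext labels learns signal features without ever seeing activity annotations.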
Masked reconstruction based self-supervision for human activity recognition
Masked reconstruction is introduced as a viable self-supervised pre-training objective for human activity recognition and its effectiveness in comparison to state-of-the-art unsupervised learning techniques is explored.
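Masked reconstruction hides random timesteps of a sensor window and asks the model to fill them back in. A minimal sketch of the batch preparation under that setup (function name and mask ratio are assumptions, not taken from the paper):

```python
import numpy as np

def make_masked_reconstruction_batch(windows, mask_ratio=0.15, seed=0):
    """Prepare a masked-reconstruction pretext batch.

    windows: array of shape (batch, timesteps, channels) of raw sensor data.
    Returns (inputs, targets, mask) where inputs have masked timesteps
    zeroed out, targets are the original values, and mask marks which
    timesteps the model must reconstruct.
    """
    rng = np.random.default_rng(seed)
    batch, timesteps, _channels = windows.shape
    # Sample a boolean mask over timesteps, independently per window.
    mask = rng.random((batch, timesteps)) < mask_ratio
    inputs = windows.copy()
    inputs[mask] = 0.0  # zero out masked timesteps across all channels
    return inputs, windows, mask

# Toy usage: 4 windows of 100 timesteps of tri-axial accelerometer data.
windows = np.random.default_rng(1).normal(size=(4, 100, 3))
inputs, targets, mask = make_masked_reconstruction_batch(windows)
```

The pre-training loss would then be a reconstruction error (e.g. MSE) computed only at the masked positions.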
Deep, Convolutional, and Recurrent Models for Human Activity Recognition Using Wearables
This paper rigorously explores deep, convolutional, and recurrent approaches across three representative datasets of movement data captured with wearable sensors. It describes how to train recurrent models in this setting, introduces a novel regularisation approach, and shows how these models outperform the state of the art on a large benchmark dataset.
Exploring Contrastive Learning in Human Activity Recognition for Healthcare
Preliminary results indicated an improvement over supervised and unsupervised learning methods when using fine-tuning and random rotation for augmentation; however, future work should explore under which conditions SimCLR is beneficial for HAR systems and other healthcare-related applications.
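Random rotation is a natural augmentation for inertial sensors because device orientation on the body is arbitrary. A minimal sketch of a 3-D rotation augmentation of the kind used in SimCLR-style contrastive HAR pipelines (the function and its parameters are illustrative):

```python
import numpy as np

def random_rotation_augment(window, seed=0):
    """Rotate a tri-axial sensor window by a random 3-D rotation.

    window: (timesteps, 3) array. Returns the rotated window, which
    preserves the per-timestep magnitude of the signal.
    """
    rng = np.random.default_rng(seed)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)          # random unit rotation axis
    angle = rng.uniform(0, 2 * np.pi)     # random rotation angle
    # Rodrigues' rotation formula for the rotation matrix.
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    return window @ R.T

# Toy usage: augment a 50-timestep accelerometer window.
window = np.random.default_rng(2).normal(size=(50, 3))
augmented = random_rotation_augment(window)
```

In a contrastive setup, two independently rotated copies of the same window form a positive pair.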
Contrastive Predictive Coding for Human Activity Recognition
This work introduces the Contrastive Predictive Coding (CPC) framework to human activity recognition, which captures the temporal structure of sensor data streams and leads to significantly improved recognition performance when only small amounts of labeled training data are available.
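CPC trains a model to distinguish the true future of a context window from distractor samples. A minimal sketch of how one such training example might be assembled from a sensor window (names, lengths, and the negative-sampling scheme are assumptions for illustration, not the paper's exact procedure):

```python
import numpy as np

def make_cpc_example(window, context_len=50, horizon=5, num_negatives=4, seed=0):
    """Build one CPC-style example from a (timesteps, channels) window.

    The context is the first `context_len` steps; the positive is the
    timestep `horizon` steps beyond the context; negatives are randomly
    drawn timesteps from elsewhere in the window.
    """
    rng = np.random.default_rng(seed)
    target_idx = context_len + horizon - 1
    context = window[:context_len]
    positive = window[target_idx]
    # Negatives: any timestep except the true target.
    candidates = [t for t in range(len(window)) if t != target_idx]
    neg_idx = rng.choice(candidates, size=num_negatives, replace=False)
    negatives = window[neg_idx]
    return context, positive, negatives

# Toy usage: a 100-timestep tri-axial window.
window = np.random.default_rng(3).normal(size=(100, 3))
context, positive, negatives = make_cpc_example(window)
```

An encoder and autoregressive context network would then be trained so the context embedding scores the positive above the negatives (the InfoNCE objective).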
Interpretable deep learning for the remote characterisation of ambulation in multiple sclerosis using smartphones
Ensuing work visualised DCNN decisions with relevance heatmaps computed using Layer-Wise Relevance Propagation (LRP), suggesting that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics distinguishing MS-related disability from healthy participants.