Leveraging Activity Recognition to Enable Protective Behavior Detection in Continuous Data
Chongyang Wang, Yuan Gao, Akhil Mathur, Amanda C. C. Williams, Nicholas D. Lane, Nadia Bianchi-Berthouze
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1–27
Protective behavior exhibited by people with chronic pain (CP) during physical activities is highly informative for understanding their physical and emotional states. Existing automatic protective behavior detection (PBD) methods rely on pre-segmentation of activities predefined by users. In real life, however, people perform activities casually. Therefore, where those activities present difficulties for people with CP, technology-enabled support should be delivered continuously and automatically… 
AgreementLearning: An End-to-End Framework for Learning with Multiple Annotators without Groundtruth
A novel agreement learning framework is proposed to tackle the challenge of learning from multiple annotators without objective ground truth; experiments on two medical datasets demonstrate improved agreement levels with annotators.
Bridging the gap between emotion and joint action
Gravity Control-Based Data Augmentation Technique for Improving VR User Activity Recognition
A data augmentation technique named gravity control-based augmentation (GCDA) alleviates the sparse-data problem by generating new training data from existing data, exploiting gravity as a directional feature and controlling it to augment training datasets.
The AffectMove 2021 Challenge - Affect Recognition from Naturalistic Movement Data
The first Affective Movement Recognition challenge brings together datasets of affective bodily behaviour across different real-life applications to foster work in this area; participants were challenged to take advantage of data across datasets to improve performance and to test the generalization of their approaches across applications.


Chronic Pain Protective Behavior Detection with Deep Learning
This article investigates the use of deep learning for PBD across activity types, using wearable motion capture and surface electromyography data collected from healthy participants and people with chronic pain.
Recurrent network based automatic detection of chronic pain protective behavior using MoCap and sEMG data
This paper investigates automatic detection of protective behavior (movement behavior due to pain-related fear or pain) based on wearable motion capture and electromyography sensor data, comparing two recurrent networks, referred to as stacked-LSTM and dual-stream LSTM, against related deep learning (DL) architectures.
Learning Bodily and Temporal Attention in Protective Movement Behavior Detection
Using the EmoPain MoCap dataset, this work investigates how attention-based DL architectures can improve the detection of protective behavior by capturing the most informative temporal and body-configuration cues that characterize specific movements and the strategies used to perform them.
Ensembles of Deep LSTM Learners for Activity Recognition using Wearables
  Yu Guan, T. Plötz · Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2017
It is demonstrated that ensembles of deep LSTM learners outperform individual LSTM networks and thus push the state of the art in human activity recognition using wearables.
Deep, Convolutional, and Recurrent Models for Human Activity Recognition Using Wearables
This paper rigorously explores deep, convolutional, and recurrent approaches across three representative datasets containing movement data captured with wearable sensors; it describes how to train recurrent approaches in this setting, introduces a novel regularisation approach, and shows that these models outperform the state of the art on a large benchmark dataset.
On attention models for human activity recognition
This paper introduces attention models into HAR research as a data-driven approach for exploring relevant temporal context, constructing attention models for HAR by adding attention layers to a state-of-the-art deep learning HAR model (DeepConvLSTM).
Understanding and improving recurrent networks for human activity recognition by continuous attention
Two mechanisms that adaptively focus on important signals and sensor modalities are proposed, and qualitative analysis shows that the attention learned by the models agrees well with human intuition.
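The temporal-attention idea recurring in the entries above can be illustrated with a minimal sketch: score each time step of a sensor feature sequence, softmax the scores into weights, and pool the sequence into a single context vector. This is a generic, library-free illustration, not the implementation from any cited paper; the scoring vector `w` stands in for a learned attention layer, and all names are illustrative.

```python
import math

def temporal_attention_pool(features, w):
    """Pool a (T x D) feature sequence into one D-dim context vector.

    features: list of T rows, each a list of D floats (per-timestep features)
    w:        list of D floats acting as a stand-in for a learned scoring layer
    Returns (context, alpha): the pooled vector and the attention weights.
    """
    # Unnormalised relevance score per time step (dot product with w).
    scores = [sum(f * wi for f, wi in zip(row, w)) for row in features]
    # Softmax over time, with max-subtraction for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alpha = [e / z for e in exps]
    # Attention-weighted sum of the per-timestep features.
    dim = len(features[0])
    context = [sum(a * row[d] for a, row in zip(alpha, features))
               for d in range(dim)]
    return context, alpha

# Toy usage: three time steps of 2-dim features; the third step scores highest.
feats = [[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]]
context, alpha = temporal_attention_pool(feats, [1.0, 0.0])
```

In a trained model the scores would come from a small network rather than a fixed vector, but the pooling step is the same: the weights sum to one, so the context vector is a convex combination of the per-timestep features.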
Handling annotation uncertainty in human activity recognition
This work presents a scheme that explicitly incorporates label jitter into the model training process and demonstrates its effectiveness through a systematic experimental evaluation on standard recognition tasks, where the method leads to significant increases in mean F1 scores.
Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition
This paper introduces a new gating mechanism within LSTM to learn the reliability of the sequential input data and accordingly adjust its effect on updating the long-term context information stored in the memory cell, and proposes a more powerful tree-structure-based traversal method.
An Attention Enhanced Graph Convolutional LSTM Network for Skeleton-Based Action Recognition
A novel Attention Enhanced Graph Convolutional LSTM Network (AGC-LSTM) for human action recognition from skeleton data can not only capture discriminative features in spatial configuration and temporal dynamics but also explore the co-occurrence relationship between spatial and temporal domains.