Leveraging Activity Recognition to Enable Protective Behavior Detection in Continuous Data

@article{Wang2021LeveragingAR,
  title={Leveraging Activity Recognition to Enable Protective Behavior Detection in Continuous Data},
  author={Chongyang Wang and Yuan Gao and Akhil Mathur and Amanda C.C. Williams and Nicholas D. Lane and Nadia Bianchi-Berthouze},
  journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
  year={2021},
  volume={5},
  pages={1--27}
}
Protective behavior exhibited by people with chronic pain (CP) during physical activities is highly informative for understanding their physical and emotional states. Existing automatic protective behavior detection (PBD) methods rely on pre-segmentation of activities predefined by users. However, in real life, people perform activities casually. Therefore, where those activities present difficulties for people with CP, technology-enabled support should be delivered continuously and automatically…
The AffectMove 2021 Challenge - Affect Recognition from Naturalistic Movement Data
TLDR
The first Affective Movement Recognition challenge, which brings together datasets of affective bodily behaviour across different real-life applications to foster work in this area; participants were challenged to leverage data across datasets to improve performance and to test the generalization of their approaches across applications.
AgreementLearning: An End-to-End Framework for Learning with Multiple Annotators without Groundtruth
TLDR
A novel agreement learning framework is proposed to tackle the challenge of learning from multiple annotators without objective groundtruth; experiments on two medical datasets demonstrate improved agreement levels with annotators.
Learn2Agree: Fitting with Multiple Annotators without Objective Ground Truth
TLDR
A novel Learn to Agree (Learn2Agree) framework is proposed to tackle the challenge of learning from multiple annotators without objective ground truth; it can be easily added to existing backbones.
Bridging the gap between emotion and joint action
Gravity Control-Based Data Augmentation Technique for Improving VR User Activity Recognition
TLDR
A data augmentation technique named gravity control-based data augmentation (GCDA) that alleviates the sparse-data problem by generating new training data from existing data, exploiting gravity as a directional feature and controlling it to augment training datasets.

References

Showing 1-10 of 116 references
Chronic Pain Protective Behavior Detection with Deep Learning
TLDR
This article investigates the use of deep learning for PBD across activity types, using wearable motion capture and surface electromyography data collected from healthy participants and people with chronic pain.
Recurrent network based automatic detection of chronic pain protective behavior using MoCap and sEMG data
TLDR
This paper investigates automatic detection of protective behavior (movement behavior due to pain-related fear or pain) based on wearable motion capture and electromyography sensor data, comparing two recurrent networks, referred to as stacked-LSTM and dual-stream LSTM, with related deep learning (DL) architectures.
Learning Bodily and Temporal Attention in Protective Movement Behavior Detection
TLDR
This work investigates how attention-based DL architectures can be used to improve the detection of protective behavior by capturing the most informative temporal and body configurational cues characterizing specific movements and the strategies used to perform them using the EmoPain MoCap dataset.
Ensembles of Deep LSTM Learners for Activity Recognition using Wearables
  • Yu Guan, T. Plötz · Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. · 2017
TLDR
It is demonstrated that ensembles of deep LSTM learners outperform individual LSTM networks and thus push the state of the art in human activity recognition using wearables.
Deep, Convolutional, and Recurrent Models for Human Activity Recognition Using Wearables
TLDR
This paper rigorously explores deep, convolutional, and recurrent approaches across three representative datasets containing movement data captured with wearable sensors; it describes how to train recurrent approaches in this setting, introduces a novel regularisation approach, and shows how these models outperform the state of the art on a large benchmark dataset.
On attention models for human activity recognition
TLDR
This paper introduces attention models into HAR research as a data-driven approach for exploring relevant temporal context, constructing attention models for HAR by adding attention layers to a state-of-the-art deep learning HAR model (DeepConvLSTM).
Understanding and improving recurrent networks for human activity recognition by continuous attention
TLDR
Two attention mechanisms that adaptively focus on important signals and sensor modalities are proposed; qualitative analysis shows that the attention learned by the models agrees well with human intuition.
Handling annotation uncertainty in human activity recognition
TLDR
This work presents a scheme that explicitly incorporates label jitter into the model training process and demonstrates its effectiveness through a systematic experimental evaluation on standard recognition tasks, where it leads to significant increases in mean F1 scores.
Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition
TLDR
This paper introduces a new gating mechanism within LSTM to learn the reliability of the sequential input data and accordingly adjust its effect on updating the long-term context information stored in the memory cell, and proposes a more powerful tree-structure-based traversal method.
Learning Temporal and Bodily Attention in Protective Movement Behavior Detection
TLDR
This work investigates how attention-based DL architectures can be used to improve the detection of protective behavior by capturing the most informative temporal and body configurational cues characterizing specific movements and the strategies used to perform them using the EmoPain MoCap dataset.
...