Multi-level Contrast Network for Wearables-based Joint Activity Segmentation and Recognition

Songpengcheng Xia, Lei Chu, Ling Pei, Wenxi Yu, Robert C. Qiu
Human activity recognition (HAR) with wearables is a promising research direction that can be widely adopted in many smart healthcare applications. In recent years, deep learning-based HAR models have achieved impressive recognition performance. However, most HAR algorithms are susceptible to the multi-class window problem, which is essential yet rarely exploited. In this paper, we propose to relieve this challenging problem by introducing segmentation techniques into HAR, yielding joint activity…
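The multi-class window problem the abstract refers to arises because fixed-length sliding windows can straddle activity boundaries, so a single window carries samples from more than one activity. A minimal sketch of how such windows occur (function name and window parameters are illustrative, not from the paper):

```python
def sliding_windows(labels, win=5, step=2):
    """Slice a per-sample label stream into fixed-length windows and
    report the set of activity classes each window contains.
    Windows with more than one class are the 'multi-class windows'."""
    windows = []
    for start in range(0, len(labels) - win + 1, step):
        w = labels[start:start + win]
        windows.append((start, sorted(set(w))))
    return windows

# Per-sample activity labels: a 'walk' segment followed by a 'sit' segment
stream = ['walk'] * 6 + ['sit'] * 6
for start, classes in sliding_windows(stream):
    tag = 'multi-class' if len(classes) > 1 else 'pure'
    print(start, classes, tag)
```

Windows starting at samples 2 and 4 span the walk/sit boundary and receive two labels; joint segmentation and recognition aims to resolve exactly these windows instead of forcing a single label on them.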

MARS: Mixed Virtual and Real Wearable Sensors for Human Activity Recognition With Multidomain Deep Learning Model

A large data set based on virtual IMUs is built, and the associated technical issues are addressed with a multi-domain deep learning framework consisting of three technical parts; experimental results show that the proposed methods converge within a few iterations and outperform all competing methods.

Deep Triplet Networks with Attention for Sensor-based Human Activity Recognition

This study applies deep triplet networks with various triplet loss functions and mining methods to human activity recognition, and introduces a novel method for constructing hard triplets that exploits similarities between subjects performing the same activities, based on the concept of Hierarchical Triplet Loss.
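For reference, the margin-based triplet loss underlying such networks pulls an anchor embedding toward a positive (same activity) and pushes it from a negative (different activity) by at least a margin. A generic sketch, not the paper's exact mining scheme:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on embedding vectors:
    max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same activity, nearby embedding
n = np.array([3.0, 0.0])   # different activity, far away
print(triplet_loss(a, p, n))  # well-separated triplet -> loss 0.0
```

"Hard" triplet mining, as described above, means choosing positives that are far from the anchor and negatives that are close, so the loss stays informative during training.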

AttnSense: Multi-level Attention Mechanism For Multimodal Human Activity Recognition

AttnSense combines an attention mechanism with a convolutional neural network and a gated recurrent unit (GRU) network to capture the dependencies of sensing signals in both the spatial and temporal domains, which yields advantages in prioritized sensor selection and improves interpretability.
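The attention-based fusion described above can be sketched generically: score each modality's feature vector, normalize the scores with a softmax, and fuse by weighted sum. An illustrative sketch with a fixed scoring vector, not AttnSense's exact learned architecture:

```python
import numpy as np

def attention_fuse(features, query):
    """Fuse per-modality feature vectors with softmax attention weights.
    features: (n_modalities, dim); query: (dim,) scoring vector."""
    scores = features @ query                # one relevance score per modality
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ features       # weighted sum over modalities

feats = np.array([[1.0, 0.0],   # e.g. accelerometer features
                  [0.0, 1.0]])  # e.g. gyroscope features
w, fused = attention_fuse(feats, query=np.array([2.0, 0.0]))
print(w)  # higher weight on the modality aligned with the query
```

Because the weights are explicit and sum to one, they can be inspected to see which sensor the model prioritized for a given window, which is the interpretability benefit noted above.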

Attend And Discriminate: Beyond the State-of-the-Art for Human Activity Recognition using Wearable Sensors

This work rigorously explores new opportunities to learn enriched, highly discriminative activity representations by exploiting the latent relationships between multi-channel sensor modalities and specific activities, and incorporates a classification loss criterion that encourages minimal intra-class representation differences while maximizing inter-class differences.

MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation

A multi-stage architecture for temporal action segmentation that achieves state-of-the-art results on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and Breakfast.
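Each MS-TCN stage is built from 1-D dilated convolutions, whose receptive field grows exponentially with depth so long activity sequences can be covered cheaply. A single dilated causal layer can be sketched as follows (an illustrative numpy version, not the trained network):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Causal 1-D convolution with dilation: each output sees inputs
    spaced `dilation` steps apart, widening the receptive field
    without adding parameters."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad to keep causality
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(8, dtype=float)
# A difference kernel at dilation 2 computes x[t] - x[t-2]
print(dilated_conv1d(x, kernel=[1.0, -1.0], dilation=2))
```

Stacking such layers with dilations 1, 2, 4, 8, ... gives a receptive field that doubles per layer, which is what lets each refinement stage reason over entire activity segments.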

Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

A generic deep framework for activity recognition based on convolutional and LSTM recurrent units is proposed; it is suitable for multimodal wearable sensors, does not require expert knowledge for feature design, and explicitly models the temporal dynamics of feature activations.

Learning Disentangled Representation for Mixed-Reality Human Activity Recognition With a Single IMU Sensor

A novel deep learning method achieves accurate and robust HAR with only a single inertial measurement unit (IMU) sensor, using a multi-level domain-adaptive learning model with information-theoretically motivated constraints to simultaneously align the distributions of low- and high-level representations of virtual and real HAR data.

Open Set Mixed-Reality Human Activity Recognition

This work introduces the problem of Open Set Mixed-Reality HAR, which aims to recognize unseen activities while classifying seen samples, and proposes a novel balanced open-set backpropagation method to realize accurate and robust OSM-HAR.

Ensembles of Deep LSTM Learners for Activity Recognition using Wearables

  • Yu Guan, T. Plötz
  • Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2017
It is demonstrated that ensembles of deep LSTM learners outperform individual LSTM networks and thus push the state of the art in human activity recognition using wearables.