Weakly-supervised Learning of Human Dynamics

Petrissa Zell, Bodo Rosenhahn, Bastian Wandt
This paper proposes a weakly-supervised learning framework for dynamics estimation from human motion. Although many solutions for capturing pure human motion are readily available, their data are not sufficient to analyze the quality and efficiency of movements. Instead, the forces and moments driving human motion (the dynamics) need to be considered. Since recording dynamics is a laborious task that requires expensive sensors and complex, time-consuming optimization, dynamics datasets are small…
Neural monocular 3D human motion capture with physical awareness
A new trainable system for physically plausible markerless 3D human motion capture, which achieves state-of-the-art results in a broad range of challenging scenarios and is aware of physical and environmental constraints.
UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup
A novel motion capture database labelled with pressure-insole data, providing reliable knowledge of foot contact with the ground, and a fully automatic method for footskate cleanup, which could improve many approaches based on foot contact labels or ground reaction forces.
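The pressure-based contact labelling that such an insole database enables can be illustrated with a toy threshold rule; the array shapes, the 20 N cutoff, and the function name are assumptions for illustration, not the paper's method:

```python
import numpy as np

def contact_labels(pressure, threshold=20.0):
    """Binary foot-contact labels from summed insole pressure.

    `pressure` has shape (T, n_sensors); `threshold` is a hypothetical
    cutoff in newtons, not a value from the paper.
    """
    total = pressure.sum(axis=1)
    return total > threshold

# toy sequence: 5 frames, 4 pressure sensors
p = np.array([[0, 0, 0, 0],
              [5, 5, 5, 5],
              [10, 10, 10, 10],
              [2, 2, 2, 2],
              [0, 0, 0, 0]], dtype=float)
print(contact_labels(p))  # only the middle frame exceeds the cutoff
```

In practice the labels would be smoothed over time before being used to drive footskate cleanup, since raw per-frame thresholding is noisy at contact transitions.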
Neural MoCon: Neural Motion Control for Physically Plausible Human Motion Capture
Due to the visual ambiguity, purely kinematic formulations of monocular human motion capture are often physically incorrect, biomechanically implausible, and cannot reconstruct accurate…
Physical Inertial Poser (PIP): Physics-aware Real-time Human Motion Tracking from Sparse Inertial Sensors
Motion capture from sparse inertial sensors has shown great potential compared to image-based approaches, since occlusions do not reduce tracking quality and the recording space is not…


Learning inverse dynamics for human locomotion analysis
In this work, learning-based inverse dynamics algorithms are proposed for the analysis of human motion, and a multistage subclass approach is introduced that recovers occluded motion data and generates meaningful features, as well as gait phase labels, to restrict and facilitate the regression of forces and moments.
Dynamic motion learning for multi-DOF flexible-joint robots using active–passive motor babbling through deep learning
The objective of this strategy is to efficiently learn the desired movements for the given tasks while reducing training iterations and generalizing to untrained situations using the learned body dynamics.
Data-driven inverse dynamics for human motion
A novel inverse dynamics method is presented that accurately reconstructs biomechanically valid contact information, including center of pressure, contact forces, torsional torques, and internal joint torques, from input kinematic human motion data.
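The kinematics-to-dynamics regression that these data-driven methods learn can be sketched in a heavily simplified linear form; the feature and joint counts, the synthetic data, and the least-squares stand-in for the learned model are all illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical data: kinematic features (joint angles, velocities,
# accelerations) for T frames, and the corresponding joint torques
T, n_feat, n_joints = 200, 18, 6
X = rng.normal(size=(T, n_feat))                        # kinematics
W_true = rng.normal(size=(n_feat, n_joints))
Y = X @ W_true + 0.01 * rng.normal(size=(T, n_joints))  # noisy torques

# linear least-squares as a stand-in for the learned regressor
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W
rmse = np.sqrt(np.mean((pred - Y) ** 2))
print(f"torque RMSE: {rmse:.4f}")
```

Real methods replace the linear map with a neural network and add physical constraints (e.g. that ground reaction forces vanish during flight phases), which is what makes the reconstructed contact information biomechanically valid.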
Efficient Codes for Inverse Dynamics During Walking
The results of the investigation suggest that sparse codes can indeed represent salient features of both the kinematic and dynamic views of human locomotion movements, and it is argued that representations of movement are critical to modeling and understanding these movements.
Deep spatial autoencoders for visuomotor learning
This work presents an approach that automates state-space construction by learning a state representation directly from camera images by using a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects.
Physics-Based Models for Human Gait Analysis
It is shown how forward dynamics optimization can be used to determine the forces producing gait patterns in a 2D physics-based statistical model for human gait analysis.
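The core idea of forward dynamics optimization, finding forces whose simulated rollout matches an observed motion, can be sketched on a 1D unit point mass; the integrator, time step, and sizes here are assumptions and bear no relation to the paper's 2D gait model:

```python
import numpy as np

# toy forward-dynamics optimization: recover the force profile that
# reproduces an observed 1D trajectory of a unit point mass
dt, T, m = 0.01, 100, 1.0

def simulate(f):
    """Semi-implicit Euler rollout of a point mass driven by forces f."""
    x, v, xs = 0.0, 0.0, []
    for fi in f:
        v += (fi / m) * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

target = simulate(np.full(T, 3.0))  # "observed" motion under a 3 N force

# the rollout is linear in f, so x = A @ f with columns A[:, i] = simulate(e_i)
A = np.stack([simulate(np.eye(T)[i]) for i in range(T)], axis=1)
f_opt, *_ = np.linalg.lstsq(A, target, rcond=None)
print(f"mean recovered force: {f_opt.mean():.2f} N")
```

For articulated models the rollout is nonlinear in the forces, so the least-squares solve is replaced by iterative trajectory optimization, but the objective (match the simulated motion to the observed one) is the same.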
Temporal Cycle-Consistency Learning
It is shown that the learned embeddings enable few-shot classification of action phases, significantly reducing the supervised training requirements, and that TCC is complementary to other self-supervised video learning methods such as Shuffle and Learn and Time-Contrastive Networks.
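The cycle-consistency criterion at the heart of TCC can be sketched as a nearest-neighbour round trip between two embedded sequences: a frame is cycle-consistent if mapping it to the other sequence and back returns to where it started. The 1D embeddings below are toy stand-ins for the learned ones:

```python
import numpy as np

def cycle_consistent(emb_a, emb_b):
    """For each frame i of sequence A, find its nearest neighbour j in B,
    then j's nearest neighbour back in A; the cycle is consistent if it
    returns to i. Embeddings are (T, d) arrays."""
    d_ab = np.linalg.norm(emb_a[:, None] - emb_b[None], axis=-1)
    to_b = d_ab.argmin(axis=1)   # A -> B nearest neighbours
    to_a = d_ab.argmin(axis=0)   # B -> A nearest neighbours
    back = to_a[to_b]            # round trip A -> B -> A
    return back == np.arange(len(emb_a))

a = np.array([[0.0], [1.0], [2.0]])
b = np.array([[0.1], [1.1], [2.1]])
print(cycle_consistent(a, b))
```

Training maximizes a differentiable (soft nearest-neighbour) version of this criterion, which is what aligns the phases of the two videos without frame-level labels.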
Exploiting Temporal Context for 3D Human Pose Estimation in the Wild
A bundle-adjustment-based algorithm for recovering accurate 3D human poses and meshes from monocular videos; it is shown that retraining a single-frame 3D pose estimator on this data improves accuracy on both real-world and mocap data, as evaluated on the 3DPW and HumanEva datasets.
Learning Correspondence From the Cycle-Consistency of Time
A self-supervised method that uses cycle-consistency in time as a free supervisory signal for learning visual representations from scratch, demonstrating the generalizability of the representation, without finetuning, across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow.
Recovering Accurate 3D Human Pose in the Wild Using IMUs and a Moving Camera
This work proposes a method that combines a single hand-held camera and a set of Inertial Measurement Units (IMUs) attached to the body limbs to estimate accurate 3D poses in the wild, obtaining an accuracy of 26 mm, which makes it accurate enough to serve as a benchmark for image-based 3D pose estimation in the wild.