Neural Kinematic Networks for Unsupervised Motion Retargetting

@inproceedings{Villegas2018NeuralKN,
  title={Neural Kinematic Networks for Unsupervised Motion Retargetting},
  author={Ruben Villegas and Jimei Yang and Duygu Ceylan and Honglak Lee},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={8639--8648}
}
We propose a recurrent neural network architecture with a Forward Kinematics layer and a cycle-consistency-based adversarial training objective for unsupervised motion retargetting. [...] Instead, our network utilizes cycle consistency to learn to solve the Inverse Kinematics problem in an unsupervised manner. Our method works online, i.e., it adapts the motion sequence on-the-fly as new frames are received. In our experiments, we use the Mixamo animation data to test our method for a variety of…
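The Forward Kinematics layer mentioned in the abstract computes global joint positions from per-joint rotations and fixed bone offsets by composing transforms down the skeleton tree; because this computation is differentiable, gradients can flow back to the network predicting the rotations. The sketch below is a minimal, hypothetical illustration (a 3-joint planar chain with rotations about the z-axis only), not the paper's actual layer:

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation about the z-axis (toy planar skeleton)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(parents, offsets, angles):
    """Compose per-joint rotations down the kinematic tree.

    parents[i] : index of joint i's parent (-1 for the root)
    offsets[i] : bone vector from parent to joint i, in the parent's frame
    angles[i]  : joint rotation angle (about z, for simplicity)
    Returns the global 3D position of every joint.
    """
    n = len(parents)
    glob_rot = [None] * n
    glob_pos = np.zeros((n, 3))
    for i in range(n):
        R = rot_z(angles[i])
        if parents[i] == -1:                 # root joint
            glob_rot[i] = R
            glob_pos[i] = offsets[i]
        else:
            p = parents[i]
            glob_rot[i] = glob_rot[p] @ R    # accumulate rotation
            glob_pos[i] = glob_pos[p] + glob_rot[p] @ offsets[i]
    return glob_pos

# 3-joint chain: root -> elbow -> hand, unit-length bones,
# elbow bent 90 degrees.
parents = [-1, 0, 1]
offsets = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0]], dtype=float)
pos = forward_kinematics(parents, offsets, np.array([0.0, np.pi / 2, 0.0]))
print(np.round(pos, 3))  # hand ends up at (1, 1, 0)
```

Because only the joint angles are free parameters while bone lengths stay fixed, a network predicting the angles is constrained to produce skeleton-valid poses, which is what lets the cycle-consistency objective train without paired motion data.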
A variational U-Net for motion retargeting
TLDR
A novel human motion retargeting system is established using a deep learning framework with large-scale motion data to produce high-quality retargeted human motion; it uses a variational deep autoencoder combining the deep convolutional inverse graphics network (DC-IGN) and the U-Net.
Learning character-agnostic motion for motion retargeting in 2D
TLDR
This paper presents a new method for retargeting video-captured motion between different human performers, without the need to explicitly reconstruct 3D poses and/or camera parameters, and demonstrates that this framework can be used to robustly extract human motion from videos, bypassing 3D reconstruction, and outperforming existing retargeting methods when applied to videos in-the-wild.
A Causal Convolutional Neural Network for Motion Modeling and Synthesis
TLDR
Experimental results show that the quality of motions generated by the network is superior to the motions of state-of-the-art human motion synthesis methods, and it runs fast to synthesize different types of motions on-line.
Few-shot Learning of Homogeneous Human Locomotion Styles
TLDR
This paper proposes a transfer learning approach for adapting a learned neural network to characters that move in different styles from those on which the original neural network is trained, and introduces a canonical polyadic tensor decomposition to reduce the amount of parameters required for learning from each new style.
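The canonical polyadic (CP) decomposition referenced above factors a weight tensor into per-mode factor matrices, so that adapting to a new style only requires a small set of factors rather than a full weight matrix. The sketch below uses hypothetical sizes (not the paper's actual architecture) to show the parameter reduction and the reconstruction:

```python
import numpy as np

# Hypothetical sizes: a per-style weight tensor of shape
# (n_styles, n_in, n_out), as in style-conditioned network layers.
n_styles, n_in, n_out, rank = 10, 256, 256, 8

# Full tensor: one dense weight matrix per style.
full_params = n_styles * n_in * n_out

# Rank-R CP factors: W ≈ sum over r of  a_r (outer) b_r (outer) c_r
A = np.random.randn(n_styles, rank)   # style factors
B = np.random.randn(n_in, rank)       # input factors
C = np.random.randn(n_out, rank)      # output factors
cp_params = A.size + B.size + C.size

# Reconstruct the full tensor from the factors.
W = np.einsum('sr,ir,or->sio', A, B, C)

print(W.shape, full_params, cp_params)
```

With these sizes, the factored form stores about 4k parameters instead of roughly 655k, and a new style adds only one extra row to `A`, which is the kind of saving that makes few-shot adaptation tractable.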
Human-Robot Motion Retargeting via Neural Latent Optimization
TLDR
A graph-based neural network is utilized to establish a mapping between the latent space and the robot motion space, and this method can retarget motion from human to robot with both efficiency and accuracy.
PMnet: Learning of Disentangled Pose and Movement for Unsupervised Motion Retargeting
TLDR
This paper develops a novel architecture referred to as the pose-movement network (PMnet), which separately learns frame-by-frame poses and overall movement, and introduces a novel loss function that allows PMnet to properly retarget the poses and overall movement.
Kinematic Motion Retargeting via Neural Latent Optimization for Learning Sign Language
TLDR
This paper proposes a novel neural latent optimization approach to address the challenges resulting from the differences between human and robot programming, and performs experiments on retargeting Chinese sign language.
Skeleton-aware networks for deep motion retargeting
TLDR
This work introduces a novel deep learning framework for data-driven motion retargeting between skeletons, which may have different structure yet correspond to homeomorphic graphs, and is the first to perform retargeting between skeletons with differently sampled kinematic chains, without any paired examples.
Self-Supervised Motion Retargeting with Safety Guarantee
TLDR
Self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that enables the generation of natural motions in humanoid robots from motion capture data or RGB videos, significantly alleviates the necessity of time-consuming data-collection via novel paired data generating processes.
MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks
TLDR
A novel framework that brings the 3D motion retargeting task from controlled environments to in-the-wild scenarios and could serve as a disentangled and interpretable representation of human motion that benefits action analysis and motion retrieval.

References

SHOWING 1-10 OF 52 REFERENCES
Learning character-agnostic motion for motion retargeting in 2D
TLDR
This paper presents a new method for retargeting video-captured motion between different human performers, without the need to explicitly reconstruct 3D poses and/or camera parameters, and demonstrates that this framework can be used to robustly extract human motion from videos, bypassing 3D reconstruction, and outperforming existing retargeting methods when applied to videos in-the-wild.
A deep learning framework for character motion synthesis and editing
TLDR
A framework to synthesize character movements based on high-level parameters is presented; trained on a large motion capture dataset so that the produced movements respect the manifold of human motion, it can produce smooth, high-quality motion sequences without any manual pre-processing of the training data.
Deep Representation Learning for Human Motion Prediction and Classification
TLDR
The results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.
On Human Motion Prediction Using Recurrent Neural Networks
TLDR
It is shown that, surprisingly, state of the art performance can be achieved by a simple baseline that does not attempt to model motion at all, and a simple and scalable RNN architecture is proposed that obtains state-of-the-art performance on human motion prediction.
Realtime style transfer for unlabeled heterogeneous human motion
TLDR
A novel solution for realtime generation of stylistic human motion that automatically transforms unlabeled, heterogeneous motion data into new styles and introduces an efficient local regression model to predict the timings of synthesized poses in the output style.
Recurrent Network Models for Human Dynamics
TLDR
The Encoder-Recurrent-Decoder (ERD) model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers that extends previous Long Short Term Memory models in the literature to jointly learn representations and their dynamics.
Auto-Conditioned LSTM Network for Extended Complex Human Motion Synthesis
TLDR
This work is, to the authors' knowledge, the first to demonstrate the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.
On-line motion retargetting
  • Kwang-Jin Choi, Hyeongseok Ko
  • Computer Science
    Proceedings. Seventh Pacific Conference on Computer Graphics and Applications (Cat. No.PR00293)
  • 1999
TLDR
Experiments prove that the retargetting algorithm preserves the high frequency details of the original motion quite accurately, and can be used to reduce measurement errors in restoring captured motion.
Learning human behaviors from motion capture by adversarial imitation
TLDR
Generative adversarial imitation learning is extended to enable training of generic neural network policies to produce humanlike movement patterns from limited demonstrations consisting only of partially observed state features, without access to actions, even when the demonstrations come from a body with different and unknown physical parameters.
Time-Contrastive Networks: Self-Supervised Learning from Video
TLDR
A self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints is proposed, and it is demonstrated that this representation can be used by a robot to directly mimic human poses without an explicit correspondence, and that it can be used as a reward function within a reinforcement learning algorithm.