Neural Kinematic Networks for Unsupervised Motion Retargetting

@article{Villegas2018NeuralKN,
  title={Neural Kinematic Networks for Unsupervised Motion Retargetting},
  author={Ruben Villegas and Jimei Yang and Duygu Ceylan and Honglak Lee},
  journal={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={8639-8648}
}
We propose a recurrent neural network architecture with a Forward Kinematics layer and a cycle-consistency-based adversarial training objective for unsupervised motion retargetting. [...] Instead, our network utilizes cycle consistency to learn to solve the Inverse Kinematics problem in an unsupervised manner. Our method works online, i.e., it adapts the motion sequence on-the-fly as new frames are received. In our experiments, we use the Mixamo animation data to test our method for a variety of…
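The two ideas the abstract names, a differentiable Forward Kinematics (FK) layer and a cycle-consistency objective, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal NumPy illustration in which the skeleton description (`parents`, `offsets`) and the stand-in `retarget` function are assumptions made for exposition.

import numpy as np

def forward_kinematics(rotations, offsets, parents):
    """Global joint positions from local joint rotations.

    rotations: (J, 3, 3) rotation of each joint relative to its parent.
    offsets:   (J, 3) bone offset of each joint in its parent's frame.
    parents:   length-J list; parents[j] is the parent index, -1 for the root.
    Returns a (J, 3) array of global joint positions.
    """
    J = len(parents)
    global_rot = np.zeros((J, 3, 3))
    positions = np.zeros((J, 3))
    for j in range(J):  # assumes parents are listed before their children
        p = parents[j]
        if p == -1:  # root joint
            global_rot[j] = rotations[j]
            positions[j] = offsets[j]
        else:
            global_rot[j] = global_rot[p] @ rotations[j]
            positions[j] = positions[p] + global_rot[p] @ offsets[j]
    return positions

def cycle_consistency_loss(motion_a, offsets_a, offsets_b, parents, retarget):
    """Retarget skeleton A -> B -> A and measure how well the input is recovered.

    motion_a:  (T, J, 3, 3) per-frame joint rotations of the source motion.
    retarget:  stand-in for the learned network; maps a rotation sequence onto
               a target skeleton (given its offsets) and returns new rotations.
    """
    rot_ab = retarget(motion_a, offsets_b)    # adapt the motion onto skeleton B
    rot_aba = retarget(rot_ab, offsets_a)     # map it back onto skeleton A
    pos_orig = np.stack([forward_kinematics(r, offsets_a, parents) for r in motion_a])
    pos_cycle = np.stack([forward_kinematics(r, offsets_a, parents) for r in rot_aba])
    return np.mean((pos_orig - pos_cycle) ** 2)  # positions compared via the FK layer

Comparing motions through joint positions produced by the FK layer, rather than through raw rotations, is what allows the retargeting network to be trained without ground-truth target motions; in the paper this cycle-consistency term is combined with an adversarial training objective.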
A variational U‐Net for motion retargeting
TLDR
A novel human motion retargeting system that uses a deep learning framework with large-scale motion data to produce high-quality retargeted human motion is established, using a variational deep autoencoder that combines the deep convolutional inverse graphics network (DC-IGN) and the U-Net.
Learning character-agnostic motion for motion retargeting in 2D
TLDR
This paper presents a new method for retargeting video-captured motion between different human performers, without the need to explicitly reconstruct 3D poses and/or camera parameters, and demonstrates that this framework can be used to robustly extract human motion from videos, bypassing 3D reconstruction, and outperforming existing retargeting methods when applied to videos in-the-wild.
A Causal Convolutional Neural Network for Motion Modeling and Synthesis
TLDR
Experimental results show that the quality of motions generated by the network is superior to that of state-of-the-art human motion synthesis methods, and that the network runs fast enough to synthesize different types of motions online.
Few‐shot Learning of Homogeneous Human Locomotion Styles
TLDR
This paper proposes a transfer learning approach for adapting a learned neural network to characters that move in different styles from those on which the original neural network is trained, and introduces a canonical polyadic tensor decomposition to reduce the number of parameters required for learning from each new style.
PMnet: Learning of Disentangled Pose and Movement for Unsupervised Motion Retargeting
TLDR
This paper develops a novel architecture referred to as the pose-movement network (PMnet), which separately learns frame-by-frame poses and overall movement, and introduces a novel loss function that allows PMnet to properly retarget the poses and overall movement.
Skeleton-aware networks for deep motion retargeting
TLDR
This work introduces a novel deep learning framework for data-driven motion retargeting between skeletons that may have different structures yet correspond to homeomorphic graphs, and is the first to perform retargeting between skeletons with differently sampled kinematic chains, without any paired examples.
Self-Supervised Motion Retargeting with Safety Guarantee
TLDR
Self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that enables the generation of natural motions in humanoid robots from motion capture data or RGB videos, significantly alleviates the necessity of time-consuming data collection via novel paired data generating processes.
Constructing Human Motion Manifold With Sequential Networks
TLDR
A novel recurrent neural network-based method is presented to construct a latent motion manifold that can represent a wide range of human motions in a long sequence, together with a set of loss terms that improve the overall quality of the motion manifold from various aspects.
MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency
TLDR
MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video, is introduced; it is the first data-driven approach that directly outputs a kinematic skeleton, a complete and commonly used motion representation.
Motion Retargetting based on Dilated Convolutions and Skeleton‐specific Loss Functions
TLDR
This paper presents a motion retargetting model based on temporal dilated convolutions that generates realistic motions for various humanoid characters in an unsupervised manner and demonstrates the effectiveness and robustness of the method.

References

SHOWING 1-10 OF 52 REFERENCES
Learning character-agnostic motion for motion retargeting in 2D
TLDR
This paper presents a new method for retargeting video-captured motion between different human performers, without the need to explicitly reconstruct 3D poses and/or camera parameters, and demonstrates that this framework can be used to robustly extract human motion from videos, bypassing 3D reconstruction, and outperforming existing retargeting methods when applied to videos in-the-wild.
A deep learning framework for character motion synthesis and editing
TLDR
A framework is presented to synthesize character movements based on high-level parameters, such that the produced movements respect the manifold of human motion; trained on a large motion capture dataset, it can produce smooth, high-quality motion sequences without any manual pre-processing of the training data.
Deep Representation Learning for Human Motion Prediction and Classification
TLDR
The results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.
On Human Motion Prediction Using Recurrent Neural Networks
TLDR
It is shown that, surprisingly, state-of-the-art performance can be achieved by a simple baseline that does not attempt to model motion at all, and a simple and scalable RNN architecture is proposed that obtains state-of-the-art performance on human motion prediction.
Realtime style transfer for unlabeled heterogeneous human motion
TLDR
A novel solution for realtime generation of stylistic human motion that automatically transforms unlabeled, heterogeneous motion data into new styles and introduces an efficient local regression model to predict the timings of synthesized poses in the output style.
Recurrent Network Models for Human Dynamics
TLDR
The Encoder-Recurrent-Decoder (ERD) model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers; it extends previous Long Short-Term Memory models in the literature to jointly learn representations and their dynamics.
Auto-Conditioned LSTM Network for Extended Complex Human Motion Synthesis
TLDR
To the authors' knowledge, this work is the first to demonstrate the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion with respect to different styles.
On-line motion retargetting
  • Kwang-Jin Choi, Hyeongseok Ko
  • Computer Science
  • Proceedings. Seventh Pacific Conference on Computer Graphics and Applications (Cat. No.PR00293)
  • 1999
TLDR
Experiments prove that the retargetting algorithm preserves the high-frequency details of the original motion quite accurately, and can be used to reduce measurement errors in restoring captured motion.
Learning human behaviors from motion capture by adversarial imitation
TLDR
Generative adversarial imitation learning is extended to enable training of generic neural network policies to produce humanlike movement patterns from limited demonstrations consisting only of partially observed state features, without access to actions, even when the demonstrations come from a body with different and unknown physical parameters.
Time-Contrastive Networks: Self-Supervised Learning from Video
TLDR
A self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints is proposed, and it is demonstrated that this representation can be used by a robot to directly mimic human poses without an explicit correspondence, and that it can be used as a reward function within a reinforcement learning algorithm.