Corpus ID: 93003807

Auto-Conditioned LSTM Network for Extended Complex Human Motion Synthesis

@article{Li2017AutoConditionedLN,
  title={Auto-Conditioned LSTM Network for Extended Complex Human Motion Synthesis},
  author={Zimo Li and Yi Zhou and Shuangjiu Xiao and Chong He and Hao Li},
  journal={ArXiv},
  year={2017},
  volume={abs/1707.05363}
}
We present a real-time method for synthesizing highly complex human motions using a novel LSTM network training regime we call the auto-conditioned LSTM (acLSTM). Furthermore, the structure of the acLSTM is modular and compatible with any other recurrent network architecture, and is usable for tasks other than motion. To our knowledge, our work is the first to demonstrate the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion across different styles.
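The core idea of auto-conditioning is that, during training, the network is periodically fed its own predictions instead of ground-truth frames, so it learns to recover from its own accumulated error. Below is a minimal PyTorch sketch of that training regime under stated assumptions: the class, layer sizes, and the `gt_len`/`cond_len` interval lengths are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class AutoConditionedLSTM(nn.Module):
    """Sketch of auto-conditioned training: feed ground-truth frames for
    `gt_len` steps, then the network's own predictions for `cond_len`
    steps, alternating throughout each training sequence."""

    def __init__(self, pose_dim, hidden=64, gt_len=5, cond_len=5):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)
        self.gt_len, self.cond_len = gt_len, cond_len

    def forward(self, seq):  # seq: (batch, time, pose_dim)
        state, prev, preds = None, seq[:, 0:1], []
        for t in range(seq.size(1)):
            # choose input: ground truth, or the network's own last output
            cycle = t % (self.gt_len + self.cond_len)
            x = seq[:, t:t + 1] if cycle < self.gt_len else prev
            h, state = self.lstm(x, state)
            prev = self.out(h)
            preds.append(prev)
        return torch.cat(preds, dim=1)

model = AutoConditionedLSTM(pose_dim=8)
seq = torch.randn(2, 20, 8)          # toy batch of pose sequences
loss = nn.functional.mse_loss(model(seq), seq)
loss.backward()                      # gradients flow through self-fed steps
```

Setting `cond_len=0` recovers ordinary teacher-forced training; increasing it exposes the network to longer stretches of its own output, which is what allows stable long-horizon synthesis.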


A Causal Convolutional Neural Network for Motion Modeling and Synthesis

Experimental results show that the quality of motions generated by the network is superior to that of state-of-the-art human motion synthesis methods, and that it runs fast enough to synthesize different types of motion online.

Few‐shot Learning of Homogeneous Human Locomotion Styles

This paper proposes a transfer learning approach for adapting a learned neural network to characters that move in styles different from those on which the original network was trained, and introduces a canonical polyadic tensor decomposition to reduce the number of parameters required for learning each new style.

Mode-adaptive neural networks for quadruped motion control

This paper proposes a novel neural network architecture called Mode-Adaptive Neural Networks for controlling quadruped characters and shows that this architecture is suitable for encoding the multi-modality of quadruped locomotion and synthesizing responsive motion in real-time.

Weakly-Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation

A weakly supervised deep recurrent method for real-time basic dance generation is presented; taking the audio power spectrum as input, it generates basic dance steps with low cross-entropy and maintains an F-measure score similar to that of a baseline dancer.

Variational Interpolating Neural Networks for Locomotion Synthesis

This work proposes a novel approach to high-quality, interactive, and variational motion synthesis that integrates concepts of variational autoencoders into a fully connected network and can generate smooth animations with highly visible temporal and spatial variations.

Neural Kinematic Networks for Unsupervised Motion Retargetting

A recurrent neural network architecture with a Forward Kinematics layer and a cycle-consistency-based adversarial training objective for unsupervised motion retargetting is presented; it works online, i.e., it adapts the motion sequence on-the-fly as new frames are received.

Robust motion in-betweening

A novel, robust transition-generation technique based on adversarial recurrent neural networks is presented; it synthesizes high-quality motions that use temporally sparse keyframes as animation constraints, and can serve as a new tool for 3D animators.

Long-Term Human Motion Prediction by Modeling Motion Context and Enhancing Motion Dynamic

Motion-context modeling that summarizes the historical human motion with respect to the current prediction is proposed, along with a modified highway unit (MHU) that efficiently eliminates motionless joints and estimates the next pose given the motion context.

Modeling Human Motion with Quaternion-Based Neural Networks

QuaterNet represents rotations with quaternions, and its loss function performs forward kinematics on a skeleton to penalize absolute position errors instead of angle errors; it is also shown that the standard evaluation protocol for Human3.6M produces high-variance results, and a simple solution is proposed.
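The distinction between angle-space and position-space losses can be made concrete with a small NumPy sketch: rotate a bone vector by predicted and ground-truth quaternions, then penalize the distance between the resulting endpoints. The function names and the single-bone setup are illustrative assumptions, not the QuaterNet implementation.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate 3-vector v by a unit quaternion q = (w, x, y, z)."""
    w, xyz = q[0], q[1:]
    t = 2.0 * np.cross(xyz, v)
    return v + w * t + np.cross(xyz, t)

def positional_loss(q_pred, q_true, bone=np.array([0.0, 1.0, 0.0])):
    """Penalize where the bone endpoint lands after rotation (a one-bone
    forward-kinematics step), rather than the raw angle difference."""
    diff = quat_rotate(q_pred, bone) - quat_rotate(q_true, bone)
    return float(np.sum(diff ** 2))
```

For a full skeleton, this step is applied recursively down the kinematic chain so that a small angle error at the hip is penalized in proportion to how far it displaces the foot.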

Centralized Networks to Generate Human Body Motions

A model for learning human body motion from marker trajectories is used; it is found that center frequencies can be learned from a small number of markers and transferred to other markers, so the technique appears capable of compensating for the missing information that results from sparse control-marker settings.

References

Showing 1-10 of 48 references

On Human Motion Prediction Using Recurrent Neural Networks

It is shown that, surprisingly, state-of-the-art performance can be achieved by a simple baseline that does not attempt to model motion at all, and a simple, scalable RNN architecture that obtains state-of-the-art performance on human motion prediction is proposed.

Realtime style transfer for unlabeled heterogeneous human motion

A novel solution for realtime generation of stylistic human motion that automatically transforms unlabeled, heterogeneous motion data into new styles and introduces an efficient local regression model to predict the timings of synthesized poses in the output style.

Phase-functioned neural networks for character control

A real-time character control mechanism using a novel neural network architecture called a Phase-Functioned Neural Network, which takes as input the user controls, the previous state of the character, and the geometry of the scene, and automatically produces high-quality motions that achieve the desired user control.
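The distinctive feature of a phase-functioned network is that its weights are not fixed but are computed each frame as a function of a phase variable, by spline interpolation over a small set of control weight tensors. Below is a hedged sketch of that weight-blending step using a Catmull-Rom cubic over four control points; the function name, the phase normalization to [0, 1), and the toy weight shapes are assumptions for illustration.

```python
import numpy as np

def pfnn_blend(phase, control_weights):
    """Compute phase-dependent weights by cyclic Catmull-Rom cubic
    interpolation over the control weight tensors, with phase in [0, 1)."""
    k = len(control_weights)              # number of control points (4 here)
    p = phase * k
    i1 = int(p) % k                       # segment the phase falls into
    i0, i2, i3 = (i1 - 1) % k, (i1 + 1) % k, (i1 + 2) % k
    w = p - int(p)                        # local interpolation parameter
    a0, a1, a2, a3 = (control_weights[i] for i in (i0, i1, i2, i3))
    # cubic Catmull-Rom spline through the four control tensors
    return (a1
            + w * (0.5 * (a2 - a0))
            + w**2 * (a0 - 2.5 * a1 + 2.0 * a2 - 0.5 * a3)
            + w**3 * (1.5 * (a1 - a2) + 0.5 * (a3 - a0)))
```

The blended tensor is then used as the weight matrix of an ordinary feedforward layer for that frame, so the network smoothly changes its function as the locomotion cycle advances.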

A deep learning framework for character motion synthesis and editing

A framework to synthesize character movements from high-level parameters such that the produced movements respect the manifold of human motion; trained on a large motion-capture dataset, it can produce smooth, high-quality motion sequences without any manual pre-processing of the training data.

Interactive Control of Diverse Complex Characters with Neural Networks

A method for training recurrent neural networks to act as near-optimal feedback controllers is presented; it generates stable and realistic behaviors for a range of dynamical systems and tasks, and requires no motion capture, task-specific features, or state machines.

Deep Representation Learning for Human Motion Prediction and Classification

The results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.

Recurrent Network Models for Human Dynamics

The Encoder-Recurrent-Decoder (ERD) model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers that extends previous Long Short Term Memory models in the literature to jointly learn representations and their dynamics.
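The Encoder-Recurrent-Decoder layout described above is simple to express in code: a nonlinear encoder maps each pose into a feature space, a recurrent core models the dynamics there, and a decoder maps back to pose space. This is a minimal PyTorch sketch of that structure; the layer widths and names are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ERD(nn.Module):
    """Encoder-Recurrent-Decoder sketch: a nonlinear encoder before and a
    nonlinear decoder after the recurrent core, so representation learning
    and dynamics modeling are trained jointly."""

    def __init__(self, pose_dim=54, feat=128, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim, feat), nn.ReLU())
        self.rnn = nn.LSTM(feat, hidden, batch_first=True)
        self.dec = nn.Sequential(nn.Linear(hidden, feat), nn.ReLU(),
                                 nn.Linear(feat, pose_dim))

    def forward(self, x):            # x: (batch, time, pose_dim)
        h, _ = self.rnn(self.enc(x)) # dynamics in the learned feature space
        return self.dec(h)           # map back to pose space
```

Compared with feeding raw poses straight into an LSTM, the learned encoder gives the recurrent layer a representation in which the dynamics are easier to model.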

Generating Text with Recurrent Neural Networks

The power of RNNs trained with the new Hessian-Free optimizer by applying them to character-level language modeling tasks is demonstrated, and a new RNN variant that uses multiplicative connections which allow the current input character to determine the transition matrix from one hidden state vector to the next is introduced.

Online control of simulated humanoids using particle belief propagation

A novel, general-purpose Model-Predictive Control algorithm that combines multimodal, gradient-free sampling and a Markov Random Field factorization to effectively perform simultaneous path finding and smoothing in high-dimensional spaces is presented.

On-line locomotion generation based on motion blending

This work proposes a novel approach for on-the-fly generation of convincing locomotion given parameters such as speed, turning angle, and style, in addition to the parameters used in previous approaches.