Real-time controllable motion transition for characters

@article{Tang2022RealtimeCM,
  title={Real-time controllable motion transition for characters},
  author={Xiangjun Tang and He Wang and Bo Hu and Xu Gong and Ruifan Yi and Qilong Kou and Xiaogang Jin},
  journal={ACM Transactions on Graphics (TOG)},
  year={2022},
  volume={41},
  pages={1--10}
}
Real-time in-between motion generation is universally required in games and highly desirable in existing animation pipelines. Its core challenge lies in satisfying three critical conditions simultaneously: quality, controllability, and speed, which rules out any method that needs offline computation (or post-processing) or cannot incorporate (often unpredictable) user control. To this end, we propose a new real-time transition method to address the aforementioned challenges…
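
As a concrete illustration of the setting the abstract describes, below is a minimal sketch of a real-time, target-conditioned transition step: every frame, a generator sees the current pose, a user-set target keyframe (which may change unpredictably mid-transition), and the time remaining, and emits the next pose. `TransitionNet` and all sizes here are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of real-time in-betweening: one generator step per frame,
# conditioned on a user-controllable target keyframe. Hypothetical model,
# not the paper's method.
import torch
import torch.nn as nn

class TransitionNet(nn.Module):
    def __init__(self, pose_dim: int, hidden: int = 256):
        super().__init__()
        # Input: current pose + target pose + scalar time-to-target.
        self.cell = nn.GRUCell(pose_dim * 2 + 1, hidden)
        self.out = nn.Linear(hidden, pose_dim)

    def step(self, pose, target, frames_left, h):
        """One real-time step: predict the next pose as a residual offset."""
        t = torch.full((pose.shape[0], 1), float(frames_left))
        h = self.cell(torch.cat([pose, target, t], dim=-1), h)
        return pose + self.out(h), h

pose_dim, horizon = 63, 30                 # e.g. 21 joints x 3 (assumed layout)
net = TransitionNet(pose_dim)
pose = torch.zeros(1, pose_dim)            # last context pose
target = torch.randn(1, pose_dim)          # user-set keyframe
h = torch.zeros(1, 256)
for k in range(horizon):                   # the user may edit `target` mid-loop
    pose, h = net.step(pose, target, horizon - k, h)
```

Because generation is one cheap forward pass per frame, unpredictable control simply means swapping `target` between steps; this is what makes offline or post-processed approaches a poor fit for the setting.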

Citations

Motion In-Betweening via Two-Stage Transformers

A deep learning-based framework that synthesizes motion in-betweening in two stages, outperforming current state-of-the-art methods by a large margin; it is also artist-friendly, supporting full and partial pose constraints within the transition and giving artists fine control over the synthesized results.

T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations

This work investigates a simple and must-know conditional generative framework based on a Vector Quantised Variational AutoEncoder (VQ-VAE) and a Generative Pre-trained Transformer (GPT) for human motion generation from textual descriptions, and shows that a simple CNN-based VQ-VAE with commonly used training recipes yields high-quality discrete representations.
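
For readers unfamiliar with the VQ-VAE half of such a pipeline, here is a hedged sketch of the vector-quantisation step: encoder outputs are snapped to their nearest codebook entry (with straight-through gradients), and the resulting indices become the discrete motion tokens the GPT stage predicts. Codebook size and dimensions are illustrative assumptions.

```python
# Vector quantisation with a straight-through estimator, the standard
# building block of a VQ-VAE + GPT pipeline. Sizes are illustrative.
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor):
    """z: (N, D) encoder outputs; codebook: (K, D) learned entries."""
    idx = torch.cdist(z, codebook).argmin(dim=1)   # nearest entry per vector
    z_q = codebook[idx]
    # Straight-through estimator: forward uses z_q, gradients flow to z.
    z_q = z + (z_q - z).detach()
    return z_q, idx                                # idx = discrete motion tokens

codebook = torch.randn(512, 64, requires_grad=True)  # K=512 codes, D=64
z = torch.randn(8, 64, requires_grad=True)           # 8 encoded motion frames
z_q, tokens = quantize(z, codebook)                  # `tokens` feed the GPT stage
```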

EDGE: Editable Dance Generation From Music

This work introduces Editable Dance GEneration (EDGE), a state-of-the-art method for editable dance generation that is capable of creating realistic, physically plausible dances while remaining faithful to the input music.

Defending Black-box Skeleton-based Human Activity Classifiers

The proposed framework, which demonstrates surprising and universal effectiveness across a wide range of skeletal HAR classifiers and datasets under various attacks, is straightforward but elegant, turning vulnerable black-box classifiers into robust ones without sacrificing accuracy.

References

Showing 1–10 of 45 references

Robust motion in-betweening

A novel, robust transition generation technique that can serve as a new tool for 3D animators; based on adversarial recurrent neural networks, it synthesises high-quality motions using temporally sparse keyframes as animation constraints.
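
A hedged sketch of the adversarial training signal such a method combines with keyframe constraints: the generator is trained both to reproduce the sparse keyframes and to fool a critic that scores pose sequences. The modules and loss weighting below are placeholders, not the paper's models.

```python
# Generator objective for adversarial, keyframe-constrained in-betweening:
# reconstruction at the sparse keyframes + an adversarial term from a critic.
# Placeholder modules and weighting, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def generator_loss(generated, keyframes, key_idx, critic):
    """generated: (T, D) transition; keyframes: (K, D) poses at frames key_idx."""
    recon = F.l1_loss(generated[key_idx], keyframes)   # hit the sparse keyframes
    logits = critic(generated)                         # critic scores the window
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return recon + 0.1 * adv                           # weighting is an assumption

T, D = 30, 63
critic = nn.Sequential(nn.Flatten(start_dim=0), nn.Linear(T * D, 1))
gen = torch.randn(T, D, requires_grad=True)
loss = generator_loss(gen, torch.randn(2, D), torch.tensor([0, T - 1]), critic)
loss.backward()
```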

Dynamic Future Net: Diversified Human Motion Generation

Both qualitative and quantitative results show the superiority of Dynamic Future Net, a new deep learning model that explicitly addresses motion stochasticity by constructing a generative model with non-trivial capacity for modelling temporal stochasticity.

Generative Tweening: Long-term Inbetweening of 3D Human Motions

This work introduces a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions, conditioned on keyframe constraints.

Character controllers using motion VAEs

This work uses deep reinforcement learning to learn controllers that achieve goal-directed movements in data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs.
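
A minimal sketch of the Motion VAE idea as summarized above: an autoregressive decoder maps (previous pose, latent code) to the next pose, so a reinforcement-learned controller can steer the character purely by choosing the latent each frame. Architecture and sizes are illustrative.

```python
# Autoregressive Motion-VAE decoder step: the RL controller's only action
# is the latent z it feeds the decoder each frame. Illustrative sketch.
import torch
import torch.nn as nn

class MVAEDecoder(nn.Module):
    def __init__(self, pose_dim=63, z_dim=32, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + z_dim, hidden), nn.ELU(),
            nn.Linear(hidden, pose_dim))

    def forward(self, prev_pose, z):
        # Residual update: next pose = previous pose + decoded offset.
        return prev_pose + self.net(torch.cat([prev_pose, z], dim=-1))

decoder = MVAEDecoder()
pose = torch.zeros(1, 63)
for _ in range(60):                 # roll out one second at 60 fps
    z = torch.randn(1, 32)          # a trained RL policy would choose z here
    pose = decoder(pose, z)
```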

Continuous character control with low-dimensional embeddings

A technique that animates characters performing user-specified tasks with a probabilistic motion model trained on a small number of artist-provided animation clips; the model can discover new transitions, tractably precompute a control policy, and avoid low-quality poses.

Synthesis of Responsive Motion Using a Dynamic Model

A fully automatic method that learns a nonlinear probabilistic model of dynamic responses from very few perturbed walking sequences and is able to synthesize responses and recovery motions under new perturbations different from those in the training examples.

Single-Shot Motion Completion with Transformer

This work proposes a simple but effective method to solve multiple motion completion problems under a unified framework, achieving a new state-of-the-art accuracy under multiple evaluation settings.
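
To contrast single-shot completion with the autoregressive approaches above, here is a generic masked-modelling skeleton (an assumption, not the paper's exact model): missing frames are replaced by a learned mask token and the whole sequence is reconstructed in one Transformer pass.

```python
# Single-shot (non-autoregressive) motion completion: mask the unknown
# frames, reconstruct everything in one forward pass. Generic sketch.
import torch
import torch.nn as nn

pose_dim, T = 63, 60
proj_in = nn.Linear(pose_dim, 128)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=8, batch_first=True),
    num_layers=4)
proj_out = nn.Linear(128, pose_dim)
mask_token = nn.Parameter(torch.zeros(1, 1, 128))   # learned "missing" embedding

poses = torch.randn(1, T, pose_dim)                 # known + unknown frames
known = torch.zeros(1, T, 1)                        # 1 where a frame is given
known[:, :10] = 1                                   # e.g. 10 context frames...
known[:, -10:] = 1                                  # ...and 10 target frames
x = proj_in(poses) * known + mask_token * (1 - known)
completed = proj_out(encoder(x))                    # all frames in one pass
```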

Local motion phases for learning multi-contact character movements

A novel framework to learn fast and dynamic character interactions that involve multiple contacts between the body and an object, another character and the environment, from a rich, unstructured motion capture database is proposed.

A deep learning framework for character motion synthesis and editing

A framework to synthesize character movements based on high-level parameters, such that the produced movements respect the manifold of human motion; trained on a large motion capture dataset, it can produce smooth, high-quality motion sequences without any manual pre-processing of the training data.

Learned motion matching

This work combines the benefits of both approaches and, by breaking down the Motion Matching algorithm into its individual steps, shows how learned, scalable alternatives can be used to replace each operation in turn.
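
A hedged sketch of the decomposition this summary refers to: in the paper's terminology, a projector replaces the nearest-neighbour database search, a stepper replaces advancing the matched clip, and a decompressor replaces the pose lookup. The plain MLP stubs and sizes below are illustrative.

```python
# Learned Motion Matching structure: each classic operation swapped for a
# small network. MLP stubs and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(i, o, h=256):
    return nn.Sequential(nn.Linear(i, h), nn.ReLU(), nn.Linear(h, o))

feat_dim, latent, pose_dim = 27, 32, 63
projector = mlp(feat_dim, feat_dim + latent)          # replaces the database search
stepper = mlp(feat_dim + latent, feat_dim + latent)   # replaces advancing the clip
decompressor = mlp(feat_dim + latent, pose_dim)       # replaces the pose lookup

query = torch.randn(1, feat_dim)    # controller's desired features
state = projector(query)            # "search" once...
for _ in range(10):                 # ...then step and decode every frame
    state = stepper(state)
    pose = decompressor(state)
```

The appeal of this split is that memory no longer scales with the motion database: the three small networks amortize the search and playback that Motion Matching would otherwise do over raw clips.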