ControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters

@article{Yao2022ControlVAEML,
  title={ControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters},
  author={Heyuan Yao and Zhenhua Song and Baoquan Chen and Libin Liu},
  journal={ACM Trans. Graph.},
  year={2022},
  volume={41},
  number={6},
  pages={183:1--183:16}
}
In this paper, we introduce ControlVAE, a novel model-based framework for learning generative motion control policies based on variational autoencoders (VAE). Our framework can learn a rich and flexible latent representation of skills and a skill-conditioned generative control policy from a diverse set of unorganized motion sequences, which enables the generation of realistic human behaviors by sampling in the latent space and allows high-level control policies to reuse the learned skills to…
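The core mechanism the abstract describes — encoding motion into a latent skill variable, sampling it with the reparameterization trick, and decoding a skill-conditioned action — can be illustrated with a toy sketch. This is not the paper's architecture; the functions, dimensions, and linear/tanh layers below are hypothetical stand-ins for learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(state, motion):
    # Hypothetical toy encoder: maps state + target motion to the
    # parameters (mu, log_sigma) of a Gaussian over the latent skill z.
    h = np.tanh(np.concatenate([state, motion]))
    mu, log_sigma = h[:4], h[4:8]
    return mu, log_sigma

def sample_latent(mu, log_sigma):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    # which keeps sampling differentiable in a real VAE.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def decode(state, z):
    # Skill-conditioned policy (toy linear layer): action = pi(state, z).
    w = np.full((state.size + z.size, 2), 0.1)
    return np.concatenate([state, z]) @ w

state, motion = np.ones(4) * 0.1, np.ones(4) * 0.2
mu, log_sigma = encode(state, motion)
z = sample_latent(mu, log_sigma)          # sample a skill from the latent space
action = decode(state, z)                 # generate a control for the character
```

In the actual method the encoder and policy are neural networks trained jointly with a learned world model; the point here is only the sample-then-decode structure that lets high-level controllers reuse skills by operating on `z`.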

References

SHOWING 1-10 OF 70 REFERENCES

DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills

This work shows that well-known reinforcement learning methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals.

Neural probabilistic motor primitives for humanoid control

A motor architecture that has the general structure of an inverse model with a latent-variable bottleneck is proposed, and it is shown that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space.

CARL: Controllable Agent with Reinforcement Learning for Quadruped Locomotion

CARL is presented, a quadruped agent that can be controlled with high-level directives and react naturally to dynamic environments and is evaluated by measuring the agent's ability to follow user control and providing a visual analysis of the generated motion to show its effectiveness.

Learning predict-and-simulate policies from unorganized human motion data

A novel network-based algorithm is presented that learns control policies from unorganized, minimally labeled human motion data; the learned policies acquire a variety of dynamic motor skills from large, unorganized datasets and react to unexpected perturbations beyond the scope of the training data.

ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters

This work presents a large-scale data-driven framework for learning versatile and reusable skill embeddings for physically simulated characters, and shows that a single pre-trained model can be effectively applied to perform a diverse set of new tasks.

Guided Learning of Control Graphs for Physics-Based Characters

This work presents a method for learning robust feedback strategies around given motion capture clips as well as the transition paths between clips, and develops a synthesis framework for the development of robust controllers with a minimal amount of prior knowledge.

Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning

It is demonstrated that neural network dynamics models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits that accomplish various complex locomotion tasks.
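The combination this reference describes — a learned dynamics model queried inside model predictive control — is commonly realized with random-shooting MPC. The sketch below assumes a toy stand-in for the learned model; the function names and costs are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_model(s, a):
    # Stand-in for a learned neural dynamics model s' = f(s, a).
    return s + 0.1 * a

def mpc_random_shooting(s0, horizon=5, n_candidates=64, goal=None):
    # Sample random action sequences, roll each out through the learned
    # model, score against a goal-reaching cost, and keep the best.
    goal = np.zeros_like(s0) if goal is None else goal
    best_cost, best_first_action = np.inf, None
    for _ in range(n_candidates):
        s, cost = s0.copy(), 0.0
        actions = rng.uniform(-1.0, 1.0, size=(horizon, s0.shape[0]))
        for a in actions:
            s = dynamics_model(s, a)
            cost += np.sum((s - goal) ** 2)
        if cost < best_cost:
            best_cost, best_first_action = cost, actions[0]
    # Execute only the first action, then replan at the next step.
    return best_first_action
```

Replanning at every step is what gives MPC its robustness to model error, which is why a moderately accurate learned model can still produce stable gaits.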

MoGlow: Probabilistic and controllable motion synthesis using normalising flows

Data-driven modelling and synthesis of motion is an active research area with applications that include animation, games, and social robotics. This paper introduces a new class of probabilistic and controllable motion models based on normalising flows.

Online control of simulated humanoids using particle belief propagation

A novel, general-purpose Model-Predictive Control algorithm that combines multimodal, gradient-free sampling and a Markov Random Field factorization to effectively perform simultaneous path finding and smoothing in high-dimensional spaces is presented.

GANimator: Neural Motion Synthesis from a Single Sequence

GANimator is a generative model that learns to synthesize novel motions from a single, short motion sequence, enabling novel motion synthesis for a variety of skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more.
...