Corpus ID: 227239348

UniCon: Universal Neural Controller For Physics-based Character Motion

@article{Wang2020UniConUN,
  title={UniCon: Universal Neural Controller For Physics-based Character Motion},
  author={Tingwu Wang and Yunrong Guo and Maria Shugrina and Sanja Fidler},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.15119}
}
The field of physics-based animation is gaining importance due to the increasing demand for realism in video games and films, and has recently seen wide adoption of data-driven techniques, such as deep reinforcement learning (RL), which learn control from (human) demonstrations. While RL has shown impressive results at reproducing individual motions and interactive locomotion, existing methods are limited in their ability to generalize to new motions and their ability to compose a complex… 
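The RL-from-demonstration methods surveyed on this page typically score a policy by how closely the simulated character tracks a reference motion, using an exponentiated pose error as the reward (as popularized by DeepMimic). A minimal sketch of such a tracking reward; the function name and the error scale are illustrative choices, not any paper's exact weighting:

```python
import math

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    """DeepMimic-style tracking reward: exp of the negative squared
    pose error between the simulated and reference joint angles.
    `scale` controls how sharply reward decays with error (illustrative)."""
    err = sum((s - r) ** 2 for s, r in zip(sim_pose, ref_pose))
    return math.exp(-scale * err)

# Perfect tracking gives the maximum reward of 1.0; reward decays
# smoothly toward 0 as the simulated pose drifts from the reference.
r_perfect = imitation_reward([0.1, 0.2, -0.3], [0.1, 0.2, -0.3])
r_drifted = imitation_reward([0.5, 0.2, -0.3], [0.1, 0.2, -0.3])
```

In practice such a term is combined with velocity, end-effector, and task-objective terms, but the exponentiated-error form is what keeps the reward bounded in (0, 1] and dense enough for RL.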
Physics-based Human Motion Estimation and Synthesis from Videos
TLDR
This work proposes a framework for training generative models of physically plausible human motion directly from monocular RGB videos, which are much more widely available, and achieves both qualitatively and quantitatively significantly improved motion estimation, synthesis quality and physical plausibility.
Interactive Characters for Virtual Reality Stories
TLDR
This work reviews the main assumptions of this approach and recent progress in interactive character animation techniques that seem promising for realising this goal of narrative-focused interactive VR content.
Physics Engines as Cognitive Models of Intuitive Physical Reasoning
Many studies have claimed that human physical reasoning consists largely of running “physics engines in the head” in which the future trajectory of the physical system under consideration is computed
Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation
TLDR
The ability to estimate physically-plausible 3D human-object interactions using a single wearable camera and factoring in the 6DoF pose of objects in the scene is demonstrated for the first time.
SuperTrack: Motion Tracking for Physically Simulated Characters using Supervised Learning
  • 2021

References

SHOWING 1-10 OF 76 REFERENCES
DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills
TLDR
This work shows that well-known reinforcement learning methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals.
SFV: Reinforcement Learning of Physical Skills from Videos
TLDR
This paper proposes a method that enables physically simulated characters to learn skills from videos (SFV), based on deep pose estimation and deep reinforcement learning, that allows data-driven animation to leverage the abundance of publicly available video clips from the web, such as those from YouTube.
Character controllers using motion VAEs
TLDR
This work uses deep reinforcement learning to learn controllers that achieve goal-directed movements in data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs.
Guided Learning of Control Graphs for Physics-Based Characters
TLDR
This work presents a method for learning robust feedback strategies around given motion capture clips as well as the transition paths between clips, and develops a synthesis framework for the development of robust controllers with a minimal amount of prior knowledge.
Learning predict-and-simulate policies from unorganized human motion data
TLDR
A novel network-based algorithm that learns control policies from unorganized, minimally labeled human motion data, acquiring a variety of dynamic motor skills from large, unorganized datasets and reacting to unexpected perturbations beyond the scope of the training data.
Physics-based motion capture imitation with deep reinforcement learning
TLDR
A deep reinforcement learning method is introduced that learns to control articulated humanoid bodies to closely imitate given target motions in a physics simulator, and the proposed method is demonstrated to control the character to imitate a wide variety of motions.
Interactive character animation by learning multi-objective control
TLDR
An approach that learns to act from raw motion data for interactive character animation is presented, along with a new data augmentation method that allows the model to be learned even from a small to moderate amount of training data.
A scalable approach to control diverse behaviors for physically simulated characters
TLDR
A technique for learning controllers for a large set of heterogeneous behaviors by dividing a reference motion library into clusters of like motions and constructing experts, learned controllers that can reproduce a simulated version of the motions in each cluster.
Model Predictive Control with a Visuomotor System for Physics-based Character Animation
This article presents a Model Predictive Control framework with a visuomotor system that synthesizes eye and head movements coupled with physics-based full-body motions while placing visual attention
Neural state machine for character-scene interactions
TLDR
The Neural State Machine is proposed, a novel data-driven framework that guides characters to achieve goal-driven actions with precise scene interactions, and a control scheme is introduced that combines egocentric inference and goal-centric inference.