Corpus ID: 219573826

Dance Revolution: Long Sequence Dance Generation with Music via Curriculum Learning

@article{Huang2020DanceRL,
  title={Dance Revolution: Long Sequence Dance Generation with Music via Curriculum Learning},
  author={Ruozi Huang and Huang Hu and Wei Wu and Kei Sawada and Mi Zhang},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.06119}
}
Dancing to music has been an innate human ability since ancient times. In artificial intelligence research, however, synthesizing dance movements (complex human motion) from music is a challenging problem, which suffers from the high spatial-temporal complexity of modeling human motion dynamics. In addition, the consistency of dance and music in terms of style, rhythm and beat must also be taken into account. Existing works focus on short-term dance generation from music, e.g. less than…
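The curriculum learning in the title refers, at a high level, to easing a sequence-to-sequence motion decoder from teacher forcing (feeding ground-truth frames as decoder inputs) toward fully autoregressive generation (feeding back its own predictions) as training progresses, so that long sequences do not drift at inference time. The sketch below is a minimal, hypothetical illustration of such a schedule; the linear decay and the function names are assumptions, not the paper's exact formulation.

import random

def teacher_forcing_prob(step, total_steps, p_start=1.0, p_end=0.0):
    # Probability of feeding the ground-truth frame at this training step.
    # Decays linearly from full teacher forcing to fully autoregressive input
    # (the linear schedule itself is an illustrative assumption).
    frac = min(step / max(total_steps, 1), 1.0)
    return p_start + (p_end - p_start) * frac

def next_decoder_input(gt_frame, predicted_frame, step, total_steps):
    # Early in training, mostly ground truth; later, mostly the model's own output.
    if random.random() < teacher_forcing_prob(step, total_steps):
        return gt_frame
    return predicted_frame
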
Learn to Dance with AIST++: Music Conditioned 3D Dance Generation
TLDR
A transformer-based learning framework for 3D dance generation conditioned on music that combines a deep cross-modal transformer, which learns the correlation between music and dance motion well, with a full-attention mechanism and future-N supervision, which is essential for producing long-range, non-freezing motion.
Synthesizing Realistic Human Dance Motions Conditioned by Musical Data using Graph Convolutional Networks
  • João Pedro Moreira Ferreira, Renato Martins, E. R. Nascimento
  • Computer Science
  • Anais do XXXIV Concurso de Teses e Dissertações da SBC (CTD-SBC 2021)
  • 2021
TLDR
This work designs a novel method based on GCNs to tackle the problem of automatic dance generation from audio, using an adversarial learning scheme conditioned on the input music audio to create natural motions.
Training Physics-based Controllers for Articulated Characters with Deep Reinforcement Learning
In this thesis, two different applications are discussed for using machine learning techniques to train coordinated motion controllers for arbitrary characters in the absence of motion capture data. The…
Curriculum Learning: A Survey
TLDR
This survey shows how the limits of curriculum learning have been tackled in the literature, presents different curriculum learning instantiations for various machine learning tasks, and constructs, by hand, a multi-perspective taxonomy of curriculum learning approaches according to various classification criteria.

References

Showing 1-10 of 60 references
Weakly-Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation
TLDR
A weakly supervised deep recurrent method for real-time basic dance generation that takes the audio power spectrum as input, generates basic dance steps with low cross entropy, and maintains an F-measure score similar to that of a baseline dancer.
Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis
TLDR
This work introduces a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music, together with a constraint-based dynamic programming procedure that considers both music-to-motion matching quality and the visual smoothness of the resulting dance motion sequence.
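As a rough illustration of how such a constraint-based dynamic program can work, the sketch below assembles a dance from pre-segmented candidate motion clips by minimizing the sum of a per-segment music-to-motion matching cost and a pairwise transition-smoothness cost. The functions match_cost and transition_cost are hypothetical placeholders, and this Viterbi-style formulation is an assumption rather than the paper's exact procedure.

def select_clips(num_segments, candidates, match_cost, transition_cost):
    # Return the clip sequence minimizing total matching + transition cost.
    best = {c: match_cost(0, c) for c in candidates}   # best path cost ending in clip c
    back = [{} for _ in range(num_segments)]           # backpointers per segment
    for i in range(1, num_segments):
        new_best = {}
        for c in candidates:
            prev, cost = min(
                ((p, best[p] + transition_cost(p, c)) for p in candidates),
                key=lambda x: x[1],
            )
            new_best[c] = cost + match_cost(i, c)
            back[i][c] = prev
        best = new_best
    # Reconstruct the optimal clip sequence by following backpointers.
    last = min(best, key=best.get)
    path = [last]
    for i in range(num_segments - 1, 0, -1):
        last = back[i][last]
        path.append(last)
    return list(reversed(path))
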
Music similarity-based approach to generating dance motion sequence
TLDR
This paper proposes a novel approach to generating a sequence of dance motions that uses music similarity as a criterion to find appropriate motions for a new musical input, and evaluates the system’s performance through a user study.
Music Transformer
TLDR
It is demonstrated that a Transformer with the modified relative attention mechanism can generate minute-long compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and, in a seq2seq setup, generate accompaniments conditioned on melodies.
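The modified relative attention mentioned above biases the attention logits with learned embeddings of the query-key distance, which helps the model capture repeated, relative structure in music. The sketch below shows the general idea under that assumption; it is a naive version that materializes all pairwise relative embeddings, not the paper's memory-efficient "skewing" implementation.

import torch
import torch.nn.functional as F

def relative_attention(q, k, v, rel_emb):
    # q, k, v: (batch, seq_len, dim); rel_emb: (2*seq_len - 1, dim) learned embeddings.
    b, n, d = q.shape
    content_logits = q @ k.transpose(-2, -1)                # content-based scores (b, n, n)
    idx = torch.arange(n)
    rel_idx = idx[None, :] - idx[:, None] + (n - 1)         # offsets mapped to [0, 2n-2]
    rel = rel_emb[rel_idx]                                  # (n, n, dim) relative embeddings
    rel_logits = torch.einsum("bqd,qkd->bqk", q, rel)       # position-based scores (b, n, n)
    attn = F.softmax((content_logits + rel_logits) / d ** 0.5, dim=-1)
    return attn @ v
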
Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis
TLDR
To the authors' knowledge, this work is the first to demonstrate the ability to generate over 18,000 continuous frames (300 seconds) of new, complex human motion across different styles.
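Auto-conditioning, as described in that work, alternates during training between feeding the network ground-truth frames and its own generated frames on a fixed schedule, so the model learns to recover from its own prediction errors over very long rollouts. The sketch below is a minimal illustration for a generic frame-by-frame motion RNN; the block lengths and helper names are illustrative assumptions.

import torch

def autoconditioned_step(rnn, proj, gt_seq, gt_len=5, self_len=5):
    # gt_seq: (seq_len, batch, dim). rnn is e.g. an nn.LSTM, proj maps hidden -> frame.
    h = None
    preds = []
    inp = gt_seq[0]
    for t in range(gt_seq.size(0) - 1):
        out, h = rnn(inp.unsqueeze(0), h)
        pred = proj(out.squeeze(0))                       # next-frame prediction
        preds.append(pred)
        # Alternate: feed ground truth for gt_len steps, then the model's own output.
        in_gt_block = (t % (gt_len + self_len)) < gt_len
        inp = gt_seq[t + 1] if in_gt_block else pred
    return torch.stack(preds)                             # compare against gt_seq[1:]
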
Convolutional Sequence Generation for Skeleton-Based Action Synthesis
TLDR
The results show that the proposed framework, named Convolutional Sequence Generation Network (CSGN), can produce long action sequences that are coherent across time steps and among body parts.
Learning Human Motion Models for Long-Term Predictions
TLDR
The Dropout Autoencoder LSTM (DAELSTM), a new architecture for learning predictive spatio-temporal motion models from data alone, is capable of synthesizing natural-looking motion sequences over long time horizons without catastrophic drift or motion degradation.
Everybody Dance Now
This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes…
Convolutional Sequence to Sequence Model for Human Dynamics
TLDR
This work presents a novel approach to human motion modeling based on convolutional neural networks (CNNs), which captures both invariant and dynamic information of human motion and results in more accurate predictions.
Attention-Based Models for Speech Recognition
TLDR
The attention mechanism is extended with features needed for speech recognition, and a novel, generic method of adding location-awareness to the attention mechanism is proposed to alleviate the issue of a high phoneme error rate.
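Location-awareness here means conditioning the attention scores at each decoding step on features extracted, via a 1-D convolution, from the previous step's attention weights, which discourages the decoder from repeatedly attending to the same encoder frames. The sketch below is a minimal additive-attention variant with that extension; the layer sizes are illustrative placeholders rather than the paper's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationAwareAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim, conv_channels=10, kernel_size=15):
        super().__init__()
        self.W = nn.Linear(dec_dim, attn_dim)
        self.V = nn.Linear(enc_dim, attn_dim)
        self.U = nn.Linear(conv_channels, attn_dim)
        self.conv = nn.Conv1d(1, conv_channels, kernel_size, padding=kernel_size // 2)
        self.w = nn.Linear(attn_dim, 1)

    def forward(self, dec_state, enc_out, prev_align):
        # dec_state: (batch, dec_dim); enc_out: (batch, T, enc_dim); prev_align: (batch, T)
        loc = self.conv(prev_align.unsqueeze(1)).transpose(1, 2)      # location features (batch, T, channels)
        scores = self.w(torch.tanh(
            self.W(dec_state).unsqueeze(1) + self.V(enc_out) + self.U(loc)
        )).squeeze(-1)                                                # (batch, T)
        align = F.softmax(scores, dim=-1)
        context = torch.bmm(align.unsqueeze(1), enc_out).squeeze(1)   # weighted encoder summary
        return context, align
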