Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure

Andreas Aristidou, Anastasios Yiannakidis, Kfir Aberman, Daniel Cohen-Or, Ariel Shamir, and Yiorgos Chrysanthou. IEEE Transactions on Visualization and Computer Graphics.
Synthesizing human motion with a global structure, such as a choreography, is a challenging task. Existing methods tend to concentrate on locally smooth pose transitions and neglect the global context or theme of the motion. In this work, we present a music-driven motion synthesis framework that generates long-term sequences of human motion that are synchronized with the input beats and jointly form a global structure that respects a specific dance genre. In addition, our framework enables…

Music-driven Dance Regeneration with Controllable Key Pose Constraints

A novel framework for music-driven dance motion synthesis with controllable key pose constraints is proposed; it involves two single-modal transformer encoders for music and initial seed-motion embedding, and a cross-modal transformer decoder for motion generation controlled by key pose constraints.
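The cross-modal decoder described above hinges on attention from motion queries to embedded music features. Below is a minimal NumPy sketch of that single building block; all dimensions, variable names, and the random features are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: motion queries attend to music features."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (T_motion, T_music)
    return softmax(scores, axis=-1) @ values  # (T_motion, d)

rng = np.random.default_rng(0)
T_music, T_motion, d = 32, 16, 8
music_feats = rng.normal(size=(T_music, d))      # embedded music frames
motion_queries = rng.normal(size=(T_motion, d))  # seed-motion embedding
out = cross_attention(motion_queries, music_feats, music_feats)
print(out.shape)  # (16, 8)
```

In a full decoder, key pose constraints could then be enforced, for example, by fixing the decoder output at the constrained frames; how the paper injects them is not detailed in the summary above.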

Let’s All Dance: Enhancing Amateur Dance Motions

A model that brings professional quality to amateur dance movements is presented, improving movement quality in both the spatial and temporal domains.

Rhythmic Gesticulator

A novel co-speech gesture synthesis method that achieves convincing results in both rhythm and semantics; it builds a correspondence between hierarchical embeddings of the speech and the motion, resulting in rhythm- and semantics-aware gesture synthesis.

Human Motion Diffusion Model

Motion Diffusion Model (MDM), a carefully adapted classifier-free diffusion-based generative model for the human motion domain, is introduced; the transformer-based approach enables different modes of conditioning and different generation tasks.
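The "classifier-free" part of MDM refers to classifier-free guidance, which blends a conditional and an unconditional denoising prediction at sampling time. A minimal sketch of that blending step (the array shapes and guidance weight are arbitrary choices for illustration, not MDM's settings):

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, w):
    # Blend the model's conditional and unconditional noise predictions;
    # w = 0 recovers the purely conditional prediction.
    return (1.0 + w) * eps_cond - w * eps_uncond

rng = np.random.default_rng(0)
eps_cond = rng.normal(size=(24, 3))    # noise predicted with the condition present
eps_uncond = rng.normal(size=(24, 3))  # noise predicted with the condition dropped
guided = classifier_free_guidance(eps_cond, eps_uncond, w=2.5)
print(guided.shape)  # (24, 3)
```

Larger `w` pushes samples toward the condition at the cost of diversity; the unconditional branch is obtained by randomly dropping the condition during training.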

NEURAL MARIONETTE: A Transformer-based Multi-action Human Motion Synthesis System

A neural-network-based system for long-term, multi-action human motion synthesis that can produce high-quality, meaningful motions with smooth transitions from simple user input: a sequence of action tags with expected action durations, and optionally a hand-drawn movement trajectory if the user specifies one.

MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis

MoFusion is introduced, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text).

GANimator: Neural Motion Synthesis from a Single Sequence

GANimator is a generative model that learns to synthesize novel motions from a single, short motion sequence, enabling novel motion synthesis for a variety of skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more.

MotionCLIP: Exposing Human Motion Generation to CLIP Space

Although CLIP has never seen the motion domain, MotionCLIP offers unprecedented text-to-motion abilities, allowing out-of-domain actions, disentangled editing, and abstract language specification.

MoDi: Unconditional Motion Synthesis from Diverse Data

This work presents MoDi – a generative model trained in an unsupervised setting from an extremely diverse, unstructured and unlabeled dataset that yields a well-behaved and highly structured latent space, which can be semantically clustered, constituting a strong motion prior that facilitates various applications including semantic editing and crowd simulation.

Pose Representations for Deep Skeletal Animation

This work addresses the fundamental problem of developing a robust pose representation for motion, suitable for deep skeletal animation, one that can better constrain poses and faithfully capture nuances correlated with skeletal characteristics.

ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit

A two-stage music-to-dance synthesis framework, ChoreoNet, imitates the human choreography procedure: it first devises a CAU prediction model to learn the mapping between music and CAU sequences, then devises a spatial-temporal inpainting model to convert the CAU sequence into continuous dance motions.

Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis

This work introduces a novel method for synthesizing dance motions that follow the emotions and content of a piece of music, along with a constraint-based dynamic programming procedure that considers both music-to-motion matching quality and the visual smoothness of the resulting dance motion sequence.

Learning to Generate Diverse Dance Motions with Transformer

This work introduces a complete system for dance motion synthesis, which can generate complex and highly diverse dance sequences given an input music sequence, and presents a novel two-stream motion transformer generative model that can generate motion sequences with high flexibility.

DanceDJ: A 3D Dance Animation Authoring System for Live Performance

DanceDJ is a proposed system that allows DJs to transfer their skills from music control to dance control using a similar hardware setup; it maps different motion-control functions onto the DJ controller and visualizes the timing of natural connection points, so that the DJ can effectively govern the synthesized dance motion.

Dance Revolution: Long Sequence Dance Generation with Music via Curriculum Learning

A novel seq2seq architecture for long-sequence dance generation with music is proposed, consisting of a transformer-based music encoder and a recurrent dance decoder; it significantly outperforms existing methods on both automatic metrics and human evaluation.

Dancing‐to‐Music Character Animation

A new approach for synthesizing dance performances matched to input music, based on the emotional aspects of dance performance, which creates a performance as if a character were listening and expressively dancing to the music.

Automated choreography synthesis using a Gaussian process leveraging consumer-generated dance motions

A probabilistic model that maps beat structures to dance movements using a Gaussian process, trained on a large amount of consumer-generated dance motion obtained from the web, is proposed.

Automatic Choreography Generation with Convolutional Encoder-decoder Network

The results show that the proposed model is able to generate musically meaningful and natural dance movements given an unheard song; quantitative evaluation reveals that the network creates movements that correlate with the beat of the music.

Generative Autoregressive Networks for 3D Dancing Move Synthesis From Music

Experimental results of dance sequences generated from various songs show how the proposed method produces human-like dancing moves for a given piece of music, demonstrating that the framework can make a robot dance just by listening to music.

Dance with Melody: An LSTM-autoencoder Approach to Music-oriented Dance Synthesis

A music-oriented dance choreography synthesis method using a long short-term memory (LSTM) autoencoder model to extract a mapping between acoustic and motion features, which proved effective and efficient in synthesizing valid choreographies that are also capable of musical expression.
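The encoder half of such an LSTM-autoencoder compresses a sequence of acoustic frames into a fixed-size embedding. Below is a minimal NumPy sketch of that idea; the fused gate layout, random weights, and feature dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        # One fused weight matrix for the input, forget, cell, and output gates.
        self.W = rng.normal(scale=0.1, size=(4 * d_hid, d_in + d_hid))
        self.b = np.zeros(4 * d_hid)
        self.d_hid = d_hid

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # updated cell state
        h = o * np.tanh(c)           # updated hidden state
        return h, c

def encode(cell, frames):
    h = c = np.zeros(cell.d_hid)
    for x in frames:
        h, c = cell.step(x, h, c)
    return h  # fixed-size embedding of the acoustic sequence

rng = np.random.default_rng(1)
acoustic = rng.normal(size=(20, 12))  # 20 frames of 12-dim acoustic features
enc = LSTMCell(d_in=12, d_hid=16)
z = encode(enc, acoustic)
print(z.shape)  # (16,)
```

A mirrored LSTM decoder would then unroll this embedding into a sequence of motion features; training both halves jointly is what ties the acoustic and motion domains together.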