Abstract
Generating dance that temporally and aesthetically matches a piece of music is challenging in three respects. First, the generated motion should be beat-aligned with local musical features. Second, the global aesthetic styles of the motion and the music should match. Third, the generated motion should be diverse and non-self-repeating. To address these challenges, we propose ReChoreoNet, which re-choreographs high-quality dance motion for a given piece of music. A data-driven learning strategy efficiently correlates the temporal connections between music and motion in a progressively learned cross-modality embedding space. The beat-aligned content motion is then used as autoregressive context and as a control signal for a normalizing-flow model, which transfers the style of a prototype motion to the final generated dance. In addition, we present an aesthetically labelled music-dance repertoire (MDR) both for efficient learning of the cross-modality embedding and for understanding the aesthetic connections between music and motion. We demonstrate that our repertoire-based framework is robustly extensible in both content and style. Quantitative and qualitative experiments validate the effectiveness of the proposed model.
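The abstract's core generative component is a normalizing flow whose transformation is conditioned on the beat-aligned content motion. As a rough illustration of that general idea only (not the authors' implementation, which the abstract does not detail), the sketch below shows a single conditional affine coupling layer in PyTorch; the class name, layer sizes, and parameter names (`pose_dim`, `cond_dim`, `hidden`) are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One affine coupling step of a conditional normalizing flow.

    Half of the pose vector is transformed with a scale/shift predicted
    from the other half concatenated with a conditioning vector (here,
    hypothetically, features of the beat-aligned content motion).
    """

    def __init__(self, pose_dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.half = pose_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (pose_dim - self.half)),
        )

    def forward(self, x, cond):
        # Split the pose; predict scale/shift from the untouched half + condition.
        x_a, x_b = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([x_a, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)          # bound the scales for stability
        y_b = x_b * torch.exp(log_s) + t   # invertible affine transform
        log_det = log_s.sum(dim=-1)        # contribution to log |det J|
        return torch.cat([x_a, y_b], dim=-1), log_det

    def inverse(self, y, cond):
        # Exact inverse of forward(), used when sampling poses from noise.
        y_a, y_b = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(torch.cat([y_a, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        x_b = (y_b - t) * torch.exp(-log_s)
        return torch.cat([y_a, x_b], dim=-1)


# Usage sketch: invertibility check with a batch of pose frames.
layer = ConditionalAffineCoupling(pose_dim=72, cond_dim=32)
x = torch.randn(8, 72)   # hypothetical pose vectors
c = torch.randn(8, 32)   # hypothetical content-motion condition
y, log_det = layer(x, c)
assert torch.allclose(layer.inverse(y, c), x, atol=1e-5)
```

In a flow of this kind, generation amounts to drawing a latent vector from a standard Gaussian and applying the inverse pass frame by frame, with the condition supplying the autoregressive context and control signal described in the abstract.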
| Original language | English |
|---|---|
| Pages (from-to) | 771-781 |
| Number of pages | 11 |
| Journal | Machine Intelligence Research |
| Volume | 21 |
| Issue number | 4 |
| Early online date | 29 May 2024 |
| DOIs | |
| Publication status | Published - Aug 2024 |
User-Defined Keywords
- Generative model
- Cross-modality learning
- Normalizing flow
- Style transfer
- Tempo synchronization