Abstract
A number of machine learning techniques have transitioned from computer science test cases into viable artistic tools over the past five years. Software such as RunwayML has aggregated machine learning tools into competitive alternatives to traditional video editing techniques, and NVIDIA’s StyleGAN2 (2020) is now a popular way for artists to explore the aesthetic dimension of machine learning. However, there remains a gap between the techniques developed by computer scientists and applications that are useful and interesting for contemporary creative practitioners, often because the datasets on which these models are trained are expressively limited or somewhat clichéd, or because the implementations have not been converted from test cases into accessible tools. This chapter describes a collaborative project between artists and computer scientists which sought to produce a machine learning model that could predict the movements of a dancer as she improvised with a violin player during a live performance. The challenge in this project was not only to produce a tool capable of performing this function but also to present it to an audience in a way in which the significance of the innovation could be appreciated on an artistic, cultural, and emotional level.

Transflower (Valle-Pérez et al. 2021) is an autoregressive dance generation system trained on the AIST dataset, which comprises simple dance movements paired with standardised samples of pop music, such as hip-hop and disco. When we first examined Transflower and the AIST dataset, the system’s inability to generate interesting outputs became evident. Neither the generated motion nor the paired music samples appealed on a cultural level, seeming clinical, clichéd and devoid of expressive content. In this chapter, we describe how our interdisciplinary team developed a new dataset, known as the SudheeSet (named after our dancer Sudhee Liao), combining contemporary dance motion capture data with violin music. The SudheeSet provided an entirely new basis on which to retrain the Transflower model into what we call the TransSudhee model. The TransSudhee model formed the basis of our choreography for the 2021 Descendent performance, which presented audiences with a form of digital doubling that approached a Turing test, in which the performative agency of the human dancer and her machine learning counterpart was contrasted, blurred and inverted. Inspired by how NVIDIA’s StyleGAN2 images of ‘non-existent humans’ articulated a new form of cultural agency, Descendent and the TransSudhee system sought to convert the potential of the Transflower system into an artistic tool that could allow audiences to appreciate the implications of machine learning on a cultural and aesthetic level.

This chapter will explore questions we encountered while working with machine learning as a live performance tool in the Descendent project. To what degree does dancing with an avatar trained on a dataset of your own movement facilitate a form of self-interaction or technological embodiment? What sorts of technological feedback loops can we create when a dancer and a violinist work dynamically with a generative machine learning tool on stage, and to what degree can the performers and the audience anthropomorphise the synthesised motion that the system produces?
In addition to these performance-based questions, we will show how the pragmatic utilisation of a machine learning model as a performance tool revealed a number of new perspectives on the tool itself, which led to new research directions in computer science and motion synthesis. These questions and their answers were only revealed through the ability of artists and computer scientists to collaborate and to mutually test each other’s assumptions using their unique methodologies.
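For readers unfamiliar with how an autoregressive motion model of this kind operates, the sketch below illustrates the general pattern the abstract refers to: at each time step the model predicts the dancer’s next pose from a window of recent poses and the corresponding music features, and the predicted pose is fed back in as input for the following step. The class name, feature dimensions, and the simple feed-forward network here are illustrative assumptions for exposition only; they are not the actual Transflower architecture (which uses multimodal attention and normalising flows) or the TransSudhee training code.

```python
import torch
import torch.nn as nn


class AutoregressiveMotionModel(nn.Module):
    """Illustrative stand-in for a music-conditioned motion model.

    Maps a window of recent poses plus music features to the next pose.
    Dimensions are hypothetical placeholders, not Transflower's.
    """

    def __init__(self, pose_dim=63, music_dim=35, context_frames=120):
        super().__init__()
        in_dim = context_frames * (pose_dim + music_dim)
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512),
            nn.ReLU(),
            nn.Linear(512, pose_dim),
        )

    def forward(self, pose_context, music_context):
        # pose_context:  (batch, context_frames, pose_dim)
        # music_context: (batch, context_frames, music_dim)
        x = torch.cat([pose_context, music_context], dim=-1)
        return self.net(x.flatten(start_dim=1))  # next pose: (batch, pose_dim)


def generate(model, seed_poses, music, context_frames=120):
    """Roll the model forward frame by frame (autoregressive generation).

    seed_poses must contain at least `context_frames` frames; each predicted
    pose is appended and becomes part of the context for the next step.
    """
    poses = seed_poses.clone()            # (1, seed_frames, pose_dim)
    total_frames = music.shape[1]
    for t in range(poses.shape[1], total_frames):
        pose_ctx = poses[:, t - context_frames:t]
        music_ctx = music[:, t - context_frames:t]
        next_pose = model(pose_ctx, music_ctx)
        poses = torch.cat([poses, next_pose.unsqueeze(1)], dim=1)
    return poses                          # (1, total_frames, pose_dim)


# Hypothetical usage: seed with a neutral pose, condition on audio features.
model = AutoregressiveMotionModel()
seed = torch.zeros(1, 120, 63)            # 120 frames of a starting pose
music = torch.randn(1, 600, 35)           # 600 frames of music features
motion = generate(model, seed, music)     # generated motion: (1, 600, 63)
```

In the terms used above, retraining on the SudheeSet amounts to replacing the AIST pose and music pairs with contemporary dance motion capture paired with violin recordings, leaving this generation loop conceptually unchanged.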
Original language | English |
---|---|
Title of host publication | Choreomata: Performance and Performativity after AI |
Editors | Marek Poliks, Roberto Alonso Trillo |
Publisher | CRC Press |
Edition | 1st |
ISBN (Print) | 9781032319919, 9781032319988 |
Publication status | Accepted/In press - 25 Jul 2023 |