Abstract
A performance of Archon featuring violinist Roberto Alonso Trillo and Marek Poliks at the Osage Gallery in Hong Kong. Archon is an open-source interface designed for Demiurge, a generative performance ecosystem powered by machine learning (developed in collaboration with Roberto Alonso Trillo). It can be used with any database of lossless audio stored on Google Drive.
Demiurge and Archon were built with the belief that the music of the future will take post-work as a point of departure, supplanting instruments with style transfer, sample library generation, and pattern proliferation, replacing the studio itself with readymade conformity-driven operations. Musical activities will consist of higher-level curatorial actions at the level of genre or mood or intensity. In liberating music from musicians, one both liberates its future from anthropocentric bias and accelerates its death-drive. Demiurge, Hydra, and Archon go nowhere near accomplishing this vision, but are nevertheless committed to its realization.
Archon's function is to mediate between human and database, a neglected instrumentality. Unlike existing work in concatenative synthesis, Archon is both real-time (GPU-accelerated) and behaviorally adaptive to the incoming audio signal.
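As a rough illustration of what behaviorally adaptive concatenative synthesis involves, the sketch below analyses incoming audio grain by grain and plays back the corpus grain whose feature vector is nearest to the input. This is a minimal CPU-only sketch under stated assumptions, not Archon's implementation: the sample rate, grain size, feature set (RMS and spectral centroid), and nearest-neighbour matching are placeholders, and Archon's GPU acceleration and Google Drive corpus handling are not represented.

```python
# Illustrative sketch only: generic feature-matching concatenative synthesis.
# All constants and feature choices below are assumptions, not Archon's design.

import numpy as np

SR = 44100      # assumed sample rate
GRAIN = 2048    # assumed grain length in samples


def features(frame: np.ndarray) -> np.ndarray:
    """Two cheap descriptors per grain: RMS loudness and spectral centroid."""
    rms = np.sqrt(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SR)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)
    return np.array([rms, centroid / (SR / 2)])  # normalise centroid to 0..1


def analyse_corpus(corpus_audio: np.ndarray):
    """Slice a mono corpus into grains and precompute their feature vectors."""
    n = len(corpus_audio) // GRAIN
    grains = corpus_audio[: n * GRAIN].reshape(n, GRAIN)
    feats = np.stack([features(g) for g in grains])
    return grains, feats


def respond(live_input: np.ndarray, grains: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """For each incoming grain, emit the corpus grain with the nearest feature vector."""
    out = []
    for start in range(0, len(live_input) - GRAIN + 1, GRAIN):
        target = features(live_input[start:start + GRAIN])
        idx = int(np.argmin(np.linalg.norm(feats - target, axis=1)))
        out.append(grains[idx])
    return np.concatenate(out) if out else np.zeros(0)
```

In a real-time setting the `respond` loop would run per audio buffer and the brute-force distance search would be replaced by an indexed or GPU-parallel lookup; here it simply demonstrates the database-mediation idea in batch form.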
| Field | Value |
| --- | --- |
| Original language | English |
| Place of Publication | Hong Kong |
| Publisher | Osage Gallery |
| Media of output | Other |
| Publication status | Published - 24 Nov 2022 |
Scopus Subject Areas
- Music
- Visual Arts and Performing Arts
- Artificial Intelligence