TY - GEN
T1 - Reflexive loopers for solo musical improvisation
AU - Pachet, François
AU - Roy, Pierre
AU - Moreira, Julian
AU - D'Inverno, Mark
N1 - This work was partially funded by the ERC FlowMachines grant. We thank Ed Jones, Jeff Suzda and Fiammetta Ghedini for their fruitful comments.
Publisher copyright:
© 2013 ACM
PY - 2013/4/27
Y1 - 2013/4/27
N2 - Loop pedals are real-time samplers that play back audio previously played by a musician. Such pedals are routinely used for music practice or outdoor "busking". However, loop pedals always play back the same material, which can make performances monotonous and boring both to the musician and the audience, preventing their widespread uptake in professional concerts. In response, we propose a new approach to loop pedals that addresses this issue, based on an analytical multi-modal representation of the audio input. Instead of simply playing back prerecorded audio, our system enables real-time generation of an audio accompaniment that reacts to what is currently being performed by the musician. By automatically combining different modes of performance - e.g. bass line, chords, solo - from the musician and the system, solo musicians can perform duets or trios with themselves, without engendering the so-called canned (boringly repetitive and unresponsive) music effect of loop pedals. We describe the technology, based on supervised classification and concatenative synthesis, and then illustrate our approach on solo guitar performances of jazz standards. We claim this approach opens up new avenues for concert performance.
AB - Loop pedals are real-time samplers that play back audio previously played by a musician. Such pedals are routinely used for music practice or outdoor "busking". However, loop pedals always play back the same material, which can make performances monotonous and boring both to the musician and the audience, preventing their widespread uptake in professional concerts. In response, we propose a new approach to loop pedals that addresses this issue, based on an analytical multi-modal representation of the audio input. Instead of simply playing back prerecorded audio, our system enables real-time generation of an audio accompaniment that reacts to what is currently being performed by the musician. By automatically combining different modes of performance - e.g. bass line, chords, solo - from the musician and the system, solo musicians can perform duets or trios with themselves, without engendering the so-called canned (boringly repetitive and unresponsive) music effect of loop pedals. We describe the technology, based on supervised classification and concatenative synthesis, and then illustrate our approach on solo guitar performances of jazz standards. We claim this approach opens up new avenues for concert performance.
KW - Classification
KW - Loop pedals
KW - Music interaction
KW - Synthesis
UR - http://www.scopus.com/inward/record.url?scp=84877973992&partnerID=8YFLogxK
U2 - 10.1145/2470654.2481303
DO - 10.1145/2470654.2481303
M3 - Conference proceeding
AN - SCOPUS:84877973992
SN - 9781450318990
T3 - Conference on Human Factors in Computing Systems - Proceedings
SP - 2205
EP - 2208
BT - CHI 2013: Changing Perspectives, Conference Proceedings - The 31st Annual CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery (ACM)
T2 - 31st Annual CHI Conference on Human Factors in Computing Systems: Changing Perspectives, CHI 2013
Y2 - 27 April 2013 through 2 May 2013
ER -