
Daim™: AIAI ^^

Music submission for AIMC 2024

Published on Aug 29, 2024

Title

Daim™: AIAI ^^

https://aimc2024.pubpub.org/pub/d9jivdh6/draft?access=9307mxn9

Project Description

Daim™ is a transdisciplinary music project initiated in 2016 by artist-researchers Axel Chemla–Romeu-Santos, Kevin Raharolahy and Hugo Scurto, blending electronic music and sound art practices through post-internet memeification and maximalist aesthetics. Our music productions draw from beat-based, hyperpop and experimental music, but also from video games, doomscrolling and advertising. Our live performances, featured at Ars Electronica 24h Rave (2021) and ACIDS Workshop Gamma (2023), mix the codes of clubbing and improvised music, immersing audiences in musical cut-ups of miscellaneous genres and memetic audio/visuals reminiscent of Internet aesthetics.

Our recent work has focused on interconnecting AI music with sound art practice by steering AI music tools outside their engineered aesthetics. To this end, we practiced with both prompt-based AI audio generators from the industry and custom deep generative models for real-time interaction. On the one hand, we incorporated raw and network-bended generations from MusicGen, Whisper and StableAudio, to make audible the sounds materially produced by these platforms. On the other hand, we trained IRCAM’s vschaos2 and RAVE models on Daim™’s full discography to create a “sonic doppelgänger” of our collective practice, which we encounter during live performances, either through latent improvisation with gamepad controllers or through audio reconstruction mixing with a DJ booth. Through this palette of sonic practices, we invite the audience to a bodily experience of AI music, one that questions our expectations about what is worth listening to, and offering as performance, in AI music.
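As an illustrative sketch (not Daim™’s actual patch), latent improvisation with a gamepad can be understood as mapping controller axes onto a few dimensions of a latent vector, smoothing the result so gestures glide rather than click, and feeding it to a pretrained decoder such as a torch.jit-exported RAVE model. The latent dimension, scaling factor and smoothing constant below are our own assumptions:

```python
import numpy as np

def axes_to_latent(axes, latent_dim=8, scale=3.0):
    """Map gamepad axis values in [-1, 1] onto the first few
    dimensions of a latent vector; the remaining dims stay at 0.
    latent_dim and scale are illustrative, not model-specific."""
    z = np.zeros(latent_dim)
    n = min(len(axes), latent_dim)
    z[:n] = np.clip(np.asarray(axes[:n], dtype=float), -1.0, 1.0) * scale
    return z

class LatentSmoother:
    """One-pole low-pass filter on the latent vector, so jumps
    from the gamepad glide (alpha closer to 1 = slower glide)."""
    def __init__(self, latent_dim=8, alpha=0.9):
        self.state = np.zeros(latent_dim)
        self.alpha = alpha

    def step(self, target):
        self.state = self.alpha * self.state + (1 - self.alpha) * target
        return self.state

# In a real setup, each smoothed vector would be decoded to audio,
# e.g. by a torch.jit-scripted RAVE model (hypothetical call):
#   audio = model.decode(torch.from_numpy(z)[None, :, None].float())
target = axes_to_latent([0.5, -1.0])   # two gamepad sticks
smoother = LatentSmoother()
z = smoother.step(target)              # smoothed latent frame
```

The one-pole filter is a common choice for control-rate smoothing because it needs no buffer and its single coefficient directly trades responsiveness against glide time.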

For AIMC 2024, we propose AIAI ^^, a length-adaptable reworking of our most recent music performance at ACIDS Workshop Gamma (see video documentation). We envision performing it during the club night at the Old Fire Station. We may complement our music performance with an original VJset, should the space provide corresponding resources (see technical requirements).

Type of submission

Performance 3: a club night at the Old Fire Station.

Technical/Stage Requirements

Our performance setup consists of laptops and controllers on a table, with two performers sitting or standing behind it, and an optional projection screen.

Brought by Daim™: 

  • Laptops

  • Audio card

  • DJ booth

  • Gamepad controllers

Required from AIMC 2024: 

  • Sound system (stereo or more; subwoofers very welcome)

  • 2 monitor speakers (1 is acceptable)

  • Video projector & projection screen (optional)

Program Notes

Daim™ is a transdisciplinary music project initiated in 2016 by artist-researchers Axel Chemla–Romeu-Santos, Kevin Raharolahy and Hugo Scurto, interconnecting electronic music and sound art practices through post-internet memeification and maximalist aesthetics. For AIMC 2024, they practiced with prompt-based AI audio generators from GAFAM and custom deep audio generative models, hijacking them through network bending and latent gamepad improvisation. The resulting audiovisual performance, AIAI ^^, invites the audience to a bodily experience of AI music that questions our expectations about what is worth listening to in AI music, by summoning Internet and beat-based aesthetics.

Media

Daim™ @ ACIDS Workshop Gamma (Dec 2023).

Promotional artwork for Daim™ (generated with AI).

Ethics Statement

Our music submission makes use of prompt-based AI audio generators from the industry. As such, it could raise ethical issues related to datasets and copyright. We believe that our network bending practice with these AI audio generators partly counterbalances this societal impact, by making audible the sounds always already produced by these computational platforms, rather than seeking to realistically reproduce existing music genres.

Also, our music submission makes use of deep audio generative models from IRCAM. Since we consented to use our full discography as the training dataset, the resulting custom generative models may not raise ethical issues related to copyright. Yet, we acknowledge that training these generative models had a certain environmental impact. We relied on a private infrastructure to conduct these trainings, which has a carbon efficiency of 0.432 kgCO2eq/kWh. A cumulative total of 1080 hours of computation was performed on GPU hardware of type Titan V (TDP of 250 W). Total emissions are estimated at 116.64 kgCO2eq, of which 0 percent was directly offset. These estimations were conducted using the Machine Learning Impact calculator presented in Lacoste et al., 2019.
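The emissions figure above follows directly from the stated parameters (GPU hours × TDP × carbon efficiency, the same multiplication the Machine Learning Impact calculator performs); as a quick check using only the numbers given in this statement:

```python
hours = 1080          # cumulative GPU hours (Titan V)
tdp_kw = 0.250        # TDP of 250 W, expressed in kW
carbon_eff = 0.432    # kgCO2eq per kWh of the private infrastructure

energy_kwh = hours * tdp_kw            # 270.0 kWh drawn at full TDP
emissions = energy_kwh * carbon_eff    # 116.64 kgCO2eq, as reported
print(round(emissions, 2))
```

Note that this assumes the GPU runs at its full TDP for the whole duration, an upper-bound convention the calculator also uses.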

Lastly, our music submission interconnects AI music with Internet and clubbing aesthetics through its sonic palette and its use of gamepad controllers. We suggest that these experiments may promote socially-inclusive practices of AI music, by broadening the perspectives from which AI music can be listened to, while putting bodies back at the heart of AI performance and audience reception.

Acknowledgements

Axel, Kevin and Hugo would like to thank IRCAM and the ACIDS & ISMM research groups for their technical support, as well as Sam Wahl, Minh Boutin, Adèle Pécout and the w.lfg.ng collective for their musical contributions.
