
>> genesis - v2 <<

Published on Aug 29, 2024

License: Creative Commons Attribution 4.0 International License (CC-BY 4.0)

Title

>> genesis - v2 <<

https://aimc2024.pubpub.org/pub/f98kp7wn/draft?access=gmkecgvc

Project Description

>> genesis - v2 << is a second attempt at using audio generative neural networks without resorting to any external data, amplifying the algorithms' own materiality towards cybernetic chaos rather than human imitation.

In the first version of this performance, I developed a novel framework for training neural networks, adapted from my previous work on divergence maximization [1] and combined with my work on network bending, which I applied to live audio generation. To this end, I leveraged a set of eight recurrent agents with various architectures, each trained individually to produce the most diverse output possible. Training is carried out with separate discriminators that drive these agents to maximize divergence across different sets of inputs through diverse metrics. During the performance, I play with these agents through both their inputs and network bending techniques that directly alter their inner parameters. The unpredictability of this interaction forces me to give up direct control in favor of focused listening to each model's behavior, enriching the relation between the eco-system and myself as a performer. In this way, neural audio synthesis is not used to reproduce existing sounds but to overdrive the networks' inner signal flows through training, generating sounds where human taste and selection only occur at performance time and yielding a truly balanced co-interaction with artificial intelligence.
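To make this setup concrete, here is a minimal sketch, assuming a PyTorch implementation; all names and sizes are mine, not the project's. The actual objectives follow the divergence-maximization framework of [1]; for illustration only, the separate discriminators are replaced here by a plain pairwise spectral-distance term that each agent maximizes against its peers.

```python
# Minimal sketch (PyTorch assumed, names hypothetical): eight recurrent
# no-input agents, each trained to diverge from its peers. The real system
# uses separate discriminators and the objectives of [1]; here divergence
# is approximated by a plain pairwise spectral distance.
import torch
import torch.nn as nn

class RecurrentAgent(nn.Module):
    """Tiny no-input generator: a GRU driven only by latent noise."""
    def __init__(self, latent=16, hidden=64):
        super().__init__()
        self.gru = nn.GRU(latent, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, 1)           # one sample per time step

    def forward(self, z):                          # z: (batch, steps, latent)
        h, _ = self.gru(z)
        return torch.tanh(self.proj(h)).squeeze(-1)   # (batch, steps)

def spectrum(x, n_fft=256):
    """Magnitude STFT, a cheap feature for measuring divergence."""
    return torch.stft(x, n_fft, window=torch.hann_window(n_fft),
                      return_complex=True).abs()

agents = [RecurrentAgent() for _ in range(8)]
opts = [torch.optim.Adam(a.parameters(), lr=1e-4) for a in agents]

for step in range(1000):
    z = torch.randn(2, 4096, 16)                   # shared latent drive
    specs = [spectrum(a(z)) for a in agents]
    for i, opt in enumerate(opts):
        # Agent i maximizes its spectral distance to every other agent
        # (peers are detached so only agent i receives gradients).
        div = torch.stack([(specs[i] - s.detach()).pow(2).mean()
                           for j, s in enumerate(specs) if j != i]).mean()
        opt.zero_grad()
        (-div).backward()
        opt.step()
```

Network bending then operates on these same agents at performance time, directly altering weight tensors while audio is being generated. Again a hypothetical sketch, with the transformations imagined as mapped to controller knobs:

```python
@torch.no_grad()
def bend(agent, name, scale=1.0, noise=0.0):
    """In-place network bending: scale a weight tensor and/or add noise."""
    p = dict(agent.named_parameters())[name]
    p.mul_(scale).add_(noise * torch.randn_like(p))

# e.g. mapped to two controller knobs during the performance:
# bend(agents[3], "gru.weight_hh_l0", scale=1.2)
# bend(agents[5], "proj.weight", noise=0.05)
```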

In this second version, I will pursue this work through several aspects I could not investigate deeply in the previous version. First, I would like to expand the method to video generation, projected on stage, which would involve reactive no-input image generation with several lightweight models. Secondly, I will work further on the temporality and inter-connectivity of the agents, which I incorporated in the first version but did not have time to emphasize and integrate into the training procedure. While this research would not drastically alter the overall performance, it would give the project greater artistic and scientific value, which I would be very proud to share with the AIMC community.
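As an indication of how light the video side could stay, here is a speculative sketch (names and sizes are mine, not a settled design): a small decoder maps an agent's hidden state to low-resolution frames, so image generation remains reactive enough for real-time projection.

```python
# Speculative sketch for the planned video layer (all names hypothetical):
# a lightweight decoder turning an agent's hidden state into a 32x32 frame.
import torch
import torch.nn as nn

class FrameDecoder(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, 8 * 8 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid()  # 32x32 RGB
        )

    def forward(self, h):          # h: (batch, hidden) agent state
        return self.net(h)         # (batch, 3, 32, 32) frame batch

# frame = FrameDecoder()(torch.randn(1, 64))   # one frame per agent state
```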

[1] Chemla—Romeu-Santos, A., & Esling, P. (2022). Challenges in creative generative models for music: a divergence maximization perspective. Proceedings of AIMC 2022.

Type of submission

Performance 2

Technical/Stage Requirements

This performance requires:

  • a laptop with Max/MSP (brought)

  • a controller (brought)

  • loudspeakers (>2)

  • a projector (optional)

The performance can be spatialized (and adapted to the hall) to convey a spatial individuality to the different agents and increase its immersive aspect, as sketched below. In this case, the piece may require a short adaptation time in the hall. The video part only requires the laptop's HDMI output (two if possible). The duration is flexible and can be adapted to the requirements of the show.
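One possible way to give each agent its spatial individuality, sketched here as a hypothetical static gain matrix (in practice the routing would live in the Max/MSP patch and be adapted to the hall):

```python
# Hypothetical spatialization sketch: a static gain matrix pointing each of
# the eight agents toward its own region of an N-speaker ring.
import numpy as np

def gain_matrix(n_agents=8, n_speakers=6):
    """Cosine panning: agent i is loudest on the speakers nearest its angle."""
    agent_angles = np.linspace(0, 2 * np.pi, n_agents, endpoint=False)
    speaker_angles = np.linspace(0, 2 * np.pi, n_speakers, endpoint=False)
    gains = np.cos(agent_angles[:, None] - speaker_angles[None, :])
    return np.clip(gains, 0.0, None)           # shape: (agents, speakers)

# speaker_feeds = gain_matrix().T @ agent_signals   # (speakers, samples)
```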

Program Notes

>> genesis - v2 << is a second attempt at using audio generative neural networks without resorting to any human data, amplifying the algorithms' own materiality towards cybernetic chaos rather than nature imitation. While the use of neural generative models for reproduction becomes invasive in our lives, their specific materiality remains largely unknown, unshown, and even hidden, as if its revelation would be the unveiling of a magic trick. However, what could happen if we took the opposite position, that is, enforcing individuality and emergence away from any human data and judgement? While learning without data can seem nonsensical, I tried to overcome this apparent paradox by giving a set of different agents structural (and often impossible) objectives, providing the genesis of an eco-systemic equilibrium that I can influence by direct manipulation through network bending techniques.

Media

Below is the binaural reduction of >> genesis - v1 <<, performed during Sonic Experiments 2023 at ZKM.

Acknowledgements

The first step of this work was developed within the Sonic Experiments curation of ZKM in 2023, and an independent patch version received the 3rd prize at the Forum IRCAM Generative Contest 2024.
