
Breathless

Published on Aug 29, 2024

AIMC 2024 Music Submission Template

Title

Breathless

https://aimc2024.pubpub.org/pub/gnte21h4/draft?access=rxicfqg5

Project Description


Rooted in two years of experimentation with biofeedback musicking to carry affective registers, Breathless is a live performance designed to convey the affective resonances of my lived experience of migration in exile. The project initially draws on the principles of emotional AI, a field that leverages deep learning algorithms to classify emotions, and then subverts the field's pragmatism by building an immersive performance out of the training output supplemented with live data. This subversion rests on a hybridization of computational logic, critical perspectives from the social sciences and humanities on the non-computability of lived experience, and the creative force of composition. The project thus engages a ‘method assemblage’ to develop a computational and creative practice that is not only technically robust but also socially informed, ethically grounded, and deeply resonant with human experience.

In essence, the project emerges from a commitment to audio recording and logging my biosensor data while watching socio-politically triggering content tied to my experience of migration and exile. The audio recordings captured expressive gestures, i.e., changes in breathing rhythm, sniffling, gasping, or sighing, but also my mouse clicks, thumps on the keyboard, and occasional sound that leaked from the content. The sensor data, on the other hand, captured my stress response through four sensors housed in a wireless wristband: Galvanic Skin Response, considered the hallmark of stress indication, detecting unnoticeable micro-perspiration of the skin; Blood Volume Pulse, detecting irregularities in heartbeats; an accelerometer, detecting acceleration of motion or movement; and a thermometer, detecting skin temperature, which fluctuates with stress level. To process these two libraries, I devised two separate strategies: first, using RAVE, a generative machine learning model for live audio synthesis, to generate outputs based on the breathing library containing more than five hours of recordings; and second, a variational autoencoder (VAE) to make operational use of the sensor data library towards sonification. These two streams of output were then intertwined into a final composition modulated by live performance data.
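The submission does not specify how sensor readings become sound, but the sonification step described above can be illustrated with a minimal, hypothetical sketch. All sensor ranges and parameter mappings below are invented for illustration; they are not the artist's actual mapping or the VAE itself, only the kind of direct sensor-to-synthesis-parameter translation such a pipeline might feed.

```python
# Hypothetical biosensor sonification sketch. The four channels mirror the
# wristband sensors described in the text (GSR, BVP, accelerometer,
# thermometer); the numeric ranges and mappings are assumptions.
SENSOR_RANGES = {
    "gsr":  (0.0, 20.0),   # microsiemens; micro-perspiration of the skin
    "bvp":  (-1.0, 1.0),   # normalized blood volume pulse
    "acc":  (0.0, 3.0),    # g; magnitude of motion
    "temp": (28.0, 36.0),  # degrees Celsius; skin temperature
}

def normalize(name, value):
    """Scale a raw reading into [0, 1] using its assumed physical range."""
    lo, hi = SENSOR_RANGES[name]
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def sonify(frame):
    """Map one frame of readings to illustrative synthesis parameters:
    GSR -> pitch, BVP -> amplitude, motion -> vibrato rate, temp -> filter."""
    return {
        "pitch_hz":   110.0 * 2 ** (normalize("gsr", frame["gsr"]) * 3),  # 110-880 Hz
        "amplitude":  normalize("bvp", frame["bvp"]),
        "vibrato_hz": 0.5 + 7.5 * normalize("acc", frame["acc"]),
        "cutoff_hz":  200.0 + 4800.0 * normalize("temp", frame["temp"]),
    }

# A mid-range frame maps to mid-range parameters.
params = sonify({"gsr": 10.0, "bvp": 0.0, "acc": 1.5, "temp": 32.0})
```

In a live setting, a mapping like this would typically run per incoming sensor frame and stream its parameters to a synthesis engine (e.g. over OSC), letting stress responses audibly modulate the RAVE-generated material.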

This proposal thus articulates a creative vision for an integrative, nuanced approach to understanding and engaging with human emotions through music in the age of artificial intelligence. It calls for a reorientation away from the big-data practices of emotional AI towards methodologies that embrace the full material, socio-political, and experiential dimensions of emotional experience. Through a synthesis of musicking, interdisciplinary exploration, and critical reflexivity, this proposal aims to retrace the path from statistical operations and apparatuses of capture under the banner of AI, to land back on and reactivate the complex material forces that feed their operation: in this case, emotional tones collected from the body.

Type of submission

Accepted submissions will be programmed at AIMC 2024 in one of three performances; please indicate which performance(s) your submission is suitable for:

  • Performance 1, a concert at the Wolfson College auditorium, will feature 1-4 professional instrumental players, optional live or fixed electronics and video. Exact instrumentation will be decided based on accepted submissions. Submissions should indicate if AIMC should provide performers.

  • Performance 2, at an Oxford University performance space, will feature a flexible stage and electronics/projected visuals set-up suitable for music with more complex technical requirements. No live performers are provided, but musicians are welcome to perform their own works, or provide their own performers (this should be indicated in the submission).

Technical/Stage Requirements

  • Stereo or multichannel setup with cables. If multichannel, I need an audio interface that preferably does not require driver installation on Mac. If stereo, I can bring my own interface.

  • If possible, a Wi-Fi connection that allows devices to be discoverable (network discovery enabled), or a local network; if not possible, I can bring my own router to set up a local network with no requirements on your side.

  • Dim lighting.

Program Notes

Breathless is a performance-based installation designed to communicate the psychophysiological states of the artist as a migrant in exile. The piece builds on her six-month commitment to frequently watching videos of widespread protests in Iran while audio recording and logging her biological signals to form two distinct databases. The audio recordings captured expressive gestures, i.e., changes in breathing rhythm, sniffling, gasping, or sighing, but also her mouse clicks, thumps on the keyboard, and occasional sound that leaked from the videos. The biosignals, on the other hand, captured her stress response through four sensors housed in the wearable.
To process these two libraries, she devised two separate strategies using machine learning algorithms, while taking an aesthetic approach that foregrounds the experience of listeners to create an affective piece. This process entails using the raw outputs of the machine alongside melody and harmony information extracted from them. The piece thus intertwines the musical incarnations of affective data with live data during the performance to create an evolving piece that contains improvisational tones.

Media

https://drive.google.com/file/d/11i2AOGfpBZeW8DVSscIRYabCBM6bdJ3H/view?usp=sharing

https://drive.google.com/file/d/1VL-djYqVSAPDCXAx0teCZosCTzgwqgFQ/view?usp=sharing

Acknowledgements

  • This work is supported by the Social Sciences & Humanities Research Council of Canada.
