
Bias II

Published on Aug 29, 2024

License: Creative Commons Attribution 4.0 International License (CC-BY 4.0)

Bias II, for piano and interactive music system

https://aimc2024.pubpub.org/pub/18u3y8b2/draft?access=sukunm39

Project Description

Bias II, for piano and interactive music system, is part of a series of works engaging with the materiality of ML algorithms and data. The piece uses ML to model interpretative choices made by pianists in past performances, setting performers in an explicit dialogue with the work’s interpretative history. During its interactions with different pianists, the computer music system collects data pertaining to the way performers navigate a set of seven timbral clusters, each consisting of a variable number of musical actions. Based on predictions made by a Recurrent Neural Network (RNN) trained on these data, the computer co-determines the form of the performance by choosing to follow the musician or propose musical changes. Historical data, collected by the computer music system in past performances, influence future performances of the work, making Bias II a “provisional musical work” that “both retains and blurs the traces and boundaries of individual and collective authorship” (Born 2005, p. 30). Rather than being an independent, self-contained event, each performance of the work is part of a co-creative process that involves both humans and nonhumans and is dispersed in space and time. Machine learning in this context becomes the medium through which traditional notions of musical authorship and the ontology of the musical work are challenged and critically reflected upon.

At the same time, Bias II is an exploration of the materiality and aesthetic affordances of the ML algorithms it employs: a feedforward Neural Network (NN) that assigns incoming sounds to one of the seven timbral clusters in the score (classification), and a Recurrent Neural Network that predicts possible continuations of the performance on the basis of these clusters. The predictions of the machine listening algorithm (the feedforward NN) are processed to extract the predominant timbre/cluster over a one-second time window and then fed into the RNN, which predicts which timbre is likely to follow next.
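
To make this two-stage pipeline concrete, the following is a minimal sketch in Python with PyTorch, reconstructed from the description above. The class names, layer sizes, and the 40-dimensional feature frames are illustrative assumptions, not the work’s actual software.

    import torch
    import torch.nn as nn

    NUM_CLUSTERS = 7  # the seven timbral clusters in the score

    class TimbreClassifier(nn.Module):
        """Feedforward NN: maps one frame of audio features to cluster logits."""
        def __init__(self, n_features=40, hidden=64):  # sizes are assumptions
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, hidden),
                nn.ReLU(),
                nn.Linear(hidden, NUM_CLUSTERS),
            )

        def forward(self, x):          # x: (frames, n_features)
            return self.net(x)         # logits: (frames, NUM_CLUSTERS)

    class ClusterPredictor(nn.Module):
        """RNN: given a sequence of past clusters, predicts the next one."""
        def __init__(self, embed_dim=16, hidden=32):
            super().__init__()
            self.embed = nn.Embedding(NUM_CLUSTERS, embed_dim)
            self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, NUM_CLUSTERS)

        def forward(self, cluster_seq):      # cluster_seq: (batch, steps)
            h, _ = self.rnn(self.embed(cluster_seq))
            return self.out(h[:, -1])        # logits for the next cluster

    def predominant_cluster(frame_logits):
        """Majority vote: the cluster heard most often over a one-second window."""
        labels = frame_logits.argmax(dim=-1)      # per-frame cluster indices
        return torch.mode(labels).values.item()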

The score of the piece consists of seven clusters of timbrally similar musical actions, involving primarily inside-piano playing techniques and string preparations and exploring the extended capabilities of the piano. The performer is free to navigate this timbral space, transitioning between the clusters in any order. During the performance, the outputs of the machine listening algorithm are used to match different timbres to different signal processing techniques, but are also fed into the RNN. If the prediction of the RNN differs from what the performer is currently playing, the system responds by playing back prerecorded samples of the predicted cluster.
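
The follow-or-propose behavior described above might then be sketched as follows, continuing the hypothetical PyTorch example; sample_bank is an assumed mapping from cluster indices to prerecorded samples.

    def system_response(history, current_cluster, predictor, sample_bank):
        """Decide whether to follow the pianist or propose new material.

        history:         clusters heard so far this performance (list of ints)
        current_cluster: predominant cluster over the last second (0-6)
        sample_bank:     hypothetical dict, cluster index -> prerecorded samples
        """
        seq = torch.tensor([history + [current_cluster]])   # shape (1, steps)
        predicted = predictor(seq).argmax(dim=-1).item()
        if predicted != current_cluster:
            # Disagreement: propose a prerecorded sample from the predicted cluster
            return "propose", sample_bank[predicted]
        # Agreement: keep following the performer (live signal processing only)
        return "follow", None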

To perform this piece, performers first interact with the system in the context of “training rounds”: run-throughs of the piece in which the system reacts to the performer’s actions using signal processing but does not act proactively (i.e., it does not propose any sound material). Recordings of these “training rounds” are analyzed and added to the dataset used to train the RNN, influencing its behavior in future performances. The piece has so far been performed by pianists Magda Mayas and Xenia Pestova Bennett, whose distinctive interpretations of the score – crystallized in the form of training data – are an integral part of the work.
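
The accumulation of training rounds into a growing historical dataset could look like the sketch below; the JSON archive and the fixed context window are illustrative assumptions, not the composer’s data format.

    import json
    from pathlib import Path

    ARCHIVE = Path("bias_ii_dataset.json")   # hypothetical archive location

    def add_training_round(cluster_sequence):
        """Append one run-through's cluster sequence to the historical dataset."""
        data = json.loads(ARCHIVE.read_text()) if ARCHIVE.exists() else []
        data.append(cluster_sequence)        # one list of cluster indices per round
        ARCHIVE.write_text(json.dumps(data))
        return data

    def make_training_pairs(data, context=8):
        """Slice archived sequences into (context window, next cluster) pairs."""
        pairs = []
        for seq in data:
            for i in range(len(seq) - context):
                pairs.append((seq[i:i + context], seq[i + context]))
        return pairs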

Born, G. 2005. “On Musical Mediation: Ontology, Technology and Creativity.” Twentieth-Century Music 2(1): 7–36.

Type of submission

  • Performance 1, a concert at the Wolfson College auditorium

    * The performer (pianist) should be provided by AIMC.

Technical/Stage Requirements

The following equipment must be provided by the organizers:

Audio equipment:

  • 1 grand piano (Note: The piece makes use of string preparations)

  • 4 loudspeakers (see stage disposition below for details)

  • 1 cardioid condenser microphone for amplification

  • 2 long XLR cables

  • 4 balanced jack cables (to connect the interface outputs to the mixer)

Tables, stands etc.:

  • 2 music stands

  • 1 table big enough to fit a laptop computer and a standard audio interface (next to the mixer desk)

  • 2 power outlets (close to the mixer desk)

Everything else will be provided by the composer.

Stage disposition:

Program Notes

Bias II, for piano and interactive music system

Bias II, for piano and interactive music system, explores distributed and posthuman notions of musical creativity, enabled through processes of datafication and machine learning. During its interactions with different pianists, the computer music system collects data pertaining to the interpretative choices made by performers – specifically, the way they navigate a set of seven clusters, each consisting of a variable number of timbrally similar musical actions. Based on predictions made by a Recurrent Neural Network (RNN) using these data, the computer co-determines the form of the performance by choosing to follow the musician or propose different sound material. Historical data, collected by the computer music system in past performances, influence its future behavior, turning the work into an open-ended co-creative process that is distributed in space and time.

At the time of writing, Bias II has been performed by and trained on data from pianists Magda Mayas and Xenia Pestova Bennett. The piece was commissioned by and produced at ZKM | Hertz Lab. Research on this work was funded by the ERC Advanced Grant “MusAI - Music and Artificial Intelligence: Building Critical Interdisciplinary Studies” (European Research Council grant agreement no. 101019164, 2021-26).

Media

Acknowledgements

  • The author would like to thank Magda Mayas and Xenia Pestova Bennett for their contributions to this project.

  • This work is supported by the ERC Advanced Grant “MusAI - Music and Artificial Intelligence: Building Critical Interdisciplinary Studies” (European Research Council grant agreement no. 101019164, 2021-26) and by ZKM | Hertz Lab.
