
Unformation

Music submission

Published on Aug 29, 2024

AIMC 2024 Music Submission Template

Title: Unformation

https://aimc2024.pubpub.org/pub/7uqaho9k/draft?access=lz1zm2ux

Project Description

This is a generative composition that produces sound material by classifying and responding to its own output, using machine listening and machine learning. The work offers an example of how AI techniques can be used creatively in the context of computer music, noise and acousmatic composition. It is inspired by information theory and cybernetics.

The core sound material is noise synthesised in real time, conceptually framed as ‘unformation’. From this, ‘information’ is derived in the form of more shapely and stable sonorities that emerge over time. The work is written in SuperCollider, an environment for real-time audio synthesis and algorithmic composition, and uses the Fluid Corpus Manipulation Toolkit (FluCoMa) for machine listening and machine learning.

A curious feature of the music is its gestural pacing, which appears to be a side-effect of the onset-guided listening-and-response pattern.
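The actual listen–classify–respond loop runs in SuperCollider with FluCoMa; as a loose, language-only illustration of the idea (every name, feature and threshold below is invented for this sketch, not taken from the piece), the Python toy uses zero-crossing rate as a crude stand-in for machine listening and a one-pole filter as the ‘synthesis’. The system classifies its own output and then pushes its filter coefficient away from whatever it just heard, so it hovers around the boundary between noise and tone:

```python
import random

def zero_crossing_rate(block):
    # crude 'machine listening' feature: fraction of adjacent samples
    # that change sign (high for noise, low for smooth signals)
    flips = sum((a < 0) != (b < 0) for a, b in zip(block, block[1:]))
    return flips / (len(block) - 1)

def one_pole_noise(n, coeff, rng):
    # uniform noise through a one-pole lowpass:
    # higher coeff -> smoother, more 'shapely' output
    y, out = 0.0, []
    for _ in range(n):
        y = coeff * y + (1.0 - coeff) * rng.uniform(-1.0, 1.0)
        out.append(y)
    return out

def self_regulating_run(blocks=200, block_size=256, seed=1):
    rng = random.Random(seed)
    coeff = 0.0                 # start from raw noise ('unformation')
    history = []
    for _ in range(blocks):
        block = one_pole_noise(block_size, coeff, rng)
        # classify own output with a toy two-class rule
        label = "noise" if zero_crossing_rate(block) > 0.25 else "tone"
        # cybernetic response: steer away from whatever was just heard
        coeff += 0.02 if label == "noise" else -0.02
        coeff = min(max(coeff, 0.0), 0.95)
        history.append(label)
    return history

history = self_regulating_run()
```

Run for a while, the toy settles into alternation near the classification boundary, a (much simplified) analogue of stability emerging at the edge of a disorderly core.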

Type of submission

The work is suitable for presentation in Performance 2 or Performance 3. It is performed by the composer using a laptop and controllers; there are no instrumental performers.

Technical/Stage Requirements

I perform the work on my own laptop and controllers, and will bring my own audio interface.

Requirements

  • A table, approximately 1 × 2 m.

  • Stereo or four-channel speaker setup.

Program Notes

The work is a continuous run of a generative algorithm that regulates itself in the manner of a cybernetic system, as if deriving musical ‘information’ from noisy, stochastic ‘unformation’. Synthesis processes are arranged in a feedback network, each in a separate speaker channel, affecting its neighbour. The system listens to its own output and classifies the audio, making decisions that allow events to spontaneously contact, interrupt and influence one another, creating a fragmented gestural discourse. Traces of past events are captured in data that helps determine the progress of the music, producing some stability at the edge of a disorderly core. When performed live, the system itself is the main performer, while a human can steer the piece by adjusting its overall sensitivity and responsiveness.
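The feedback topology described above, each process in its own speaker channel and affecting its neighbour, can be caricatured as a closed ring of coupled gains. The piece realises this with SuperCollider synthesis processes; the minimal Python sketch below (function name and coupling value are my own, purely illustrative assumptions) only shows how energy injected into one channel circulates around such a ring:

```python
def ring_step(levels, coupling=0.3):
    # one update of a closed ring: each channel absorbs a fraction of its
    # neighbour's level, so events propagate around the circle of speakers
    n = len(levels)
    return [(1.0 - coupling) * levels[i] + coupling * levels[(i - 1) % n]
            for i in range(n)]

# an impulse in channel 0 gradually spreads to the other channels
state = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    state = ring_step(state)
```

Because each step only redistributes levels between neighbours, the total energy in the ring is conserved while its placement keeps shifting, one way of reading the note's ‘traces of past events’ influencing the music's progress.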

Acknowledgements and Ethics Statement

The work is not externally funded, and I do not believe it presents any problematic societal or environmental impact. I have no conflicts of interest.
