
GoGo Musebots

Generative music performance video of jazz trio and robotic instruments

Published on Aug 29, 2024

Preface

A few summers ago, I attended a concert by the UK’s “emotive, cinematic break-beat trio” GoGo Penguin and thought, “I bet my musebots could do something like that.” After I made some alterations to an existing system that produces ambient music, the musebots came up with this result: generative music for jazz trio with robotic drums and Disklavier.

Four tunes, performed by John Korsrud, trumpet; Jon Bentley, saxophones; and James Meger, bass and mechanical instruments. Most of the music is composed by musebots; the select improvised portions should be evident.

System Description

GoGo Musebots is a co-creation between a generative system and its creator, as well as three improvising musicians. The system is rooted in composition rather than improvisation: plans are created, then filled in by musical agents (musebots) to produce a score. Musebots can edit their individual parts, making decisions based on global structures and on local events produced by other musebots. The final score is translated into MIDI information, to be performed by robotic instruments, and into lead-sheet notation, to be performed by humans.
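To make the plan-then-fill-then-edit flow concrete, here is a minimal toy sketch of that kind of pipeline. Everything here is hypothetical — the class name, the plan structure, and the editing rule are my illustrative assumptions, not the actual musebot implementation.

```python
import random

# Hypothetical global plan: (section name, bars, note density 0-1).
PLAN = [
    ("intro", 8, 0.3),
    ("groove", 16, 0.8),
    ("outro", 8, 0.2),
]

class Musebot:
    """Toy agent that first fills its part from the plan, then edits it."""

    def __init__(self, name, seed):
        self.name = name
        self.rng = random.Random(seed)
        self.part = []  # one True/False (playing or resting) per bar

    def fill(self, plan):
        """First pass: fill in a part from the global plan alone."""
        for _section, bars, density in plan:
            for _ in range(bars):
                self.part.append(self.rng.random() < density)

    def edit(self, others):
        """Second pass: react to local events from the other agents.
        Here: sometimes drop out of bars where everyone else is playing."""
        for i, playing in enumerate(self.part):
            if playing and all(o.part[i] for o in others):
                if self.rng.random() < 0.5:
                    self.part[i] = False

bots = [Musebot(n, i) for i, n in enumerate(["drums", "piano", "bass"])]
for b in bots:
    b.fill(PLAN)
for b in bots:
    b.edit([o for o in bots if o is not b])
```

A real system would of course generate pitches, rhythms, and dynamics rather than on/off bars, and export the result as MIDI and lead sheets; the point of the sketch is only the two-pass structure — global plan first, local negotiation second.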

All titles are generated by algorithm, selecting word combinations from Samuel Beckett’s The Unnameable.
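The actual selection algorithm is not described, but one simple way to produce such titles is to pull a short contiguous run of words from the source text — a minimal sketch, assuming a plain-text excerpt is available as a string (the excerpt and function name below are placeholders, not the real system):

```python
import random

# Placeholder excerpt; a real run would load the full source text.
SOURCE_TEXT = "then I was in the clearing prior to empty little grey wizened pear"

def generate_title(text, min_words=3, max_words=5, seed=None):
    """Pick a random contiguous run of 3-5 words and title-case it."""
    rng = random.Random(seed)
    words = text.split()
    length = rng.randint(min_words, max_words)
    start = rng.randrange(len(words) - length + 1)
    return " ".join(words[start:start + length]).title()

print(generate_title(SOURCE_TEXT, seed=1))
```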

A note on curation of output: the four movements below were generated in January 2024 for an event on March 1. Five movements were generated and four were selected; all five were viable, but time constraints allowed only four to be performed.

Submission Details

As it is impractical to transport a Disklavier and robotic drummer, as well as three musicians, to Oxford for this event, I propose to show a music video of the four works.

Please note that the videos below are not the videos that I propose to show. We will undertake a video shoot in May 2024 to create a TinyDesk-type home video with soft lighting, blurry backgrounds, and multi-angle views (kind of like this).

Video documentation, below, is from the premiere performance March 1, 2024. Because the robotic instruments are playing pre-generated material, I’m listening to a click and waving my arms about trying to keep everyone in sync.

Then I Was In

Clearing Prior to Empty

Little Grey Wizened Pear

No Worse Little Bounds

Ethics Statement

Possible conflict of interest with Philippe Pasquier, who was my collaborator for ten years (2008–2018). This research was funded by the Social Sciences and Humanities Research Council of Canada and Simon Fraser University; no ethics approval was required. The only societal impact of my work would be to underline the importance of artists and humans maintaining their involvement in AI-generated artworks.
