
WhaleBeat Karaoke

Live electronic+vocal performance

Published on Aug 29, 2024

Title

WhaleBeat Karaoke

https://aimc2024.pubpub.org/pub/rdnnb1ar/draft?access=yrqmae27

Project Description

We present WhaleBeat Karaoke, an experimental ambient music project with audience participation that creates an evolving musical piece within a human/whale hybrid sonic interactive space. Challenging conventional understandings and approaches of AI-generated music, this 20-minute live performance explores the potential of an imagined shared musical language between the two species, one that defies the traditional human-centric approach to music-making. We aim to make whale sounds understandable and manageable for human performance through technical exploration and live collaboration with the audience. WhaleBeat Karaoke offers an accessible entry point for people outside the music AI community to engage with machine-learning-generated sounds by inviting audience members to participate in the live performance. By applying AI-related techniques to analyze and manipulate whale sounds, the piece also illustrates how AI can facilitate interdisciplinary research in creative ways. The live performance may also feature projected underwater footage, adding an immersive layer to the experience. The musical material for the live performance comprises two main layers of sound: a preprocessed database of reconstructed whale songs, mixed live to create the rhythmic and harmonic background of the ambient piece, and real-time processing of vocal performances by members of the audience.

Firstly, we employed machine learning approaches (such as NMF-based concatenative synthesis via the FluCoMa toolbox for Max/MSP) to rebuild ambient pads and beat-based samples from a database of humpback and sperm whale recordings (sourced from whale scientists or from open-access collections on the web). We also employed the DDSP library to build a model that facilitates the transfer of timbral qualities between whale sounds and human musical renditions. To create this set of sounds, we conducted experiments focused on adjusting temporal and timbral aspects of whale sounds, including segmentation, duration manipulation, silence removal, pitch shifting, and timbre transfer.
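The NMF step named above can be pictured as factoring a non-negative spectrogram into a small set of spectral templates and their time-varying activations, which concatenative synthesis can then recombine. The sketch below is an illustrative NumPy implementation of NMF with multiplicative updates on a toy matrix; it stands in for the FluCoMa objects actually used in the piece, and the matrix sizes and component count are assumptions for demonstration only.

```python
import numpy as np

def nmf(V, n_components=2, n_iter=200, seed=0):
    """Factor a non-negative "spectrogram" V (freq x time) into
    spectral templates W (freq x k) and activations H (k x time)
    using Lee & Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, n_components)) + 1e-6
    H = rng.random((n_components, T)) + 1e-6
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ (H @ H.T) + 1e-9)
    return W, H

# Toy non-negative matrix standing in for a whale-call spectrogram.
V = np.random.default_rng(1).random((64, 100))
W, H = nmf(V, n_components=2)
print(W.shape, H.shape)  # (64, 2) (2, 100)
```

In the actual pipeline, each column of W would correspond to a recurring spectral shape in the whale recording, and the rows of H would say when (and how strongly) that shape sounds, which is what makes beat-based resynthesis possible.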

Secondly, in live performance, the piece shifts toward creating a participatory experience in which individuals engage with whale sounds in a karaoke-like format. The inexorable differences between cetaceans and humans, whether physiological, cognitive, or environmental, make it impossible for one species to directly access the sonic/musical realm of the other. Karaoke enters the equation as a human phenomenon that articulates varied expressions of culture and identity through sound and performance. Through this experience, the cultural memory embedded in whale songs intertwines with human interpretations, leading to a reciprocal exchange in which humans comprehend and perform "whale music." The process involves loading curated whale sound files, segmenting and looping them for engagement, and applying encoding processes such as transposition and stretch-factor adjustment to make them imitable by the human voice. Participants then imitate, impersonate, and perform these modified whale sounds. The resulting "whalified" performances are recorded and played back in real time, offering a unique and interactive experience within the karaoke framework.
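The encoding described above (transposition plus stretch-factor adjustment) amounts to mapping a whale call's pitch and duration into a range a singer can imitate. The following minimal sketch computes those two values; the target pitch (220 Hz) and target duration (2 s), as well as the example input values, are illustrative assumptions, not the piece's actual settings, which live in the Max patch.

```python
import math

def encode_for_voice(f0_whale, dur_whale, target_f0=220.0, target_dur=2.0):
    """Map a whale call's fundamental (Hz) and duration (s) onto a
    transposition in semitones and a time-stretch factor that bring
    the call into a singable register and length."""
    semitones = 12 * math.log2(target_f0 / f0_whale)  # + = up, - = down
    stretch = target_dur / dur_whale                  # <1 compresses time
    return round(semitones), stretch

# Example: a low 55 Hz moan lasting 6 seconds.
shift, stretch = encode_for_voice(55.0, 6.0)
print(shift, stretch)  # 24 semitones up, 1/3 time stretch
```

Decoding the audience's rendition back into the "whalified" result would simply apply the inverse shift and stretch to the recorded voice before mixing it with the whale-song layer.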

Type of submission

Performance 3 (club night at the Old Fire Station): ambient, beat-based music.

Technical/Stage Requirements

  • Table for 2 laptops

  • Power extension cord

  • Projector (if possible)

Program Notes

WhaleBeat Karaoke is an innovative musical project merging human and whale sounds into an experimental ambient piece with audience participation. It challenges conventional AI music generation by exploring a shared musical language between species, defying human-centric approaches to music-making. Through machine learning techniques, whale sounds are transformed, reconstructed, and blended with live audience performance. The project employs different technical tools to manipulate and transfer timbral qualities between the sounds of both species. For this live performance, a karaoke-like format invites audience members to engage with selected whale sounds, imitating and performing them in real time. An encoding/decoding system allows the audience's renditions to be blended with the whale songs. By bridging human and cetacean sonic realms, WhaleBeat Karaoke showcases the potential for interdisciplinary research and creativity.

Media

  • Live Processing for the Karaoke Section

    Karaoke Patch for Max

Video Documentation

Acknowledgements

  • The authors would like to thank the Eco-Systemic Sound Lab team from the Experimental Music and Digital Media Program @ LSU: Treya Nash, Drew Farrar, Morgan Weeks, Ka Hei Cheng.

  • Thanks to Andrea Bonilla from the Center for Conservation Bioacoustics at Cornell University, who kindly provided humpback whale songs recorded on the Colombian Pacific coast.
