
Renga for White Noise


for human and AI


2024 CNMAT

The live premiere of Renga for White Noise was held at the Center for New Music and Audio Technologies (CNMAT) of the University of California, Berkeley.

Two improvisers, one human and one AI, co-create a linked sequence of phrases: 30 seconds for the human, 20 seconds for the AI. Two topics (timbre and rhythm) are explored within an algorithmic composition framework mapped to the principles of Japanese linked-verse poetry: each phrase is rated on impressiveness (1-4) and relatedness (1-4), and the sequence continues until the two topics reach an equal distribution. By learning the dial-based movements of the human's phrases on the Ableton Push 2, the AI produces gestures, including sounds impossible for human hands to perform.
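The topic-balancing constraint described above might be sketched as follows. This is a minimal illustration only, not the piece's actual Max/JavaScript implementation; the function names and the tie-breaking rule are assumptions.

```python
import random

def next_topic(history):
    """Choose the next phrase's topic (timbre or rhythm) so that the
    linked sequence drifts toward an equal distribution of the two."""
    timbre = sum(1 for t in history if t == "timbre")
    rhythm = len(history) - timbre
    if timbre < rhythm:
        return "timbre"
    if rhythm < timbre:
        return "rhythm"
    # Topics are tied: either may follow (a hypothetical tie-break).
    return random.choice(["timbre", "rhythm"])

def is_balanced(history):
    """True once the two topics are equally represented."""
    timbre = sum(1 for t in history if t == "timbre")
    return timbre * 2 == len(history)
```

For example, after the sequence `["timbre", "rhythm", "rhythm"]`, `next_topic` would steer the exchange back toward timbre.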

The work is intended to evoke the Zen experience of moment-to-moment awareness in renga and is inspired by Kawabata Yasunari’s transmediation of renga principles into his short stories and novels. The filter-based processing of white noise is contextualized with respect to the white-noise music of Jōji Yuasa from the 1960s, alongside works by Merzbow. Because white noise contains the full spectrum of frequencies, it is chosen here, as in Yuasa's work, to manifest the Buddhist principle of “many in one and one in many”.

Developed in collaboration with Kurt Mikolajczyk using Max and JavaScript, the work will receive its telematic premiere, with Kurt, at the Australasian Computer Music Conference, performed by three improvisers (two human, one AI) between Charlotte Amalie and Sydney.


© 2024 Austin Oting Har. All Rights Reserved.