Friday, May 12, 2017

Warren Burt performance at Box Hill Institute, May 8, 2017

This year, my main composing effort has been a large-scale project called “Mosaics and Transparencies.”  The project has involved making families of samples, each created with a different technique, and then assembling these samples into larger-scale structures, some premeditated, some spontaneous.  It started off in January with a series of musique concrète samples based on Pauline Oliveros’s “Applebox” idea, and continued with a series of melodies using non-Western instrument samples controlled by Markov chains and physical-modelling processes.  The third phase has involved taking several hundred drawings made since 2006 and converting them into sound.  I’ve recently been improvising with these, and this performance is one of those improvs.  In this performance, I not only mix and modify these samples but also perform various piano-sound processes, making pitch-oriented textures to contrast with the “noiseband”-oriented sounds of the converted drawings.
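
For readers curious about the mechanics, here is a minimal sketch of the kind of first-order Markov-chain melody control described above – not the actual MusicWonk patch, just an invented transition table over scale degrees to show the idea:

```python
import random

# Hypothetical first-order Markov chain over seven scale degrees (0-6).
# The transition table is invented for illustration only; the weightings
# and software used in the actual piece are not reproduced here.
TRANSITIONS = {
    0: [1, 1, 2, 4],   # from degree 0, favour stepwise motion upward
    1: [0, 2, 2, 3],
    2: [1, 3, 4, 0],
    3: [2, 4, 5, 1],
    4: [3, 5, 0, 2],
    5: [4, 6, 3, 2],
    6: [5, 4, 0, 3],
}

def markov_melody(length=16, start=0, seed=None):
    """Walk the transition table and return a list of scale degrees."""
    rng = random.Random(seed)
    degree = start
    melody = [degree]
    for _ in range(length - 1):
        degree = rng.choice(TRANSITIONS[degree])  # uniform pick from the row
        melody.append(degree)
    return melody

if __name__ == "__main__":
    print(markov_melody(length=16, seed=42))
```

Repeating an entry in a row weights the chain towards that move, which is one simple way of favouring stepwise motion over leaps.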

This performance was done as part of the Departmental Forum series at Box Hill Institute, which I’ve been organizing.  I’m using my collection of small-scale digital devices here – six are used and mixed in performance.  There are three tablet computers: an ASUS PC, an iPad, and a Samsung Android tablet; and three cellphones: an iPhone 4, an iPhone 6s, and a Sony Xperia Android phone.  The ASUS PC runs an algorithmic process controlling a sampled piano tuned in a very odd microtonal scale (the 8th root of 2.1).  Additionally, I have a small keyboard connected to the ASUS tablet, so that I can also perform the piano sounds live in a more “traditional” manner.  The iOS devices all use either Audiobus 2 or 3 to assemble chains of apps, which either play the samples or run various pitch-oriented “keyboard-sound” processes.  The Android devices either play the samples, convert the original drawings into sound in real time, or play simple “theremin-like” sequences as a further element thrown into the mix.  For the past couple of years I’ve been doing improvisations that combine piano-like sequences with performing on more complex sample material, and I wanted to continue exploring this.  I also wanted to put myself in a situation where I had an abundance of resources, and then improvise a “sound combine” with them, using my intuition to shape the larger continuity of sounds.  In this particular performance, I also got into repetition a bit, something I don’t normally do much, but the didactic situation of the performance – a Forum on Performance for all the undergraduate music students at Box Hill Institute – seemed to encourage this.  So some sounds are repeated immediately, some motives are repeated, and some larger sections come back, all of which was decided on the impulse of the moment.  A list of the software used in the piece follows.  It is not meant to be either impressive or alienating, but simply to give those who are experienced with the software some indication of what I did.
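
For those curious about the tuning: the 8th root of 2.1 gives a non-octave equal-tempered scale in which each step multiplies the frequency by 2.1^(1/8), roughly 160.6 cents, so the scale repeats at a stretched “octave” of 2.1/1 (about 1284.5 cents) rather than 2/1.  A quick sketch of how such a scale can be calculated – the middle-C reference pitch here is arbitrary, not necessarily the one used in the performance:

```python
import math

# Non-octave equal division: each step multiplies frequency by 2.1 ** (1/8).
STEP_RATIO = 2.1 ** (1 / 8)                 # ~1.097 per step
STEP_CENTS = 1200 * math.log2(STEP_RATIO)   # ~160.6 cents per step

def scale_frequencies(base_hz=261.63, steps=16):
    """Return `steps` frequencies of the 8th-root-of-2.1 scale above base_hz."""
    return [base_hz * STEP_RATIO ** n for n in range(steps)]

if __name__ == "__main__":
    print(f"step size: {STEP_CENTS:.2f} cents")
    for i, freq in enumerate(scale_frequencies()):
        print(f"degree {i:2d}: {freq:8.2f} Hz")
```

In the performance itself the tuning was presumably handled via Scala (listed below); the snippet is only meant to show what the numbers look like.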

Following the performance, a lively Q&A session ensued (a number of students expressed concern that I wasn’t performing in an easily identifiable genre, and a spirited discussion of that concept and its usefulness took place), and I continued to receive favourable comments on the piece from people for the next few days.  On looking at the video, I was delighted with the piece – I think I accomplished most of what I set out to do, and I really liked the “yearning, striving” quality of the piano-sound textures (to make, perhaps, a reference to one of my inspirations, Dane Rudhyar), as well as the not-so-oblique references to Mr. Monk and his Criss-Crossing.  The “drawing to sound” textures provide great timbral variety, from pure waves to noise-bands, and I was pleased with the number of ways I was able to shape them in real time.  I hope you enjoy the piece – I did – both in the moment of performance and then, happily, after the fact as well.

May 12, 2017 – WB

Software and hardware used in this performance:

ASUS VivoTab Win8: MusicWonk; AudioMulch; Garritan Piano Samples; Scala

iPad 4: Audiobus 3; Virtual ANS; Turnado; Enumero; Johnny; Fugue Machine; midiFILTr-PG; Yamaha FM Essential; Launchpad; Muckraker; Nebulizer

iPhone 6s: MF Motion; midiFILTr-PG; Yamaha FM Essential; Crystalline; Musix Pro; Thumbjam; Altispace Reverb

iPhone 4: Audiobus 2; Launchpad; Muckraker; Nebulizer

Samsung Galaxy 7 Android tablet: Virtual ANS

Sony Xperia Android phone: Virtual ANS; Saucillator

Roland Octa-Capture Sound Card

Samples in Launchpad and Virtual ANS were made with Kaleidoscope and Audacity on a Windows PC.