An Essay about Kenneth Gaburo from June

I haven't been posting for the past few months.  There's a lot to report, and I'll try to get some of it onto the website over the next few weeks.  

Last June 22, I was part of a symposium on the work of Kenneth Gaburo at Issue Project Room in Brooklyn.  Chris Mann, David Dunn and Larry Polansky were there in person; I was there via Skype.  As preparation for that discussion, I wrote a little essay about Kenneth's ideas, which I sent to all the participants beforehand.  Now, in October, on re-reading it, I think it might be interesting enough to share with people, and so you can download it here.



Three Found Object Microtonal (Virtual) Piano Pieces - and the joy(?) of quotation

Brian McLaren sent me the URL of a website with thousands of piano rolls converted to midi. I'm sure many people (at least those with access to microtonally retunable synthesizers and samplers) have done the party trick of taking a midi file of a pre-existing tune and playing it in a new tuning.  The fun is in hearing the shape of the original tune almost surrealistically stretched into the new tuning.  At times, however, certain segments of the tunes lose their original identity and become "abstract" - new tunes from old, as it were.
I selected a zip file called Ampico1 - because I remembered that people like Ravel had recorded for Ampico, and because I knew that Ampico rolls had a wide dynamic range, derived from the original performances.  I took the midi files of Ampico1 and played them on the Modartt Pianoteq (virtual piano) with the Erard 1922 setting (a model of an early 20th century piano that was Ravel's favorite).  I put the piano into a microtonal scale of 25 stacked 9/7 intervals, with 5/1 as the foldover point.  (Those without microtonal knowledge or access to the Scala tuning program can just enjoy the sound of the scale without needing to worry about the technical details.)

At first, I made a collage of fragments from midi files several layers thick.  But that ended up sounding too much like a piece by Charles Ives.  The quotes and the juxtapositions of different styles were just too obvious.  The fragments of the music were mostly old chestnuts whose basic contour came through even with the microtonal tuning.  However, I did notice that fragments of the tunes, as I said, became abstract.

So I decided to try to create a piece made entirely of these "abstracted" moments, selecting areas of the midi files which seemed not to refer to the original tunes too closely, and stitching those fragments together into a continuity.  No matter what I did, though, the quotational idea still came through - it's amazing how little of something - two or three notes, half a measure of a recognizable rhythm - needs to exist before it's heard as a quote.  At least by me, who knows the source tunes.  But when the quotes are stitched together one at a time, each quote seems quickly erased, or commented on, by the next fragment.  Since the tune which opened and closed the piece was "Memories of You," following an old joke Joel Chadabe liked to tell and retell back in the 1970s, this piece is called:
Memories of You Two.  
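
For the technically curious, the tuning used above - 25 stacked 9/7 intervals with 5/1 as the foldover point - can be sketched in a few lines.  (Python here is purely for illustration; the actual tuning was handled in Scala and the Pianoteq.)

```python
import math

def cents(ratio):
    """Interval size in cents."""
    return 1200 * math.log2(ratio)

GENERATOR = 9 / 7   # the stacked interval (~435.08 cents)
PERIOD = 5 / 1      # "foldover" point, used instead of the usual 2/1 octave
NOTES = 25

# Stack the generator 25 times, reducing each pitch modulo the 5/1 period.
scale = sorted(cents(GENERATOR ** k) % cents(PERIOD) for k in range(NOTES))
for degree, c in enumerate(scale):
    print(f"degree {degree:2d}: {c:8.2f} cents")
```

The result is 25 pitches spread across a nearly two-and-a-half-octave repeating period, which is part of what gives the retuned rolls their stretched, surreal quality.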
Then I put the Pianoteq into a different tuning: a 14-note scale based on a stack of the interval of 11 steps out of 31-note equal temperament.  I also set the tempo on the midi file player in the Pianoteq to .1, so the midi files would play VERY slowly.  Played this slowly, any reference to the original tunes seemed to be eliminated.
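
That 14-note scale can be written out quickly; here's an illustrative sketch (again, Python just for demonstration - it simply lists the scale degrees):

```python
EDO = 31      # steps per octave in 31-note equal temperament
GEN = 11      # generator: 11 steps of 31-EDO (about 425.8 cents)
NOTES = 14

# Stack the generator 14 times, folding each pitch back into one octave.
degrees = sorted(k * GEN % EDO for k in range(NOTES))
step_cents = [d * 1200 / EDO for d in degrees]
print(degrees)
# [0, 2, 4, 6, 8, 11, 13, 15, 17, 19, 22, 24, 26, 28]
```

Because 11 and 31 share no common factor, all 14 degrees come out distinct.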

I used an improvisatory process to generate this piece: I selected an arbitrary midi file, selected an arbitrary point in it, and played the file for about 30 seconds.  After 30 seconds had gone by, I waited for the next note to begin, then turned off the midi file.  This created a cut-off note at the end of the fragment.  I then selected another midi file and another arbitrary point in it, played about 30 seconds, and stopped, again making sure I had a "cut-off" note at the end of the fragment.  In this way I collected 3 tracks, each with 6 fragments.  I then took each individual track and removed the "cut-off" note, making sure that the note before it went right up to the attack of the first note of the subsequent fragment.  This gave the whole track continuity - as if there had been a performance which moved instantly from one fragment to the next.  I did this for all the tracks, and mixed all three together.  Because the sun was going down while I did this, and there was a transition from day to night while I was working, I called this piece:

At Sunset.
The title still maintains the "late 19th century parlor music" feel of the original piano rolls, but the music doesn't (at least to my ears) have a quotational feel, although I think it does have a very pleasing collection of mostly dissonant harmonies.
While I was doing this, I was also investigating how the "bias" and "immobile bias" options in Andrew Culver and John Cage's IC program worked.  This was a program made by Culver and Cage in the mid 1980s for use in Cage's composing work, and it has since been made freely available on the web.  With IC, one can generate lists of random numbers with varying characteristics, ranging from a very close approximation of equally-weighted randomness (the normal option) to lists where the dice are stacked, as it were, in various ways.

The "bias" option generates a list in which certain numbers appear a LOT more than others for a brief time, changing at an unpredictable point to favor a different group of numbers, which then changes to yet another set, and so on.  Both the uneven weighting of the numbers and the points at which the weighting changes are themselves determined randomly.  Another option, called "immobile bias," simply generates one random weighting of the numbers and sticks with it for the duration of the run.

In looking into this, I was helped quite a bit by James Pritchett and Bill Brooks, both experts in matters Cagean, and I'd like to send them a big thanks for their insights.  And for those interested in more technical details, I prepared a small document (hardly a "paper") showing what I did when I looked at IC.  You can download it here.
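
For those who'd like a feel for the two options without running IC itself, here's a rough imitation of the behaviour described above.  This is my sketch, not Culver's code - in particular, the lengths of the biased spans here are arbitrary choices of mine:

```python
import random

def immobile_bias(count, gamut_size, rng=None):
    """One random weighting of the gamut, kept for the whole run."""
    rng = rng or random.Random()
    gamut = list(range(1, gamut_size + 1))
    weights = [rng.random() for _ in gamut]
    return rng.choices(gamut, weights=weights, k=count)

def moving_bias(count, gamut_size, rng=None):
    """A random weighting that is thrown away and re-drawn at randomly
    chosen points, so different numbers are favored at different times."""
    rng = rng or random.Random()
    gamut = list(range(1, gamut_size + 1))
    out = []
    while len(out) < count:
        weights = [rng.random() for _ in gamut]   # a new stacking of the dice
        span = rng.randint(20, 80)                # how long it lasts (arbitrary)
        out += rng.choices(gamut, weights=weights, k=min(span, count - len(out)))
    return out
```

Listening to a run of `moving_bias(1000, 15)` gives a sense of why the option interested me: local passages each have their own statistical "flavor."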

So if the Ampico rolls provided me with found objects for the first two pieces, my use of the IC program and its "bias" option would provide me with a found object for the next piece.  Wanting to make a little gift piece for Ben Johnston, I decided to use a 12-note scale made from harmonics 5, 11 and 17.  This would have familiar major chords, but very dissonant chords as well.  (For the technically minded, it's an Euler-Fokker genus 5 11 17 17.)  Then I decided to refine the harmonic materials further, and select just a "diatonic-like" 7-note scale out of this.  I was wondering if the "changing bias" nature of the IC numbers would favor mostly one kind of chord at one point in the piece and different chords at other points, and I felt that a "diatonic-like" scale would be the quickest way to hear this.  I decided to set up a gamut of 2 octaves of the diatonic-like scale - that is, 15 notes - for the IC numbers to control.
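
For the technically minded again, the genus is simply every product of those factors, folded into the octave.  A sketch (illustrative Python; the Scala tuning program can also generate Euler-Fokker genera directly):

```python
from itertools import product
from fractions import Fraction

# Euler-Fokker genus 5 11 17 17: factors 5 and 11 once each, 17 twice.
FACTORS = [(5, 1), (11, 1), (17, 2)]

def octave_reduce(r):
    """Fold a ratio into the octave 1/1 <= r < 2/1."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# Every combination of exponents gives one member of the genus.
ratios = set()
for exps in product(*(range(e + 1) for _, e in FACTORS)):
    r = Fraction(1)
    for (p, _), e in zip(FACTORS, exps):
        r *= Fraction(p) ** e
    ratios.add(octave_reduce(r))

scale = sorted(ratios)
print(len(scale), scale)   # 12 tones, including 5/4, 11/8, 17/16, 289/256...
```

The (1+1)(1+1)(2+1) = 12 products are all distinct odd numbers, so the octave reduction yields exactly 12 tones.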

I generated 1000 numbers from 1 to 15 with a "moving bias," and converted that run of numbers to a data file that John Dunn's ArtWonk algorithmic composing program could use.  I then set up a patch where the same run of numbers was used to produce pitches, durations, and loudnesses for 5 voices - three making chords, and two making single notes.  Each parameter read the numbers beginning at a different point, each 4 numbers apart.  (Pitch 1 begins at 0, Duration 1 begins at 4, Loudness 1 begins at 8, etc.)  I listened to the results, and after tweaking the ranges to put the chords into a nice register and the overall tempo so the rhythm had a sense of "flow," a nice chorale resulted, which I call:

Chorale for Ben, with John's Moving Biases.
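
As a footnote for the patch-minded, the offset-reading scheme can be sketched like this.  (This is illustrative Python, not ArtWonk's own patching environment, and the parameter names are mine.)

```python
OFFSET_STEP = 4
PARAMS = ["pitch", "duration", "loudness"]
VOICES = 5

def start_offsets():
    """Assign every parameter of every voice its own starting point in
    the single shared run of numbers, each 4 places further along."""
    offsets = {}
    offset = 0
    for voice in range(1, VOICES + 1):
        for param in PARAMS:
            offsets[(param, voice)] = offset
            offset += OFFSET_STEP
    return offsets

def read(numbers, offset, i):
    """The i-th value seen by a reader starting at `offset`, wrapping
    around the end of the run."""
    return numbers[(offset + i) % len(numbers)]

starts = start_offsets()
print(starts[("pitch", 1)], starts[("duration", 1)], starts[("loudness", 1)])
# 0 4 8
```

With all 15 readers drawing on the one biased run, every parameter of every voice shares the same shifting statistical weighting, just out of phase with the others.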


However, before I finished the piece, I sent an earlier version to Bill Brooks, to show him what I was doing with the material he had helped me with.  He emailed me back saying that he heard a quote from the Star Spangled Banner at the beginning of the piece!  Since the piece was made with random numbers, I was pretty amazed.  (And remember, my problem with the first movement was that it was sounding "too Ivesian.")  But I listened again, and yup, about 15 seconds in, it was very clear: "by the dawn's earl....."

Since sending the piece to Bill, however, I had made some changes to it - moved the register of the chords, changed the tempo, etc.  Did the final result still have SSB quotes in it?  I listened, not too carefully, and sure enough, the quotes were still there!  And not just at the beginning, but in several places throughout the piece as well!  And the quotes were not made by one part of the random number list having inadvertent quotes within it - no, the quotes resulted from the interplay of the different musical voices which were reading from different parts of the random number list simultaneously.  (Shades of Cabbalistic Gematria!)  By now, I can't listen to this piece without hearing the Star Spangled Banner peeking out from the tune almost everywhere.

I'd just deep-six the whole project (I mean, American foreign policy is so conflicted and negative these days, who wants to refer to that?) except for the delicious irony that the piece that had no intention of being quotational at all, made with material that was supposed to be "quotation proof," ends up being the most quotational-sounding one of the bunch.  So to share it with friends, maybe as anecdotal evidence (inadmissible in any court of scientific musico-theoretical inquiry, of course), you can hear it with the player above.


New Videos from the April 11, 2010 concert at The People's Culture Palace

On April 11, 2010, I gave a concert at The People's Culture Palace in Camperdown, NSW.  The event was hosted by Nick Shimmin and videoed by Graham Burchett, and hearty thanks to both of them for providing a wonderful place to play, and for great documentation.  

I was really happy with the results of the concert - it was a great venue and I thought I performed well.  I'm presenting here four videos, which give most of the music played at the event.  In an effort to make what is basically laptop performing a bit more visually interesting, I'm wearing t-shirts specially chosen for the occasion.  For the first piece (which is divided in two because of YouTube's 10 minute video time limit), "Texan Stretches with Frequency Modulations Owls and Springboard" - a live musique concrete piece - I'm wearing my Xenakis fan-boy t-shirt.  For the next two pieces, "E/Phi (Didn't Care)" for microtonal string quartet samples, and "Experience of Marfa: A Book of Drones Number 5" (excerpt), I'm wearing a Phi equation t-shirt.

Texan Stretches with Frequency Modulations Owls and Springboard is an 18 minute performance for time-stretched samples (made with the freeware PaulStretch), a frequency modulation patch (in Vaz Modular), some sampled owls (on the Yamaha SU-10 mini-sampler) and an amplified board with springs, screws and sandpaper attached to it, which is processed through 2 hand-controlled effects units - the Korg miniKP and the Alesis Air F/X.  The video is divided in two - part 1 ends with a quote from an interview I did with Nicholas Slonimsky in 1980, and part 2 begins with the same quote.  

Part 1:

Part 2:

E/Phi (Didn't Care) 

is a piece of mathematical sonification, an example of taking things way too far in terms of mappings, and a good-humored piece of Neo-Pythagorean modernist tunes (and tunings).  The first million digits of the mathematical constants e and Phi are used to pick pitches, rhythms, durations, timbres, tempo relations, and a few other aspects of the piece.  In addition, the scales in the piece are also based on e and Phi.  In live performance, I'm controlling the tempo of what's going on, conducting, as it were, the imaginary string quartet.
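
The piece's actual mappings are far more elaborate than I can reproduce here, but the root idea - digits of a constant read off as musical parameters - can be sketched in a couple of lines.  The 7-note gamut below is a hypothetical example, not the piece's actual scale:

```python
# Leading digits of e (the piece uses the first million; a handful will do here).
E_DIGITS = "27182818284590452353602874713526624977572470"

GAMUT = 7   # hypothetical scale size, for illustration only
degrees = [int(d) % GAMUT for d in E_DIGITS]
print(degrees[:10])
# [2, 0, 1, 1, 2, 1, 1, 1, 2, 1]
```

The same stream of digits can be read again at different offsets, or with different moduli, to drive rhythm, timbre and tempo choices.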

Experience of Marfa: A Book of Drones Number 5 

was written in 2007.  It's for 3 Arturia Moog software synths tuned in 25, 26, and 27 tones per octave.  The same chord progression is played on all 3 Moogs, using the Scala on-screen keyboards.  This means that the chords beat against each other at rates determined by the differences between the scales.  It's a drone piece that uses harmonies and tuning differences to assemble a progression of timbres, giving the listener a sound that they can "hear into," exploring sonic textures.  This video is only a 10 minute excerpt from a 50 minute piece, but it's probably long enough to give one the idea of what the piece is like.
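
To put a number on the beating: assuming (my assumption, for illustration) that all three synths share a 1/1 of middle C, here is roughly how fast the nearest-to-a-fifth degrees of the three temperaments beat against each other:

```python
BASE = 261.626   # middle C in Hz (assumed common 1/1 for all three synths)

def edo_freq(base, steps, division):
    """Frequency of a scale degree in an equal division of the octave."""
    return base * 2 ** (steps / division)

# Nearest degree to a 702-cent perfect fifth in each temperament:
f25 = edo_freq(BASE, 15, 25)   # 15 steps of 25-EDO = 720.0 cents
f26 = edo_freq(BASE, 15, 26)   # 15 steps of 26-EDO ~ 692.3 cents
f27 = edo_freq(BASE, 16, 27)   # 16 steps of 27-EDO ~ 711.1 cents

print(f"{abs(f25 - f26):.2f} Hz, {abs(f25 - f27):.2f} Hz, {abs(f26 - f27):.2f} Hz")
```

In this register the pairwise beat rates come out in the range of a few Hz - slow enough to hear as pulsing timbre rather than as separate pitches, which is what the piece's "hear into" textures are made of.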


STARE-LISTEN at Frankston Arts Centre 28 April - 7 June 2010

For those in the Melbourne area, if you're driving near Frankston at night, drive down Davey Street, past the Frankston Arts Centre, and have a look at my 1997 video work "Stare-Listen."  It's currently playing in the big glass cube at the front of the Arts Centre as part of their "Art After Dark" program.  I think the combination of the light and sound on the video, with the light and sound of the nighttime traffic in downtown Frankston, is quite wonderful.  For those who aren't in the Melbourne area, this little documentary video should give you a flavour of what experiencing the work (mostly made in non-urban environments) in this urban environment looks, sounds, and feels like.  Thanks to Angela Lang and all the staff at Frankston Arts Centre who helped make this happen.



Logos Robo-Jam Number 2 available for your listening and dancing pleasure!

It's become an annual event.  About once a year (well, this year and last), Kristof Lauwers organizes a world-wide interactive jam with the computer-controlled acoustic instruments of the Logos Foundation in Gent, Belgium.  Kristof designed the software which lets a number of people give commands to the robots, which then play the music you specify.  With a number of people playing, some very interesting improvisations result.  This morning at 4 am (which was 8 pm Tuesday in Gent, Belgium), I participated in the jam, simultaneously with 9 other folks from around the world.  Here's the list of who was jamming, and where they were.  The results were heard by an audience in Gent, and now, by you.


The crew:
Kristof Lauwers, Yvan Vandersanden and Troy Rodgers at Logos Foundation, Gent, Belgium
Celio Vasconcelos at home in Aachen Germany
Scott Barton and Steve Kemper at the University of Virginia (Virginia Center for Computer Music) Charlottesville, Virginia USA
Jaime Reis in Linda-a-Velha, Portugal, west of Lisbon, - with students in his Acoustics class at the Conservatorio de Musica de Linda-a-Velha, Escola de Musica Nossa Senhora de Cabo.
Warren Burt at home in Wollongong Australia
Simon Halsberghe - in a lazy armchair in Antwerp Belgium
Brent Wetters - at home  in Providence, Rhode Island, USA
Juan Sebastian Lach Lau - at home in Morelia, Michoacan, Mexico


Here are the results of last night's performance.  This is the recording made in the Logos studio during the concert.  It's a lot higher fidelity than what I was hearing over the Skype connection during the performance, and I'm delighted at the details in some of the sound combinations.  Enjoy!  It's 18 minutes and 33 seconds long.

Update 4 May 2010:  And here are some photos of Jaime Reis and his students in the Escola de Musica Nossa Senhora de Cabo, where he was performing his contribution.  The photos of Jaime are by Rita Cordeiro, and Jaime took the one of his students.  Looking at the photo of the students, you can see the video projector and the chat room interface that we performed with.  The photo of Jaime with the wall projection shows what the students were looking at - the chat room interface projected.  They also heard the performance that was happening in Gent in the classroom at the same time.  If you want to see pictures of the Logos instruments themselves, look up "Musical Robots" on the Logos Foundation website.

And one more picture - this is a screen-grab of the chat-room interface devised by Kristof Lauwers that we performed with.  To the left is a list of commands.  At the bottom of the screen are the riffs people are playing.  At the upper right is the list of currently playing instruments.  In the middle is the chat itself, with the robot's responses in red, and all the other players named.