It’s been a while since I’ve posted. As usual, I’ve been busy, but I’ve also been preparing things for the website – it was just a matter of stealing enough time from other commitments to be able to post. Here are 5 new posts, with 5 new pieces for your reading and listening pleasure.

Jacques Soddell has been organizing the Undue Noise performance series in Bendigo and Castlemaine for the past several years.  When we moved to this area, he soon got into contact with us and asked if we’d like to be involved.  We (Catherine and I) were delighted to be asked, but my very demanding teaching and commuting schedule had so far worked against our doing anything.  Then in early August, Jacques asked us if we’d like to be part of an improvisation evening he was organizing at the Old Fire Station, a lovely black box theatre in, not surprisingly, the Old Fire Station next to the Capital Theatre and the Bendigo Art Gallery.  The timing was perfect – I am currently working at Bendigo TAFE on Saturdays and finish work at 5 pm; the concert was at 8 pm, so there would be time to set up and have a nice meal before performing. 

Catherine and I decided to do a performance with her Sruti Boxes, and my Netbook, using similar materials to our online performance last November.

Catherine has 6 Sruti Boxes, Indian drone harmoniums, custom made for her.  There are three pairs, tuned to B, C and C# fundamentals respectively, with each pair having a slightly different tuning.  I played my netbook, running the Cakewalk Dimension Pro synthesizer with tones I’d made myself, in which the harmonics of each sound were tuned to the sub-harmonic series, using the prime-numbered sub-harmonics from 17 on down.  Four different versions of each timbre were made (using the additive synthesis features in Cool Edit Pro), and in performance, using two sliders on my Korg NanoKontrol, I could fade between these timbres, making any combination of them.  This produced tones whose spectra were a bit unstable and dissonant, but always changing.  I played these tones at only 12 different pitch levels, which again were the 12 prime-numbered sub-harmonics starting on 17.
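For anyone curious about the arithmetic, here is a minimal sketch of how subharmonic frequencies are calculated – my own illustration, not the actual Dimension Pro patch, and the reference frequency below is purely hypothetical:

```python
# Subharmonic n of a reference frequency f lies at f / n.
# Here: the prime-numbered subharmonics from 17 on down.
PRIMES_17_DOWN = [17, 13, 11, 7, 5, 3, 2]

def subharmonic_freqs(reference_hz, primes=PRIMES_17_DOWN):
    """Frequencies of the prime-numbered subharmonics of reference_hz."""
    return [reference_hz / p for p in primes]

# With a (hypothetical) 1700 Hz reference, the 1/17 subharmonic lands on 100 Hz:
for p, f in zip(PRIMES_17_DOWN, subharmonic_freqs(1700.0)):
    print(f"1/{p}: {f:.2f} Hz")
```

Because the divisors are all prime, none of the resulting partials share a common overtone, which is part of what gives these spectra their unstable, dissonant shimmer.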

In performance, Catherine is continually changing which Sruti Boxes she is using, and which pitches are playing on which box.  I’m slowly changing which pitch or pitches I’m playing (pitches are triggered by on/off buttons, not by a keyboard), adjusting the timbre, and also changing the overall volume, adjusting the balance between Srutis and Electronics.  

The performance went very well.  We, and the audience, were very pleased.  Jacques had recorded our performance, and on listening back, we thought it was good enough to share with friends in web-land.  Catherine remarked that the continual cross-fading of different harmonic textures in the piece was similar to the idea that animated her making of her 30-meter-long graphic score, “Blue Line.” For those of you unfamiliar with that score, here are a few photos from the 2009 performance of it with Speak Percussion (Eugene Ughetti, Matthias Schack-Arnott, and Leah Scholes) as part of the “Catherine Schieve: Graphic Music” concert at the Melbourne Recital Centre.

(Photo credits: Siri Hayes, Catherine Schieve)

 So our piece for Sruti Boxes and Electronics is now called “The Idea of Blue Line.” Here’s the recording of it, for streaming and downloading, in mp3 and ogg.  Many thanks to Jacques Soddell for inviting us to play, making the recording, and then sending it to us so promptly.  Enjoy.

Download the piece in MP3 format HERE.

Download the piece in OGG format HERE.



Of late, I’ve been seeing patterns in the world around me which I’ve thought would make good scores for graphics-to-sound conversion.  A number of those pieces are documented in this blog – most recently “Berries,” Mike Cooper’s Shirt, and a graphics and sound piece for Kenneth Gaburo.  Well, this is clearly getting out of hand – I’m now seeing good music patterns just about everywhere.  About two weeks ago, I was walking from the train to Bendigo TAFE, my other employer, and just across the street from the campus there were some gravel patches next to the sidewalk.  In the morning light they looked quite appealing, so out came the cell phone, and I took a couple of shots.  About a week passed before I could finally begin to find out whether the pictures of gravel had any potential to make a sound score.

As you can see, the picture is fairly uniform, but the variety of shapes is quite attractive.  I thought that perhaps this might make a texture of little grains of noise – perhaps a noise-scape, to contrast with the sine-wave pseudo-additive-synthesis sounds of “Berries.”  The first step, as always, was to get some black in the background.  Three different treatments were made, and I tried converting all of them to sound.

Visually, I liked the first treatment best.  To do the conversion, I used Coagula, which uses the Red and Green components of each colour to determine a sound’s position in stereo space, and the Blue component for the amount of band-limited noise in each sound.  No Blue = sine wave; all Blue = all noise.  But when I converted the first picture to sound, it made a fairly unrelenting, undifferentiated noise-band.  The third treatment was more promising, and is also fairly visually appealing, but it too seemed to make sound that, while more differentiated than the first picture, was still very heavily weighted toward being “just” a noise-band.  The second treatment, although not as visually appealing as the other two, produced a much wider variety of sound-types – starting with a mix of tones, burbles, and small noises, through rushing noises dominating the middle, and settling down to a mix of sound-types near the end.  
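To make that colour rule concrete, here’s a rough sketch of the mapping as I’ve just described it – my paraphrase of Coagula’s behaviour, not its actual source code, and the brightness-to-loudness rule in particular is my own assumption:

```python
def pixel_to_params(r, g, b):
    """Map one RGB pixel (0-255 per channel) to rough synthesis parameters
    in the spirit of Coagula: Red/Green set the stereo position,
    Blue sets the sine-vs-noise mix.  (A paraphrase, not the real program.)"""
    rg = r + g
    pan = 0.5 if rg == 0 else r / rg   # 0.0 = all one channel, 1.0 = all the other
    noise = b / 255.0                  # 0 = pure sine wave, 1 = all band-limited noise
    amp = max(r, g, b) / 255.0         # brightness ~ loudness (my assumption)
    return pan, noise, amp

# A pure-blue pixel: centred in the stereo field, all noise.
print(pixel_to_params(0, 0, 255))
```

This also shows why a black background matters in these pieces: a (0, 0, 0) pixel gives zero amplitude, i.e. silence.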

Settling on a duration for the realization was an interesting quest.  When the picture was realized as a 30 second burst of sound, it was mostly a burbling texture:

Stretching the duration to 5 minutes produced a more differentiated texture, but the progression of sound types seemed too rapid and the rhythm too rigid:

With the duration set to 10 minutes, the speed of reading the individual pixels became almost a pulse oriented beat.  My dance-music colleagues might find this one useful, but I didn’t.

A duration of 30 minutes seemed to slow things down to the point where individual textures and noises could be appreciated and even savoured.  But over the course of 30 minutes, the rhythm, for obvious reasons, began to appear a bit “samey.”

I then made a 25 minute version, which was just a little bit faster (6/5ths faster, if you want to be technical).  Mixing the two versions together made a texture that was too busy, but cross-fading from one version to the other produced a very pleasing sense of the texture getting faster and slower on the long time-scale, while still giving the variety of sound-types I found appealing.  It also preserved the dramatic sweep of the piece, from a mix of small sounds and noises to a roaring noise-band, and back again. 
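The tempo relationship and the long crossfade can be sketched like this – the actual mix was of course done by ear in an audio editor, and the toy signals below are purely illustrative:

```python
from fractions import Fraction

# 30 minutes against 25 minutes: the shorter version runs 6/5ths faster.
assert Fraction(30, 25) == Fraction(6, 5)

def crossfade(a, b):
    """Linear crossfade from signal a to signal b (equal-length sample lists)."""
    n = len(a)
    return [a[i] * (1 - i / (n - 1)) + b[i] * (i / (n - 1)) for i in range(n)]

# Fades from all-a at the start to all-b at the end:
mixed = crossfade([1.0] * 5, [0.0] * 5)
```

Crossfading rather than summing is why the texture speeds up and slows down instead of simply doubling in density.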

The problem with doing this was that what at first seemed like a whim – “That looks neat – let’s photograph that and see what it sounds like!” -  turned into a many hours long task of listening to sound after sound, again and again, until finally arriving at what I think is a good sound-structure.  Most of this listening took place late at night, under headphones.  I think I slept through a lot of it.  Sub-conscious perception, anyone? 

In any case, here’s the result – a companion piece for “Berries.”  This one is called “Gravel.”  The two of them together fill an hour – they might make a good concert, or an installation, if I can ever find the time to organize an event like that.  Meanwhile, in the world of streaming audio and downloads, you can download both, or listen to both on line, and make your own situation for listening to them. 

As usual, you can stream the piece below, or download it in mp3 or ogg (higher fidelity) formats.

Download the piece in MP3 format HERE.

Download the piece in OGG format HERE.




Back around March, when I learned that the theme of this year’s Australasian Computer Music Conference, to be held in Auckland in July, was “Organicism in Electro-acoustic Music,” I decided to make a piece with all bird, or bird-like, sounds.  I was involved in the ongoing beta-testing of Richard Orton and Archer Endrich's Process Pack, so I decided to start with bird song and see what sounds I could get with that software.  Looking through my collection of bird samples, I chose three Australian birds (Magpie, Tawny Frogmouth (which I had recorded outside our window when we lived in Kanahooka NSW), and Rainbow Lorikeet), two Brazilian birds (Uirapuru, Toucan) and one Antarctic bird (the Emperor Penguin).

I chose the bird samples pretty quickly – I wasn't too particular about which birds I used, but I quickly realised I wanted a sound with more bass or depth than most birds.  Even the Emperor Penguin didn't have enough of that for me.  Where, I wondered, could I get a recording of a BIG bird?  Besides Sesame Street, that is.  

I remembered that back in 2002, when I was in Urbana, Illinois, Anthony Ptak and I had made a fun trip up to Chicago to the Field Museum to record their Parasaurolophus simulation.  The Parasaurolophus was the Cretaceous dinosaur with the long crest on the back of its head.  Examinations of the skeletons have shown how its breathing mechanism extended up, through, and around the crest: the vocal tract was several meters long.  The Field Museum had constructed a pair of “lungs” that you could squeeze, and the pressure from those went through a curved pipe of the same length and diameter as the vocal tract of one of the skeletons.  Depending on how you squeezed, you could get anything from guttural grunts to extended sliding wails.  Since current thought is that these were pack animals who used sound for communication, the Cretaceous must have been a very lively and noisy place.  In our time at the museum, we recorded about 20 minutes of different kinds of dino sounds. 

I don't know what Anthony did with his samples, but I used mine later that year in a performance in Albany, NY, with performance poets Lori Anderson Moseman and Druis Beasley, entitled “Bog Girl and Mud Womyn.”  Here are some links to their current websites and work:



Pictured: Lori Anderson Moseman (top), Druis Beasley (middle), and Perry Parasaurolophus, who followed me home from the Field Museum and has been cheering up the place ever since (bottom).

So back to the sample vault I went.  The Parasaurolophus sounds were indeed very good material, so one of those, along with the other six bird sounds, became the source materials.  Four of the processes in Process Pack were used on the original sounds: Filter Bank, Hover, Pyramid, and Wraith.  I used Filter Bank to create suspended chords with the original sounds softly present underneath them.  With Hover, I drew all the “control curves” used in the process by hand, fragmenting the original sounds in ways that sometimes resembled the originals and sometimes were quite abstracted.  Pyramid stacked the Hover sounds into chords of the same sample played at many different speeds.  Wraith extracted only a few harmonics from the spectrum of the treated sounds.  Additionally, I used PaulStretch to time-stretch the Wraith sounds.  The Filter Bank and Wraith sounds were smooth and pitch-oriented, while the Hover and Pyramid sounds were noisy and agitated in texture. 

With 7 original bird calls (assuming that a dinosaur, even a virtual one, is a bird relative) and 4 processes, this gave me a vocabulary of 28 sounds to work with.  To play these, I used the same Plogue Bidule sound-mixing patch I'd developed last year for “Texan Stretches,” but changed the transposition possibilities for the sounds as I was mixing them.  There were four different sample players, each with all 28 samples available.  As the transposition of each sample was different on each sample player, any of the samples could be played in four different versions at once, making chords and polyrhythms drawn from five different pitch possibilities (the original and four different transpositions).

For the transposition pitches, I used a scale that Jacky Ligon had sent me – a non-octave Pythagorean-type scale in which phi was the generator (1.618/1 = approximately 833.09 cents) and in which phi raised to the power of phi (2.178/1 = approximately 1347.968 cents) was the period, or fold-over point.

(Technical tuning note: In a normal Pythagorean scale, you stack up copies of a single interval (in this case 833.09 cents), and if the resulting interval is more than an octave, you lower the resulting pitch by an octave.  In this scale, instead of “folding over” the intervals at an octave (1200 cents), we fold them over at 1347.968 cents.  Scales of 5, 8, 13, and 21 notes made in this way exhibit Moment of Symmetry properties.  If anyone wants a further explanation, write me directly with the Contact form on this website.  If enough people contact me, I’ll write a small blog post explaining the matter more thoroughly.)
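For readers who’d like to see the construction spelled out, here’s a small sketch of the stacking-and-folding process – my own illustration of the scale Jacky Ligon sent me, not his code:

```python
import math

PHI = (1 + 5 ** 0.5) / 2        # 1.618..., the generator as a frequency ratio

def cents(ratio):
    """Interval size in cents for a given frequency ratio."""
    return 1200 * math.log2(ratio)

GENERATOR = cents(PHI)          # ~833.09 cents
PERIOD = cents(PHI ** PHI)      # ~1347.968 cents, the fold-over point

def phi_scale(size):
    """Stack copies of the generator, folding each pitch back inside the
    period (the phi analogue of building a Pythagorean scale in an octave)."""
    return sorted((k * GENERATOR) % PERIOD for k in range(size))

def step_sizes(degrees, period=PERIOD):
    """Intervals between adjacent scale degrees, including the wrap-around."""
    steps = [b - a for a, b in zip(degrees, degrees[1:])]
    steps.append(period - degrees[-1])
    return steps
```

Checking the 5-note version confirms the Moment of Symmetry property mentioned above: the scale contains exactly two distinct step sizes.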

The original sounds already had a sense of pitch about them – the scale was used to make further transpositions of these sounds.  With the addition of these transposition possibilities, I now had far more sound resources than I could possibly mix and play with in any individual performance.  Since I value unpredictability in performance, this meant that each performance, even if it followed the same general form, would be different. 

I envisioned each performance as being around 10 minutes long, and the first performance, at Box Hill Institute, on a Faculty afternoon recital in May, was about that length.  In later rehearsals in my studio, the length of a performance seemed to stretch out to 12 minutes, and that was also the duration of the performance I gave (the first one which incorporated the Phi-scale transpositions) at the Australasian Computer Music Conference at the University of Auckland, on July 6.  (That performance was on a lunchtime concert.  This piece seems to be evolving as a mid-day raga!)

Finally, in early August, I sat down to make a good studio recording of the piece.  I decided that rather than adopt the strategies I'd used for making shorter performances at Box Hill and Auckland, I'd just play away, letting the sounds take their own time, finding out how long that process would be.  The completed recording was 23:40, and when I listened back, I was delighted with the pace of the performance.  Now the sounds seemed to breathe.  The progression from smooth pitched sounds to noisy textures and back to pitch didn't seem forced to me, either.  I enjoyed hearing different families of sounds (modified magpies, for example) as they appeared and reappeared in the piece in different guises.  

The performance at the University of Auckland was also videoed. That is now available at this address. For those who want to hear the longer one, here's the 23:40 version of “The Bird is the Word,” in streaming form, and downloadable in mp3 and ogg (higher fidelity) formats.  

Download the piece in MP3 format HERE.

Download the piece in OGG format HERE.




I recently read Kenneth Gaburo's classic 1970 essay, “The Beauty of Irrelevant Music,” in some of my classes.  In the essay there are the lines “and what if these graphics had something to do with my composition?” and “and what if my composition had something to do with these graphics?”  These lines refer to the computer graphics by Herbert Brun that were used in the 1973 performance at the Center for Music Experiment at UCSD.  I've performed this essay a number of times in the past 30 years, always using copies of the original Brun slides.  However, as readers of this blog know, we moved at the beginning of this year from Wollongong to Victoria, and on moving, I plunged immediately into a demanding and time-consuming teaching schedule.  Consequently most of my archive is still in storage, packed away in boxes.  This includes the Brun slides.  Faced with this problem, I decided to make my own graphics “in the style and spirit” of the Brun originals.  I've been working with ArtWonk for quite a while now, on a project to algorithmically generate computer graphics which I then use as spectrograms in graphics-to-sound conversion programs.  I decided to use this program to generate the images I needed.  After a little while, I made a patch which generated repeating variants on hexagons with randomly determined corners.  It sounds complex, but it isn't – it generates shapes which look like this:

There are 20 sections in Gaburo's essay, each of which should have a different graphic with it.  So I generated 20 graphics.  These look somewhat like Brun's original graphics, but are not as complex, and are in colour rather than black and white.  Still, I was following Brun's instruction to “visualise the process by which one would have liked to generate these graphics and then make a composition with that process.”  I then thought that since I normally listen to the output of the graphics-generating program as sound, I should do the same with these.  I realised each graphic as a little “burst” of sound lasting 1, 2, 3, 5, or 8 seconds, using Rasmus Ekman's Coagula, which realises the colours as a mix of sine waves and band-limited noise.  It then occurred to me that I might play these sounds at the end of each paragraph of the essay, giving the ears a slight “change of focus” between texts.  It's not in Gaburo's original score, but I thought it was worth a try, and in performance it worked very well.

I then tried pasting the 20 sounds together, end to end to hear what would happen.  The results were uninspiring.  On their own, as bursts of sound between text, the mix of sines and noise worked just fine.  But put together as a sequence, the result was timbrally disappointing, and the rhythm of change felt a bit limp.  I might have just abandoned things there, but I'd been telling students about waveshaping recently, and I thought that I'd try putting the sounds through the Melda Production Waveshaper, which I recently bought, and with which I was delighted.

I tried processing one of the sounds through the waveshaper, and I liked the way it sharpened the noise, while still retaining the contours of the original gesture.  If I designed my own highly idiosyncratic waveshaping curves, I thought, I could give the sounds a lot more sparkle and bite.  Why not make 20 different curves, one for each sound?  And why not add reverb to each sound?  But instead of a normal reverb, why not use a convolution reverb, with a different impulse response for each sound, thus putting each sound into a distinctive virtual environment? 
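For readers who haven't met waveshaping: it's simply passing every sample of a sound through a fixed transfer curve, and a non-linear curve adds harmonics.  A tiny sketch, using a stock tanh soft-clip curve as a stand-in (nothing like my idiosyncratic hand-drawn curves or the Melda plugin's internals):

```python
import math

def waveshape(samples, curve=math.tanh):
    """Pass each sample through a transfer curve.  A non-linear curve
    (here tanh, a soft clipper) adds harmonics, sharpening the timbre."""
    return [curve(x) for x in samples]

# Soft-clipping an over-loud signal: peaks are squashed toward +/-1.
shaped = waveshape([0.0, 0.9, 2.0, -2.0])
```

Drawing your own curve just means replacing `curve` with any function from input level to output level, which is where the “sparkle and bite” comes from.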

I'm a great fan of making patterns and seeing/hearing what happens with them.  Often, if I can't predict the results of an action or a pattern, I'll make a lot of them, and then observe the results.  That was the case here.  I made waveshaping curves that looked pretty counter-intuitive, and I selected impulse responses at random from my collection acquired over the years.  (For those of you unfamiliar with convolution reverb, it's a technique where a short loud sound (the impulse) is recorded in an environment.  Your desired sound is then convolved (combined by multiplication) with the impulse, and the result sounds as if the original were recorded in the environment of the impulse.  The impulse is usually a short burst of white noise, or a clap of the hands, but it can be anything.  For example, if you recorded an announcement in a very reverberant train station, and then cut one word out of that announcement, you could get sounds that sounded like they were played in the train station, but coloured by the vowels of the word from the announcement.)
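Here's a bare-bones sketch of that combining-by-multiplication – a naive direct convolution, standing in for what real convolution reverbs do far more efficiently with FFTs:

```python
def convolve(dry, impulse):
    """Direct convolution: every input sample launches a copy of the
    impulse response into the output, scaled by that sample's value."""
    out = [0.0] * (len(dry) + len(impulse) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse):
            out[i + j] += x * h
    return out

# A single click played through a tiny three-sample "room":
# the output is just the room's decaying response.
print(convolve([1.0], [0.6, 0.3, 0.1]))
```

Note that the output is longer than the input by the length of the impulse minus one sample – those are the reverb tails that gave the assembled sequence its breathing space.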

With 20 sounds, each with its own waveshaping, and each played in a different (simulated) environment, I thought we might be on to something.  And we were.  When these sounds were assembled end to end, the space created by the reverb tails gave the sequence a bit of breathing room, and made a rhythm of succession that seemed both more spacious and yet tighter than the original.  I was tempted to assemble the drawings into a video to combine with the sounds, but the widely varying levels of brightness, and the thinness of the lines, did not lend themselves to a low-res (YouTube-like) video.  So the piece is sound only, although here are thumbnails of 5 more of the drawings.

And here's the sound piece, followed by download links for mp3 and ogg (higher fidelity) formats.

Download the MP3 format version of the piece HERE.

Download the OGG format version of the piece HERE.



The ever amazing Mike Cooper sent me an email the other day, expressing his appreciation for my graphic-to-sound composition Berries.  He sent along a photo of a Hawaiian shirt of his, wondering what it would sound like if used as a score in the manner of the Berries photos.  Looking at the photo, I realised that it would have to be treated, because all my graphic-to-sound programs use black to indicate silence, and Mike's shirt photo had no black – the picture would be interpreted as fairly dense undifferentiated noise in its untreated state.  So I processed the photo through the free image program GIMP.  I adjusted the colour curve to be almost all black, and only allowed one small area of the image to come through as other colours.  I may have been a bit too severe in my truncating of the colours because, as you can see, only a little bit of the original image survived.  Further, since the original photo was pretty low-res (being only 640 x 480 pixels) the result was fairly grainy.  However, the manipulated image still looked like it was fairly complex, so I decided to use it, and hear what resulted. 

Mike Cooper's Shirt before preparation for graphic synthesis.

Mike Cooper's shirt after preparation for graphic synthesis.

Realised with just sine waves, the result was pretty uninspiring – a bit of twinkle, but that was about it.  It then occurred to me that a shirt belonging to a virtuoso guitar and ukulele player like Mike should probably be realised with ukulele or guitar samples instead of sine waves.  In my sample collection I have an ancient sample called “Bermuda.”  I believe it's from around 1988–90, so long ago that I can't remember where it's from, or who is playing on it.  The sample is of a man playing a guitar or ukulele, singing, and stomping his foot in time with his playing.  It's very cheery.  Here's the original sample:

"Bermuda" - original sample

I made two very short samples from this, one consisting of the first strummed chords, the second consisting of just three descending notes with a bit of foot stomping thrown in.  Here are the two samples. 

"Uke 1" - sample 1

"Uke 2" - sample 2

Nicholas Fournel's AudioPaint allows you to use any sample to realise a graphic, not just sine waves or noise.  So I realised the processed photo into sound with both samples, producing two short sound files, which I then sent on to Mike.  (For those of you who like tech things: Stereo Space was controlled by the Red/Green colours; the Duration was either 120 or 180 seconds; and the Frequency Range was 50–12,500 Hz, with a Logarithmic Pitch Spread.)

Mike liked the sound of these realisations, and I agreed – the use of the vastly stripped back photo with the guitar/uke samples did indeed produce a fairly exciting sound texture.   I thought that was the end of the thing, but my mind wouldn't rest.  What if we played the photo horizontally instead of vertically?  What if we turned it upside down?  What about upside down and backwards?  You know, all the usual serialist schticks.

I realised that if I was going to do the serialist thing, though, I'd actually have 8 different versions of the picture – the original and the original flipped left-right; those two upside down; and the same 4 alterations for the photo turned to read horizontally.  However, I decided to be loyal to “the tradition” (or I was simply lazy), and only made 4 versions: the standard original and it upside down, plus the horizontal left-to-right and right-to-left forms of the photograph. 
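For the curious, those 8 versions are just the symmetries of the image grid.  A toy sketch, using nested lists as a stand-in for the actual photo:

```python
def flip_lr(img):
    """Mirror the image left-to-right."""
    return [row[::-1] for row in img]

def flip_ud(img):
    """Mirror the image top-to-bottom (upside down)."""
    return img[::-1]

def transpose(img):
    """Swap the axes, i.e. read the picture horizontally instead of vertically."""
    return [list(col) for col in zip(*img)]

def eight_versions(img):
    """The original, its flips, and the same four for the transposed image."""
    t = transpose(img)
    return [img, flip_lr(img), flip_ud(img), flip_ud(flip_lr(img)),
            t, flip_lr(t), flip_ud(t), flip_ud(flip_lr(t))]
```

With any asymmetric image, all 8 come out distinct – of which, as I said, I only used 4.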

Two of these pics used the first Uke sample, the other two used the second.  I then had four sound files, of different lengths, and I decided on an order for them, which included substantial overlapping, and mixed them together.  The result is this little eight minute piece, which I'm quite happy with.  It seems to have lots of variety, and the textures change in a really exciting manner.  The combination of pitch range, register, the choice of samples, the density of the image, and the durations chosen for each sound file all seem to combine to make an intricate and pleasingly evolving sound texture.  The mixing of the four versions enhances this sense of progression as well.  Here's the finished piece, with downloads available in both mp3 and ogg formats.  I hope you enjoy it.

Download the piece in MP3 format HERE.

Download the piece in OGG format HERE.