Wednesday, November 14, 2007

OnePlusTwo

These images are from a recent performance at OnePlusTwo, Tuesday 13th November, at Horse Bazaar.

As the third years at RMIT prepare to showcase their final works, Marcia Jane and I decided to fill the gaping void left for the kids in first and second year who still wanted to show their work.

Thankfully, the night went relatively smoothly and all that. I don't think anybody noticed the bumps... But what I'm really excited about is the public debut of my video sampler, built for Live Audio Vision.

It takes live video and stores it in a buffer, from where it can be called back to the screen. This makes it possible to have four separate videos (or indeed four copies of the same video) playing back at the same time, layered over the top of each other. The effect, not shown terrifically well here, is quite strange. The experience is repetitive and mesmerising: there are structures within the loops, both sound and image, but their reference points to each other are constantly changing.
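
For the curious, here's roughly how the sampler behaves, sketched in Python. The real thing is a Max/MSP/Jitter patch, so everything below (the class names, the averaging, the numbers) is mine, purely for illustration:

    import numpy as np

    class Loop:
        """One playback head reading a slice of the shared buffer."""
        def __init__(self, start, length, speed=1.0):
            self.start, self.length, self.speed = start, length, speed
            self.pos = 0.0

        def next_index(self):
            idx = self.start + int(self.pos) % self.length
            self.pos += self.speed  # a speed other than 1.0 replays fast or slow
            return idx

    class VideoSampler:
        def __init__(self, capacity, height, width):
            # ring buffer of frames: live video is written in at one end...
            self.frames = np.zeros((capacity, height, width, 3), np.float32)
            self.capacity, self.write = capacity, 0
            self.loops = []  # ...and up to four loops read it back out

        def record(self, frame):
            self.frames[self.write % self.capacity] = frame
            self.write += 1

        def output(self):
            # layer the loops over the top of each other (a simple average here)
            layers = [self.frames[l.next_index() % self.capacity] for l in self.loops]
            return sum(layers) / max(len(layers), 1)

Each Loop keeps its own position, so four loops reading different slices of the same buffer drift in and out of phase with each other, which is where the shifting reference points come from.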

The most exciting thing about this performance is that, I think, it is moving towards a truly improvised collaboration between sound and image. All the ingredients of a "traditional" improvised performance are present, but at the same time so is the video camera, which is recording the performance and re-performing elements of it...

So there is potential. I still need to iron out a few bugs from the patch. It's not quite as reliable as I would like. At the beginning of the performance there was a 10 minute silent patch where I couldn't get any sound (I'm blaming Digidesign), and recording sound and image does not have a high success rate yet; it comes off about 60-70% of the time. That really needs to improve before I will be completely happy with it, especially since with video the best moments seem to happen spontaneously, without warning. You don't want to miss them.

Big thanks to Rosalind Hall for being such a willing collaborator, and to Marcia Jane for being such an instigator.

The patch in use!

Monday, November 12, 2007

Firefly

Screenshots from Firefly.

Firefly was the second assignment due this semester for Live Audio Vision. It's built on a random-access sampler which you feed by dropping a folder onto a drop zone. I wanted it to be easy to use and relatively rewarding, even if you don't know much about what's happening.

In its "automatic" mode, Firefly will randomly choose files, loop points, playback speeds and the rate at which it chooses samples. In manual mode, you can do it all yourself, including scrubbing audio and resizing loop points.
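
In pseudo-Python, automatic mode boils down to something like this. The actual patch is Max/MSP, `play` stands in for whatever triggers playback, and every name and range here is made up for illustration:

    import os, random, time

    def automatic_mode(folder, play):
        files = [os.path.join(folder, f) for f in os.listdir(folder)]
        while True:
            sample = random.choice(files)                # random file...
            loop_start = random.random()                 # ...random loop points,
            loop_end = random.uniform(loop_start, 1.0)   #    as fractions of length
            speed = random.uniform(0.25, 4.0)            # ...random playback speed
            play(sample, loop_start, loop_end, speed)
            time.sleep(random.uniform(0.5, 5.0))         # ...at a random rate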

The output of this is then sent to an oscilloscope-type thing, which is mapped onto OpenGL shapes.
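
One way to picture that mapping: each block of audio becomes a set of vertices for an OpenGL shape. Again a Python sketch rather than Max, and the circle mapping below is just one example I made up, not necessarily what Firefly does:

    import numpy as np

    def waveform_to_vertices(samples, radius=1.0):
        """Wrap a block of audio around a circle: phase -> angle, amplitude -> radius."""
        n = len(samples)
        angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        r = radius * (1.0 + samples)  # amplitude pushes each point in or out
        xs, ys = r * np.cos(angles), r * np.sin(angles)
        return np.stack([xs, ys, np.zeros(n)], axis=1)  # (n, 3) vertex array

    # drawn as a GL line loop every frame, this gives a pulsing, wobbling ring
    verts = waveform_to_vertices(0.3 * np.sin(np.linspace(0, 8 * np.pi, 512)))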

Hopefully some video to follow soon. (I'm having some trouble getting the final window to render to a video matrix that Jitter will be able to understand - any suggestions?)