I’m only here to confess that this is a ‘blog’ biting the dust: no update posted in several months, despite the author’s dreams of daily social comment, philosophising and new creative content. The truth is I’m more of a static site than a blog, and I’ve decided that’s nothing to be ashamed of. So head over to the page links on the right.
Just one quick update: I’m now at the Design Lab at the University of Sydney, an enlightened place of multidisciplinary creativity and inventive activity, with some fantastic researchers in design thinking, interaction design and robotics, and now myself, bringing a sonic dimension.
The Sonic Ecosystem project has been taking up quite a bit of my time recently. It has been running in my absence at the Cube 37 gallery in Frankston, near Melbourne. Last week it ran at Shunt, the dark, damp, stony vaults underneath London Bridge Station, which are a perfect setting for sonic artwork. I hoped to finally get some photos of the work in situ at Shunt, but they didn’t materialise. This week I am in Montreal at ICMC 2009, where the installation is one of only four installation pieces at the conference, this time running without the visuals. It has received a lot of positive feedback, and some people even caught some z’s in its immersive sound cocoon.
Below is an updated video of the piece in action. Each line represents an agent living in the environment, and each agent corresponds to a sound in the soundscape. Essentially, the principle of the piece is that sounds evolve over time so as to find compatible ways to coinhabit an environment with finite resources.
The height of a line represents the agent’s energy, which is determined by a simple (but not-so-simple, because it has been hand-tweaked) resource model based on a relation between the sound made by the agent and the collective sound of the installation environment (see below). The line turns red when the agent’s energy is sufficient for it to reproduce. Sometimes an agent’s energy drops below zero, at which point it will die with a certain probability (also meaning that it might get lucky and make it back from the brink). The width of the line, and the dots at the bottom of the screen, show the energy of the sound the agent is producing. The numbers show the ID of the sound file each agent is playing, which is genetically determined. This means you can spot evolutionary lineages as groups of agents with the same or similar IDs. All of these agents will also make similar sounds, so a populous species will produce many layers of the same overlapping sound. The rotating discs represent the activations of the neural net used by each agent, which maps features of the input sound (the sound of the environment) to controller values that modify the sound produced by the agent (sample position, granular size, rate and randomness). The behaviour of the nets is also genetically determined, which means that agents can develop adaptive behaviour over time.
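For the curious, here’s a minimal Python sketch of the kind of control mapping just described: a tiny genetically determined network whose activations (the rotating discs) map sound features of the environment to playback controls. The feature count, network shape and sigmoid squashing are my assumptions, not the installation’s actual code.

```python
import numpy as np

class ControlNet:
    """Tiny feedforward net mapping environment sound features to four
    granular-playback controls: sample position, grain size, rate, randomness."""

    def __init__(self, weights, n_features=8, n_controls=4):
        # the flat weight vector is part of the agent's genome
        self.w = np.asarray(weights, dtype=float).reshape(n_features, n_controls)

    def activate(self, features):
        # features: e.g. coarse spectral magnitudes of the environment's sound
        raw = np.asarray(features, dtype=float) @ self.w
        # squash into [0, 1]; each output is then scaled to whatever range
        # its synthesis parameter expects
        return 1.0 / (1.0 + np.exp(-raw))

# e.g. a genome slice of 32 weights drives an 8-feature -> 4-control mapping
net = ControlNet(np.random.randn(32))
controls = net.activate(np.random.rand(8))
```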
The Resource Model
The model is inspired by the seminal work of di Scipio, for whom “sound is the interface”. It places the agents in a biofeedback loop with the sound they produce: not the literal, instantaneous audio feedback that di Scipio explores, but a very indirect feedback process, best described as a loosely coupled system. Agents’ fitness is not defined by a strict fitness function but by a resource model that determines their health. Given a coarse spectrogram of the sound, an agent gains health by making sound in each frequency bin, according to the following rules: the more overall sound there is in a bin, the less health is available there, and the available health is divided between agents in proportion to each agent’s contribution to the sound in that bin. I’ve been playing with variations on this rule. A couple of hacks were necessary to avoid certain horrendous sounds: it was necessary to punish excessive broadband noise, and it also helped to limit the energy gain to just the best band for each agent, rather than the sum over all bands.
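To make the rule concrete, here’s a rough Python sketch of one plausible reading of the resource model. The scaling of available health, the noise penalty and all constants are assumptions standing in for the hand-tweaked values in the actual piece.

```python
import numpy as np

def allocate_health(agent_spectra, total_resource=1.0, noise_penalty=0.5):
    """Share per-bin resource among agents in proportion to their
    contribution, crediting each agent only its single best band.

    agent_spectra: (n_agents, n_bins) coarse spectrogram magnitudes
    of each agent's current sound.
    """
    env = agent_spectra.sum(axis=0)            # collective sound per bin
    # the more overall sound in a bin, the less health is available there
    available = total_resource / (1.0 + env)
    # split each bin's health in the ratio of agents' contributions
    shares = agent_spectra / np.maximum(env, 1e-9)
    gains_per_bin = shares * available         # shape (n_agents, n_bins)
    # credit only the best band per agent, not the sum over all bands
    gains = gains_per_bin.max(axis=1)
    # punish broadband noise: agents with energy spread over many bins
    peak = agent_spectra.max(axis=1, keepdims=True)
    spread = (agent_spectra > 0.1 * peak).mean(axis=1)
    return gains - noise_penalty * spread
```

Crediting only the best band pushes each agent towards a spectral niche rather than spreading energy everywhere, which plausibly encourages the population to differentiate.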
Health also decays at a given rate, which increases with age.
These health values then determine evolutionary outcomes in a straightforward manner. An upper threshold determines whether agents can reproduce, and a lower threshold determines whether they die, with a given uniform probability.
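In sketch form, with illustrative constants standing in for the hand-tweaked ones:

```python
import random
from dataclasses import dataclass

REPRODUCE_THRESHOLD = 1.0   # upper threshold (assumed value)
DEATH_PROBABILITY = 0.1     # per-step chance of dying once health < 0

@dataclass
class Agent:
    health: float = 0.5
    age: int = 0
    decay_rate: float = 0.01

    def reproduce(self):
        # a real offspring would inherit a mutated genome; simplified here
        return Agent(health=0.5)

def step_population(agents):
    survivors, offspring = [], []
    for a in agents:
        a.health -= a.decay_rate * (1.0 + 0.01 * a.age)  # decay increases with age
        a.age += 1
        if a.health < 0.0 and random.random() < DEATH_PROBABILITY:
            continue  # the agent dies; otherwise it may come back from the brink
        if a.health > REPRODUCE_THRESHOLD:
            offspring.append(a.reproduce())
        survivors.append(a)
    return survivors + offspring
```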
Agents carry various genetically inherited behavioural information. Each agent plays a sound file chosen from a list, and is given a start time and a range within which it can play sound from the file. Agents also respond to information about the environment via a neural net that determines how the sound is played back, including the scrub position (granular sample playback is used) and the grain size, interval and randomness.
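In code, the inherited material might be summarised like this; the field names and mutation scheme are my own guesses at one reasonable implementation, not the piece’s actual genome:

```python
import random
from dataclasses import dataclass, field, replace

@dataclass
class Genome:
    sound_id: int                # index into the list of available sound files
    start_time: float            # where in the file playback may begin (seconds)
    play_range: float            # length of the region the agent may scrub over
    net_weights: list = field(default_factory=list)  # control-net weights

    def mutate(self, rate=0.05):
        # small Gaussian perturbations, with occasional jumps to a
        # neighbouring sound file; such jumps are what make lineages
        # visible as groups of agents with the same or similar IDs
        return replace(
            self,
            sound_id=max(0, self.sound_id + random.choice([-1, 0, 0, 0, 1])),
            start_time=max(0.0, self.start_time + random.gauss(0, rate)),
            play_range=max(0.01, self.play_range + random.gauss(0, rate)),
            net_weights=[w + random.gauss(0, rate) for w in self.net_weights],
        )
```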
The framework supporting this model provides modes for running the agent and sound world in different ways, in particular facilitating batch processing and interactive exploration of the model parameters. See the paper for more details.
If I had to choose my one major achievement during my first year in Melbourne, it would have to be the compilation of this map of great places to eat, which took many evenings of painstaking effort to produce…
I was really interested to read in a PRS mailout about the ongoing conflict between PRS and Google. PRS think that Google should pay some royalty money to their artists, because Google is broadcasting all of this great music content for free. Part of me thinks that PRS have a point: you’re trying to make a living from music, but people aren’t buying it, because they can get it for free. And sure, if Google are reaping a great profit then they should share it with their content creators. But then you can see the issues with applying this principle as generally as possible: some video of a man falling on his ass trying to fix his roof could then surely be classed as an artistic creation and gross millions. Hmm, maybe this is the business model of the future; we’ll see.
But here comes the crunch. Have you ever heard of Rickrolling? It’s when you send someone a link to Rick Astley’s “Never Gonna Give You Up” as a trick. They click on the link, thinking it’s some important news item or a wiki on Java programming or something, and Rick’s irritating synth-pop anthem starts blaring out of their speakers, to everyone’s shock and amusement. For this reason, I couldn’t believe my eyes when I found the following comment on a website supporting PRS’s efforts. The comment is by Peter Waterman, co-writer of Rick’s hit. Does this make you wonder whether PRS have a point after all?…
“I co-wrote ‘Never Gonna Give You Up’, which Rick Astley performed in the eighties, and which must have been played more than 100 million times on YouTube – owner Google. My PRS for Music income in the year ended September 2008 was £11.
“Music videos and music generally is at the very heart of User Generated Content sites. It is the hard work and creative endeavour of songwriters and musicians everywhere that has been the bedrock upon which many of these websites have been built, creating along the way huge value for their owners. As well as arguing with them over royalty rates, we should be fighting them to get proper recognition for the part we’ve played in building their businesses.”
Can you imagine if Peter got 100 million small royalty payouts as a reward for writing the world’s most annoying pop tune? Was this really written by Peter Waterman, or is it a hoax?