The problem with interactive entertainment is that it is easy to either get carried away with the ‘wow’ factor of things or pile up so much material to grab the player/user’s attention that it can get very confusing. At the early stages of designing Meltdown, we tried our best to keep the concept and the soundscape very simple. We had enough to do on a technical level (the actual building of the sound engine and the logic system) that I did not want to overload us with grand features. Baby steps always, until our feet get bigger!
The general ambience of the game was a bit tricky. We needed something that was responsive (but not so responsive that it distracted the player), constant (but not irritating) and electronic/digital in nature (without sounding like a typical synth). The final patch consisted of a sound file player crossfading between two tonal files (composed in Logic), feeding through a convolution module (convolved with a creature/animal sample), a delay module, a cheap reverb module and a granulation module. The granulation and delay times (variable delay ramps create a pitch up/down effect) were controlled by the compass on the iPhone to add a degree of reactiveness. The patch also had an additional module that crossfaded between various ‘creepy’ ambiences designed by Orfeas. Even with all this processing, our CPU load was more than manageable.
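The compass-driven pitch effect works because ramping a delay time moves the read head through the buffer at a rate other than 1x. Here's a minimal NumPy sketch of that idea; the `heading_to_delay` mapping and its ranges are my own stand-ins, not the values from the actual Max patch:

```python
import numpy as np

def variable_delay(signal, sr, delay_start, delay_end):
    """Read the input through a delay line whose delay time ramps
    linearly from delay_start to delay_end (seconds). A changing
    delay shifts pitch: a growing delay pitches down, a shrinking
    delay pitches up."""
    n = len(signal)
    delays = np.linspace(delay_start, delay_end, n) * sr
    read_pos = np.clip(np.arange(n) - delays, 0, n - 1)
    i0 = np.floor(read_pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    frac = read_pos - i0
    # linear interpolation between neighbouring samples
    return (1 - frac) * signal[i0] + frac * signal[i1]

def heading_to_delay(heading_deg, min_d=0.0, max_d=0.05):
    """Map a compass heading (0-360 degrees) onto a delay time.
    A hypothetical scaling, standing in for however the patch
    scaled the iPhone compass data."""
    return min_d + (heading_deg % 360) / 360.0 * (max_d - min_d)
```

In a real-time version you would recompute the target delay per audio block as new compass readings arrive and let the ramp glide towards it, which is what produces the continuous pitch wobble as the player turns.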
Here’s a sample of what it sounded like:
The patch (click for a larger view):
Next post: the interactive mixing system.
Sound design is a bit like arranging and producing a song. It could be defined as the process where you get all the micro ingredients to work well together (to quote Tomlinson Holman: “Sound design is the art of getting the right sound in the right place at the right time”). The mix is when you cook the ingredients to fit the context. The line can be a bit blurry.
It can get even more blurry with interactive/non-linear media. The implementation process for any interactive and reactive medium (games, interactive installations, interactive apps) is one of the most important steps. You might have the most awesome sound designed in your DAW, but does it feel right when it is implemented? Does the sound roll off and decay as intended? How quickly does it respond? How expressive is it? How repetitive is it? How can it be parametrised to give the user/player the impression that it is reacting to their input? Maybe the answer lies in the way it interacts with the space and with the other sound-making objects around it. Or maybe it is the changes in its expression and how it alters over time (across different axes – pitch, volume, ADSR and space).
Trying to answer some of these questions makes it easier to understand what can be done offline (in the DAW) and what can be done at the implementation stage. Taking a step back and looking at the larger picture (the forest for the trees, THE FOREST for the trees) might help too.
We faced some of these questions while creating Meltdown. Being a prototype project, we were limited by time and resources on one hand and had almost limitless creative options with Max/MSP on the other. While constantly asking ourselves questions (like the ones above) and working quickly, we forced ourselves to step back and look at the larger picture and use the implementation as a design process. The player had to believe that they were interacting with an environment they couldn’t see but only hear. In retrospect, treating the tools as design tools and not technology made the most difference.
First and foremost, it was important for us to set up the environment. We decided that the player would be immersed in an environment in which they would be surrounded by snippets of sounds – almost as if the sounds from the environment were contained within a space and mashed across time (we called them the ‘SonicEchoes’). Orfeas recorded a lot of great material from the Meadows – children playing, conversations between parents, traffic, sirens and everything else that can be heard at a children’s park. He edited the best moments and handed them over to me as two- or three-minute recordings of ambience. In Max, I built a random file playback system which reads random snippets off a list of sound files in a random order across a range of randomised pitches. We later added a chance of the files being played back in reverse. The randomisation helped us use a small number of files without them sounding too repetitive. These chunks of audio were further delayed (on a variable delay line to create pitch up/down effects) and granulated to add some chaos. We then used real-time spatialisation effects – reverb, distance roll-off, binaural panning and Doppler – to make it sound like the player was surrounded by a swarm of sounds. Since the system ran on a MacBook Pro we really didn’t have many limitations with processing (only the technical limitations of Max/MSP).
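The random playback logic itself is simple enough to sketch outside of Max. Here's a hedged Python version of one ‘SonicEchoes’ grain, assuming the ambience files are already loaded as NumPy arrays; the snippet lengths, pitch range and reverse probability are illustrative guesses, not the patch's actual settings:

```python
import random
import numpy as np

def random_snippet(buffers, sr, snip_range=(0.2, 1.5),
                   pitch_range=(0.7, 1.4), reverse_chance=0.25,
                   rng=random):
    """Produce one grain: pick a random buffer, cut a random-length
    snippet from a random offset, possibly reverse it, then resample
    it to a random pitch (ratio > 1 plays faster, i.e. pitches up)."""
    buf = rng.choice(buffers)
    length = int(rng.uniform(*snip_range) * sr)
    start = rng.randrange(max(1, len(buf) - length))
    snippet = buf[start:start + length]
    if rng.random() < reverse_chance:
        snippet = snippet[::-1]
    ratio = rng.uniform(*pitch_range)
    # resample by reading at fractional positions (linear interpolation)
    idx = np.arange(0, len(snippet) - 1, ratio)
    i0 = idx.astype(int)
    frac = idx - i0
    return (1 - frac) * snippet[i0] + frac * snippet[i0 + 1]
```

Firing this repeatedly on a timer, with each grain routed through its own delay/granulation/spatialisation chain, gives the swarm-like texture described above.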
Would it have been possible without the implementation-design tricks? Yes. As effective? Probably not.
Here’s a sample. The second half is binaural, so make sure you have your headphones on! The clicks you hear are intentional as there are moments in the game where the sound track gets drowned in static.
More examples in the next post!
Click on the images below for more detail.
A wormhole is detected.
Nothing is seen. Disturbances are felt. Things can be heard.
You hear creeptures amongst a swarm of sounds.
Find and kill all seven of them. You have ten minutes. You have your ears.
Meltdown is a location-based binaural sound game that was developed as a prototype to explore the use of location-based technologies and their influence on sound.
The game: a sonic rift and wormhole have been detected at a park, and the effects can be felt but not seen. If the creeptures aren’t killed, the infected area will become a sonic dead zone. You have the technology to listen to them amongst the sounds they have trapped in space. Eliminate them and return the area to normalcy.
The search-and-destroy concept of the game is not new. What makes it interesting is that the player interacts directly with the environment (within a fixed area). There is no screen. There is no typical game controller. Armed with just an iPhone (which is both a weapon and a scanner) and their ears, they must walk and think like a hunter. On encountering a creepture they must binaurally locate it, bring it closer (using a gesture) and kill it by stabbing the iPhone in the correct direction. The game is immersive not only because it is binaural but also because it includes sounds from the environment. For example, the player might hear the swing moving (properly localised with the correct distance and angle calculations) but won’t see it moving. They might hear someone running past or a dog barking without seeing any of it, all while interacting with the environment and reacting to it with their bodies and minds.
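Localising a fixed environment sound like the swing comes down to turning the player's GPS position and compass heading into a relative angle (for the binaural panner) and a distance (for gain roll-off). A minimal sketch of that geometry, using an equirectangular approximation that is fine for a park-sized play area; the roll-off law and reference distance are assumptions, not the game's actual values:

```python
import math

def source_cues(player_lat, player_lon, heading_deg,
                src_lat, src_lon, ref_dist=1.0, rolloff=1.0):
    """Return (relative angle in degrees, distance in metres, gain)
    for a fixed sound source, given the player's position and
    compass heading. Angle is -180..180, negative = to the left."""
    lat_m = 111_320.0                                   # metres per degree latitude
    lon_m = lat_m * math.cos(math.radians(player_lat))  # shrinks with latitude
    dx = (src_lon - player_lon) * lon_m                 # east offset
    dy = (src_lat - player_lat) * lat_m                 # north offset
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360    # 0 = due north
    rel_angle = (bearing - heading_deg + 180) % 360 - 180
    gain = ref_dist / max(dist, ref_dist) ** rolloff    # clamped inverse roll-off
    return rel_angle, dist, gain
```

Feeding `rel_angle` to a binaural panner and `gain` to the source's level, recomputed as the player moves and turns, is enough to make an invisible swing sit convincingly in one spot.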
Here’s a video of the gameplay which was recorded live. The soundtrack is binaural, so please use headphones! It shows different snippets of the game – the tutorial, a few kills, location specific sounds and a successful mission.
Below are a few elements of the soundtrack in isolation. Binaural content again, keep those headphones on!
SonicEchoes (the trapped sounds swimming around the environment):
Ambience (a tonal bed that responds to the player’s movements):
We got lots of interesting feedback from the many people who played the game, and most of them wished they had it on their smartphones. We put some statistics together (just because we could!) after the first preview, and here’s what we concluded:
I would be happy to break the Max patches and scripts down and show what we did. Maybe I will over the coming weeks.
and Roz Ford for the AI voice.
Made possible with the University of Edinburgh