Sound design is a bit like arranging and producing a song. It could be defined as the process of getting all the micro ingredients to work well together (to quote Tomlinson Holman: “Sound design is the art of getting the right sound in the right place at the right time”). The mix is when you cook those ingredients to fit the context. The line between the two can be a bit blurry.
It can get even blurrier with interactive, non-linear media. For any interactive and reactive medium (games, interactive installations, interactive apps), implementation is one of the most important steps. You might have the most awesome sound designed in your DAW, but does it feel right once it is implemented? Does the sound roll off and decay as intended? How quickly does it respond? How expressive is it? How repetitive is it? How can it be parametrised to give the user/player the impression that it reacts to their input? Maybe the answer lies in the way it interacts with the space and with the other sound-making objects around it. Or maybe it is the changes in its expression and how it alters over time (along different axes – pitch, volume, ADSR and space).
Trying to answer some of these questions might make it easier to understand what can be done offline (in the DAW) and what can be done at the implementation stage. Taking a step back and looking at the larger picture (the forest for the trees, THE FOREST for the trees) might help too.
We faced some of these questions while creating Meltdown. As a prototype project, it left us limited in time and resources on one hand, and with almost limitless creative options in Max/MSP on the other. While constantly asking ourselves questions (like the ones above) and working quickly, we forced ourselves to step back, look at the larger picture and use the implementation itself as a design process. The player had to believe that they were interacting with an environment they couldn’t see but only hear. In retrospect, treating the tools as design tools rather than technology made the most difference.
First and foremost, it was important for us to set up the environment. We decided that the player would be immersed in an environment where they would be surrounded by snippets of sound – almost as if the sounds of the environment were contained within a space and mashed across time (we called it the ‘SonicEchoes’). Orfeas recorded a lot of great material from the meadows – children playing, conversations between parents, traffic, sirens and everything else that can be heard at a children’s park. He edited the best moments and handed them over to me as two- to three-minute recordings of ambience. In Max, I built a random file playback system that reads random snippets from a list of sound files, in random order, across a range of randomised pitches. We later added a chance of the files being played back in reverse. The randomisation helped us use a small number of files without them sounding too repetitive. These chunks of audio were further delayed (on a variable delay line, to create pitch up/down effects) and granulated to add some chaos. We then used real-time spatialisation effects – reverb, distance roll-off, binaural panning and Doppler – to make it sound like the player was surrounded by a swarm of sounds. Since the system ran on a MacBook Pro, we didn’t have many limitations with processing (only the technical limitations of Max/MSP).
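The actual system was a Max/MSP patch, but a minimal sketch in Python may help illustrate the playback logic described above – random snippet selection, random order, randomised pitch and a chance of reverse playback. The file names and parameter ranges here are purely illustrative assumptions, not values from the patch.

```python
import random

# Illustrative sketch of the 'SonicEchoes' playback logic.
# The real system was a Max/MSP patch; file names and ranges are made up.

AMBIENCE_FILES = ["meadow_take1.wav", "meadow_take2.wav", "meadow_take3.wav"]

def random_snippet_event():
    """Pick a random snippet, pitch and direction for one playback event."""
    return {
        "file": random.choice(AMBIENCE_FILES),         # random file from the list
        "start_s": random.uniform(0.0, 150.0),         # random position within a ~2-3 min recording
        "length_s": random.uniform(0.5, 4.0),          # short snippet
        "pitch_semitones": random.uniform(-7.0, 7.0),  # randomised pitch shift
        "reverse": random.random() < 0.25,             # chance of reverse playback
    }

def schedule(num_events=10, max_gap_s=2.0):
    """Build a simple timeline of overlapping snippet events."""
    t = 0.0
    events = []
    for _ in range(num_events):
        event = random_snippet_event()
        event["at_s"] = round(t, 2)
        events.append(event)
        t += random.uniform(0.1, max_gap_s)            # irregular spacing keeps it from feeling looped
    return events

if __name__ == "__main__":
    for e in schedule():
        print(e)
```

In the patch, events like these were then sent through the variable delay, granulation and spatialisation stages rather than simply printed out.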
Would it have been possible without the implementation-design tricks? Yes. As effective? Probably not.
Here’s a sample. The second half is binaural, so make sure you have your headphones on! The clicks you hear are intentional, as there are moments in the game where the soundtrack gets drowned in static.
More examples in the next post!
Click on the images below for more detail.