The problem with interactive entertainment is that it is easy to either get carried away with the ‘wow’ factor of things or pile up so much material to grab the player/user’s attention that it can get very confusing. At the early stages of designing Meltdown, we tried our best to keep the concept and the soundscape very simple. We had enough to do on a technical level (the actual building of the sound engine and the logic system) that I did not want to overload us with grand features. Baby steps always, until our feet get bigger!
The general ambience of the game was a bit tricky. We needed something that was responsive (but not so responsive that it distracted the player), constant (but not irritating) and electronic/digital in nature (without sounding like a typical synth). The final patch consisted of a sound file player crossfading between two tonal files (composed in Logic), feeding through a convolution module (convolved with a creature/animal sample), a delay module, a cheap reverb module and a granulation module. The granulation and delay times (variable delay ramps create a pitch up/down effect) were controlled by the compass on the iPhone to add some reactiveness. The patch also had an additional module that crossfaded between various ‘creepy’ ambiences designed by Orfeas. Even with all this processing, our CPU load was more than manageable.
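Max patch aside, the delay-ramp trick is easy to sketch in code. Below is my own reconstruction in Python/numpy (not the actual patch, and all parameter values are made up): a linearly growing delay means the read position advances slower than real time, which transposes the signal down (a shrinking delay transposes it up).

```python
import numpy as np

def ramped_delay(x, ramp_rate, sr=48000):
    """Delay whose time grows linearly: d(t) = ramp_rate * t.
    The read position then advances at (1 - ramp_rate) x real time,
    so pitch drops by that factor."""
    t = np.arange(x.size) / sr
    return np.interp(t - ramp_rate * t, t, x)

sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)       # 440 Hz test tone
lowered = ramped_delay(tone, 0.1, sr)    # reads at 0.9x speed -> ~396 Hz
```

In the game, the compass heading would modulate `ramp_rate` (and the granulation parameters) so the soundscape bends as the player turns.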
Here’s a sample of what it sounded like:
The patch (click for a larger view):
Next post: the interactive mixing system.
A wormhole is detected.
Nothing is seen. Disturbances are felt. Things can be heard.
You hear creeptures amongst a swarm of sounds.
Find and kill all seven of them. You have ten minutes. You have your ears.
Meltdown is a location-based binaural sound game that was developed as a prototype to explore the use of location-based technologies and their influence on sound.
The game: a sonic rift and wormhole have been detected at a park, and their effects can be felt but not seen. If the creeptures aren’t killed, the infected area will become a sonic dead zone. You have the technology to listen to them amongst the sounds they have trapped in space. Eliminate them and return the area to normalcy.
The search-and-destroy concept of the game is not new. What makes it interesting is that the player interacts directly with the environment (within a fixed area). There is no screen. There is no typical game controller. Armed with just an iPhone (which serves as both weapon and scanner) and their ears, the player must walk and think like a hunter. On encountering a creepture they must binaurally locate it, bring it closer (using a gesture) and kill it by stabbing the iPhone in the correct direction. The game is immersive not only because it is binaural but also because it includes sounds from the environment. For example, the player might hear the swing moving (properly localised with the correct distance and angle calculations) but won’t see it moving. They might hear someone running past or a dog barking without seeing any of it, all while they interact with the environment and react to it with body and mind.
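Under the hood, that localisation boils down to reducing each fixed source to a distance and an angle relative to the player's facing. Here is my own sketch of the geometry in Python (flat x/y coordinates and a hypothetical function name; the game's actual calculations may differ):

```python
import math

def relative_source(player_xy, heading_deg, source_xy):
    """Distance and angle of a fixed sound source relative to the player.
    Heading and the returned angle are in degrees, clockwise, with 0 =
    straight ahead (+y axis); the angle is folded into [-180, 180)."""
    dx = source_xy[0] - player_xy[0]
    dy = source_xy[1] - player_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))         # world-frame bearing
    angle = (bearing - heading_deg + 180) % 360 - 180  # player-frame angle
    return distance, angle
```

A binaural renderer would then use `distance` for attenuation and filtering, and `angle` to pick or interpolate HRTFs.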
Here’s a video of the gameplay which was recorded live. The soundtrack is binaural, so please use headphones! It shows different snippets of the game – the tutorial, a few kills, location specific sounds and a successful mission.
Below are a few elements of the soundtrack in isolation. Binaural content again, keep those headphones on!
SonicEchoes (the trapped sounds swimming around the environment):
Ambience (a tonal bed that responds to the player’s movements):
We got lots of interesting feedback from a whole lot of people who played the game, and most of them wished they had it on their smartphones. We put some statistics together (just because we could!) after the first preview and here’s what was concluded:
I would be happy to break the Max patches and scripts down and show what we did. Maybe I will over the coming weeks.
and Roz Ford for the AI voice.
Made possible with the University of Edinburgh
When I mentioned KeyD on this post on designingsound.org, I did not expect so many downloads. It was a simple and quick app I had made for myself in Max.
For those who don’t know what I’m talking about, it is an app that allows the computer keyboard to be used as a MIDI interface, inspired by Logic’s caps-lock keyboard.
Because of such a great response I thought it deserved an update. Here’s what is new:
* New GUI with a ‘tighter’ layout (the old black and white made my eyes sore)
* New pitch bend wheel mapped to the [=] and [ _ ] keys with customisable glide time
* New MIDI channel selector (useful if using with a standalone sampler like Kontakt)
* Support for Kyma over OSC (OS X only)
* New MIDI out indicator
* Caps-lock enable/disable
* Support for Windows – note that this is not a stable version; it has a few bugs which I haven’t had the time to track down. If you are a Max user on Windows and would like to look at it, let me know.
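At its core, an app like this is just a lookup table from keys to MIDI note numbers plus a note-on/note-off sender. A minimal Python sketch of the common ASDF-row layout (an illustration only, not KeyD's actual Max implementation, and the exact key map may differ):

```python
# White keys on the home row, black keys on the row above, like
# Logic's caps-lock keyboard. Semitone offsets from the base note.
KEY_TO_SEMITONE = {
    'a': 0, 'w': 1, 's': 2, 'e': 3, 'd': 4, 'f': 5,
    't': 6, 'g': 7, 'y': 8, 'h': 9, 'u': 10, 'j': 11, 'k': 12,
}

def midi_note(key, base_note=60):
    """MIDI note number for a typed key (None if the key is unmapped)."""
    semis = KEY_TO_SEMITONE.get(key.lower())
    return None if semis is None else base_note + semis

def note_on(key, base_note=60, velocity=100, channel=0):
    """Build a raw MIDI note-on message (status, note, velocity)."""
    n = midi_note(key, base_note)
    return None if n is None else (0x90 | channel, n, velocity)
```

The channel argument is where a selector like KeyD's comes in, so the messages can target a standalone sampler such as Kontakt.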
More info and downloads here.
The export-audio-to-video options in Pro Tools and Logic rarely work well in my experience. I used to use iMovie to sync audio to video, but anyone who has come an inch close to even opening iMovie knows how ‘enjoyable’ the experience is. Fortunately, it’s really simple to do this in QuickTime Pro (version 7, not X). If you use Logic Pro, your copy of QuickTime 7 should automatically update to Pro.
- Open the .mov that needs its audio replaced in QT 7
- Open the new soundtrack in QT 7 (wav/aiff)
- Select the QT window with the .mov, go to Window > Show Movie Properties (Cmd+J)
- Select the audio track and hit delete (to get rid of the existing audio, skip this step if the .mov has no sound)
- Select the QT window with the new soundtrack
- Go to Edit > Select All (Cmd+A)
- Edit > Copy (Cmd+C)
- Select the QT window with the .mov
- Make sure the playhead is at the beginning of the clip (0:00:00)
- Edit > Add to Movie (Option+Cmd+V)
- Export the movie to whatever format you need it to be in with the new soundtrack. File > Export
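If you'd rather script it, the same swap can be done on the command line with ffmpeg (assuming you have it installed; filenames below are hypothetical). `-c:v copy` keeps the video stream untouched, so there's no re-encode:

```shell
replace_audio() {
    # $1 = source .mov, $2 = new soundtrack (wav/aiff), $3 = output file
    # -map 0:v:0 takes the video from the movie, -map 1:a:0 the new audio.
    ffmpeg -i "$1" -i "$2" -map 0:v:0 -map 1:a:0 -c:v copy -shortest "$3"
}

# Example: replace_audio scene.mov mix.wav scene_with_mix.mov
```

`-shortest` trims the output to the shorter of the two inputs, which avoids trailing silence or frozen video when the lengths don't match.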
It’s been about a month and a half since I started using Max/MSP and it’s been a whole lot of fun with many sound revelations. It’s also been responsible for the lack of time to do anything else!
After the ‘Design Toolbox – Flangers’ post I put together for designingsound.org, and further inspired by Steve Urban’s comment about trying to find something similar to Logic’s Signal Generator in Pro Tools, I decided to make an application in Max/MSP. What started off as an idea for a simple signal generator – which I planned to have ready in a day – ended up as a signal generator + mangling tool + recorder + ReWire application. I had to force myself to stop work on it (for now) because I would add a new feature/fix every time I played around with it – the pitfalls of being both user and creator. On the whole, it’s a very simple app. Ideally I would like to compile it into a VST/RTAS/AU plugin, but that is currently beyond my grasp. I primarily designed it to work ReWired with a DAW, although it can run standalone.
So, to cut to the chase:
What does it do?
It’s a signal generator that generates sine/triangle/square waves and white/pink noise. It also does sweeps anywhere in the audible frequency range over a custom time period with any of the three oscillators. Read more…
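The sweep part is straightforward to sketch outside Max: ramp the instantaneous frequency and integrate it into a running phase, which keeps the sweep continuous (no clicks). A small numpy illustration (my own sketch, not the app's actual implementation):

```python
import numpy as np

def sine_sweep(f_start, f_end, duration, sr=48000):
    """Linear sine sweep from f_start to f_end over `duration` seconds.
    Integrating frequency into phase keeps the waveform click-free."""
    n = int(duration * sr)
    freq = np.linspace(f_start, f_end, n)       # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / sr    # integrate freq -> phase
    return np.sin(phase)

sweep = sine_sweep(100, 1000, 1.0)  # one-second 100 Hz -> 1 kHz sweep
```

The same phase-accumulation approach drives triangle and square variants; only the waveshaping applied to the phase changes.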