A few months ago I had the privilege of recording and mixing an arrangement of Mozart’s ‘Sonata for Two Pianos in D major, K. 448’ for guitar quartet for my good friend Nick Humphrey. Somehow, he managed to get four of himself together for the video! It’s a great piece and a great arrangement. Check it out!
Arranged and performed by Nick Humphrey
Filmed by Duncan Cowles
Recorded at the Reid Concert Hall, Edinburgh
Gear: a couple of Schoeps, Neumann and Sontronics microphones. Recorded and mixed on an SSL AWS 900+.
In a slight change of flavour, I got myself involved in the making of an album.
G.A.Harry and I performed some improvised ambient-slow-motion music at The Institute Gallery in Edinburgh recently. We thought it turned out quite nice, so we chopped it up into an album for the world to listen to and consume for free. Official release page: Scntfc Rtns.
If ambient music is your thing, listen to it and let us know what you think. It’s managed to give me a few good nights of sleep! Share it if you like it!
Some of the sounds are from my upcoming weather-inspired ambient music app, which should be out in a few weeks.
Recorded live at The Institute Gallery, Edinburgh on 12 July 2012
Mastered by: G.A.Harry
The problem with interactive entertainment is that it is easy to either get carried away with the ‘wow’ factor of things or pile up so much material to grab the player/user’s attention that it can get very confusing. At the early stages of designing Meltdown, we tried our best to keep the concept and the soundscape very simple. We had enough to do on a technical level (the actual building of the sound engine and the logic system) that I did not want to overload us with grand features. Baby steps always, until our feet get bigger!
The general ambience of the game was a bit tricky. We needed something that was responsive (but not so responsive that it distracted the player), constant (but not irritating) and electronic/digital in nature (without sounding like a typical synth). The final patch consisted of a sound file player crossfading between two tonal files (composed in Logic) feeding through a convolution module (convolved with a creature/animal sample), a delay module, a cheap reverb module and a granulation module. The granulation and delay times (variable delay ramps create a pitch up/down effect) were controlled by the compass on the iPhone to create some amount of reactiveness. The patch also had an additional module that crossfaded between various ‘creepy’ ambiences designed by Orfeas. Even with all this processing, our CPU load was more than manageable.
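The actual patch was built in Max/MSP, but the two core control ideas can be sketched in a few lines of Python. This is a minimal approximation, not the real patch: the crossfade curve and the compass-to-delay mapping (including the delay range) are my assumptions.

```python
import math

def equal_power_crossfade(a, b, mix):
    """Blend two tonal beds at constant perceived loudness.
    mix = 0.0 -> all of a, mix = 1.0 -> all of b.
    (Assumed curve; the original Max patch may differ.)"""
    gain_a = math.cos(mix * math.pi / 2)
    gain_b = math.sin(mix * math.pi / 2)
    return [gain_a * x + gain_b * y for x, y in zip(a, b)]

def compass_to_delay_ms(heading_deg, min_ms=5.0, max_ms=50.0):
    """Map an iPhone compass heading (0-360 degrees) onto a delay time.
    Ramping the delay time up or down as the player turns is what
    produces the pitch up/down effect on a variable delay line.
    The 5-50 ms range here is illustrative, not taken from the game."""
    norm = (heading_deg % 360.0) / 360.0
    return min_ms + norm * (max_ms - min_ms)
```

The useful property of the equal-power curve is that `gain_a**2 + gain_b**2` stays at 1, so the blend never dips in level mid-crossfade.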
Here’s a sample of what it sounded like:
The patch (click for a larger view):
Next post: the interactive mixing system.
Sound design is a bit like arranging and producing a song. It could be defined as the process where you get all the micro ingredients to work well together (to quote Tomlinson Holman: “Sound design is the art of getting the right sound in the right place at the right time”). The mix is when you cook the ingredients to fit the context. The line can be a bit blurry.
It can get even more blurry with interactive/non-linear media. The implementation process for any interactive and reactive medium (games, interactive installations, interactive apps) is one of the most important steps. You might have the most awesome sound designed in your DAW, but does it feel right when it is implemented? Does the sound roll off and decay as intended? How quickly does it respond? How expressive is it? How repetitive is it? How can it be parametrised to give the user/player the impression that it is reactive to their input? Maybe the answer lies in the way it reacts with the space and with the other sound-making objects around it. Or maybe it is the changes in its expression and how it alters over time (over different axes – pitch, volume, ADSR and space).
Trying to answer some of these questions might make it easier to understand what can be done offline (in the DAW) and what can be done at the implementation stage. Taking a step back and looking at the larger picture (seeing the forest for the trees) might help too.
We faced some of these questions while creating Meltdown. Being a prototype project, we were limited by time and resources on one hand and had almost limitless creative options with Max/MSP on the other. While constantly asking ourselves questions (like the ones above) and working quickly, we forced ourselves to step back and look at the larger picture and use the implementation as a design process. The player had to believe that they were interacting with an environment they couldn’t see but only hear. In retrospect, treating the tools as design tools and not technology made the most difference.
First and foremost, it was important for us to set up the environment. We decided that the player would be immersed in an environment in which they would be surrounded by snippets of sounds – almost as if the sounds from the environment were contained within a space and mashed across time (we called it the ‘SonicEchoes’). Orfeas recorded a lot of great material from the Meadows – children playing, conversations between parents, traffic, sirens and everything else that can be heard at a children’s park. He edited the best moments and handed them over to me as two- or three-minute recordings of ambience. In Max, I built a random file playback system which works by reading random snippets off a list of sound files in a random order across a range of randomised pitches. We later added a chance of the files being played back in reverse. The randomisation helped us use a small number of files without it sounding too repetitive. These chunks of audio were further delayed (on a variable delay line to create pitch up/down effects) and granulated to add some chaos. We then used real-time spatialisation effects – reverb, distance roll-off, binaural panning and doppler – to make it sound like the player was surrounded by a swarm of sounds. Since the system ran on a MacBook Pro we really didn’t have many limitations with processing (only the technical limitations of Max/MSP).
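The snippet-selection logic of that random playback system could be sketched like this. This is a hedged reconstruction: the exact snippet lengths, pitch range and reverse probability in the real Max patch aren’t documented here, so the numbers below are illustrative placeholders.

```python
import random

def pick_snippet(files, max_len_s=3.0, reverse_chance=0.25, rng=random):
    """Choose one playback event from a pool of ambience recordings.

    files: list of (filename, duration_in_seconds) pairs.
    Returns a dict describing what to play: which file, where to start,
    how long, at what pitch, and whether to play it reversed.
    All ranges here are assumptions, not values from the game."""
    name, duration = rng.choice(files)
    length = rng.uniform(0.2, min(max_len_s, duration))
    start = rng.uniform(0.0, duration - length)
    pitch = rng.uniform(0.5, 2.0)              # up to one octave down/up
    reverse = rng.random() < reverse_chance    # occasional reversed playback
    return {"file": name, "start": start, "length": length,
            "pitch": pitch, "reverse": reverse}
```

Feeding a handful of two- or three-minute recordings through a picker like this is what lets a small file pool avoid sounding repetitive: every event differs in source position, length, pitch and direction.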
Would it have been possible without the implementation-design tricks? Yes. As effective? Probably not.
Here’s a sample. The second half is binaural, so make sure you have your headphones on! The clicks you hear are intentional, as there are moments in the game where the soundtrack gets drowned in static.
More examples in the next post!
Click on the images below for more detail.
A wormhole is detected.
Nothing is seen. Disturbances are felt. Things can be heard.
You hear creeptures amongst a swarm of sounds.
Find and kill all seven of them. You have ten minutes. You have your ears.
Meltdown is a location based binaural sound game that was developed as a prototype to explore the use of location based technologies and their influence on sound.
The game: a sonic rift and wormhole have been detected at a park, and their effects cannot be seen, only felt. If the creeptures aren’t killed, the infected area will become a sonic dead zone. You have the technology to listen for them amongst the sounds they have trapped in space. Eliminate them and return the area to normalcy.
The search-and-destroy concept of the game is not new. What makes it interesting is that the player interacts directly with the environment (within a fixed area). There is no screen. There is no typical game controller. Armed with just an iPhone (which is both weapon and scanner) and their ears, the player must walk and think like a hunter. On encountering a creepture they must binaurally locate it, bring it closer to them (using a gesture) and kill it by stabbing the iPhone in the correct direction. The game is immersive not only because it is binaural but also because it includes sounds from the environment. For example, the player might hear the swing moving (properly localised with the correct distance and angle calculations) but won’t see it moving. They might hear someone running past or a dog barking without seeing any of it, while they interact with the environment and react to it with their bodies and minds.
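The “correct distance and angle calculations” mentioned above could work roughly like this: take the player’s GPS position and compass heading, take the sound source’s fixed position, and derive a distance and a head-relative azimuth to feed the binaural panner. This is a sketch under assumptions – the game’s actual maths isn’t shown here – using a flat-earth (equirectangular) approximation, which is fine over a park-sized area.

```python
import math

def player_relative(player_lat, player_lon, heading_deg, src_lat, src_lon):
    """Distance (metres) and azimuth of a sound source relative to the
    direction the player is facing. Azimuth 0 = straight ahead,
    90 = to the right. Equirectangular approximation, valid only for
    small (park-sized) areas. Illustrative, not the game's actual code."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = math.radians(src_lat - player_lat)
    dlon = math.radians(src_lon - player_lon) * math.cos(math.radians(player_lat))
    distance = R * math.hypot(dlat, dlon)
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0  # from true north
    azimuth = (bearing - heading_deg) % 360.0               # head-relative
    return distance, azimuth
```

The azimuth drives the binaural panning while the distance drives roll-off and reverb send, so a swing fixed in the park stays anchored in space as the player turns and walks.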
Here’s a video of the gameplay which was recorded live. The soundtrack is binaural, so please use headphones! It shows different snippets of the game – the tutorial, a few kills, location specific sounds and a successful mission.
Below are a few elements of the soundtrack in isolation. Binaural content again, keep those headphones on!
SonicEchoes (the trapped sounds swimming around the environment):
Ambience (a tonal bed that responds to the player’s movements):
We got lots of interesting feedback from a whole lot of people who played the game and most of them wished they had it on their smart phones. We put some statistics together (just because we could!) after the first preview and here’s what was concluded:
I would be happy to break the Max patches and scripts down and show what we did. Maybe I will over the coming weeks.
and Roz Ford for the AI voice.
Made possible with the University of Edinburgh
I’m sure you must have seen the new Audi e-sound for e-tron video that has been doing its rounds on the interweb.
I am curious about the technology and implementation techniques used. Is the sound sample based or procedural? Does it alert the driver if there is something wrong with the car? How many speakers does it take to make a realistic experience? Could noise pollution be controlled if there are more speakers in front (on the outside), compared to having them all around? How much of the Audi e-tron sound was inspired by previous Audi engines and the Audi brand?
Of course, there are questions about the chaos (and noise) such customisation could create in the future. Every new technology arrives with so many uncertainties!
Coincidentally, Andy Farnell and I had this discussion a few months ago, while I interviewed him for designingsound.org:
With cars, another place where procedural technology is very powerful is where you want the sound to encode a large vector of changing parameters. Why is it useful to have a sound on a car? By listening to a car engine I can tell a lot about it – is it slowing down, speeding up, is it a large car or small car. I can localise it pretty well. So to replace a completely silent car engine what you want is a procedural sound object which behaves like the car (that is familiar to people’s expectations vis-à-vis reality – and hence safety) with engine, with tyre sounds, with exhaust simulation to delineate rear and front approach. In fact you could encode all kinds of other information about the car as a safety feature which people would quite quickly get used to. If it is a bus – it could be a bigger noise; if it is a bike it’s got a lighter sound. That would be difficult to do with a sample. So the procedural object would be more versatile and able to encode more information. That would be argument number two. Argument number three might be that to develop a library of a thousand different car engines would be very expensive. But once procedural audio technologies mature I should be able to buy an engine model as a piece of software and adapt it – I could commission it as a one-off piece of software or buy it on a license, put it into my product and I have all the versatility of it.
Back in November I made a prototype of a prototype of a partly procedural car engine. The only samples used are the ignition sounds. The rest is made up of noise, sine tones, some wave shaping, modulation and FFT. I need to refine the model and get it to sound better some time soon (a few glaring imperfections need fixing). I wouldn’t call it procedural in the truest sense, as it uses shallow techniques that are based more on sound design principles than the physics of a car. Whatever works. The acceleration slider was controlled with a MIDI controller. What do you think of it?
The sound was inspired by this post.
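For a feel of what “shallow” sine-plus-noise-plus-waveshaping techniques look like, here is a minimal single-sample sketch of that kind of engine tone. It is not my Max patch: the harmonic weights, noise level and drive amount are invented, and a real model would also need the modulation and FFT layers mentioned above.

```python
import math
import random

def engine_sample(t, rpm, rng=random):
    """One mono sample of a crude engine tone at time t (seconds).

    Firing frequency for a 4-cylinder, 4-stroke engine is
    rpm / 60 * 2 firings per second. Everything below is a sound-design
    approximation, not car physics."""
    f0 = rpm / 60.0 * 2.0
    # A few harmonics of the firing frequency, falling off in level:
    s = sum(math.sin(2 * math.pi * f0 * k * t) / k for k in (1, 2, 3, 4))
    # Broadband noise standing in for intake/exhaust turbulence:
    s += 0.1 * (rng.random() * 2.0 - 1.0)
    # Soft-clip wave shaping adds the grit and keeps the output bounded:
    return math.tanh(1.5 * s)
```

Mapping an acceleration slider to `rpm` (plus some smoothing so the pitch glides rather than jumps) already gets you a recognisable rev-up/rev-down behaviour.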
Five days until we perform at the Glasgow Film Fest and we are on schedule. My Max patch seems to be working well and I couldn’t stop after playing with it for two hours yesterday = a good sign! I can’t wait to release it as a downloadable app and see what people use it for.
Max/MSP aside, this post is about the content – field recordings, voice and music.
In a little over two weeks we will be performing a film at the Glasgow Film Fest – improvised sound, music and video to make a film in real-time. I outlined the project in a previous post and starting with this one (and another three or four that will follow) I will collect and share my ideas behind this project – creative limitations, Max patches, hardware interfacing, field recording, communication, video and the performance itself.
A project such as this is not without problems:
- Real-time: The project requires us to trigger and process sound and video in real time. Non-linear DAWs (Pro Tools, Logic, Nuendo, etc.) would be useless simply because they aren’t built for something like this. Other performance-centric solutions (like Ableton Live or Resolume Avenue) might work, but we would still be tied down to their architecture. Why not create custom audio, video and performance solutions in Max/MSP with specific functionality? Rebuilding a different version of Ableton Live (with a gazillion features) is not only difficult, but impractical and stupid.
- Balance: There needs to be a fine balance between creativity, purpose and technology. Getting carried away with technology is not cool – we aren’t building tools and performing to a Max/MSP convention of geeks. It will be an audience made of curious people who might not care if I’m granulating a sound into a million grains with variable pitch followed by an auto filter and the most awesome reverb ever known to mankind! The only thing they will take away is the emotional impact and experience.
- Technology: While technology is our solution, it can also be our biggest problem. In creating custom software we not only have to make sure it works but also that it works well. Time has to be invested in making sure it is stable and that it doesn’t crash every five minutes!
- Communication: This project reminds me of when I used to play in a band. A lot of improvisation is based on trust and giving the other people in the group a chance to take center stage (or not to). Silence isn’t a bad thing. The music, sound effects, voice and video must find their own space and form dynamic and resonating relationships (just like in every other form of audio-visual media).
- Logistics: Equipment type, content type, equipment reliability, ease of use and communication between performing members (how do we know when to stop or bring the piece to an end?) are all important in making sure everything works well.
- Unpredictability: Even with all this thinking and planning, there will be surprises.
When I mentioned KeyD in this post on designingsound.org, I did not expect so many downloads. It was a simple and quick app I had made for myself in Max.
For those who don’t know what I’m talking about, it is an app that allows the computer keyboard to be used as a MIDI interface, inspired by Logic’s caps-lock keyboard.
Because of such a great response I thought it deserved an update. Here’s what is new:
* New GUI with a ‘tighter’ layout (the old black and white made my eyes sore)
* New pitch bend wheel mapped to the [=] and [ _ ] keys with customisable glide time
* New MIDI channel selector (useful if using with a standalone sampler like Kontakt)
* Support for Kyma over OSC (OS X only)
* New MIDI out indicator
* Caps-lock enable/disable
* Support for Windows – this is not a stable version. It has a few bugs which I haven’t had the time to track down. If you are a Max user on Windows and would like to look at it, let me know.
More info and downloads here.
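The core idea behind an app like this is just a lookup from key presses to MIDI note numbers. A hypothetical mapping in the spirit of Logic’s caps-lock keyboard might look like the sketch below; the key layout and function name are my own illustration, not KeyD’s actual implementation.

```python
# Home-row keys mapped chromatically, C up to the C an octave above,
# in the piano-like layout Logic's caps-lock keyboard uses
# (assumed layout for illustration):
KEY_ROW = "awsedftgyhujk"

def key_to_midi_note(char, octave=4):
    """Return the MIDI note number for a key press, or None if the key
    is unmapped. With octave=4, 'a' maps to middle C (MIDI note 60)."""
    idx = KEY_ROW.find(char.lower())
    if idx < 0:
        return None
    return 12 * (octave + 1) + idx
```

From there, an octave-shift key just changes the `octave` argument, and the pitch-bend wheel is a separate controller message rather than a note.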
About two months ago, Gervais Harry, Chris Prescott, Fiona Keenan and I worked together on an improvised sound for film project using tools built in Max. We took the performance a bit further (thanks to Gervais’ trickery in Jitter) and created an improvised film – where not only was the soundtrack improvised but the video too. The video was triggered and ‘edited’ on the spot. We would respond to Gervais’ video cuts with sound and he in turn would respond to the sound through further video edits and visual effects.
The result was something unpredictable, unexpected and completely improvised, since there were no rehearsals or pre-determined plan. It was interesting as the traditional boundaries of filmmaking were broken and it was a true collaboration between visuals and sound. It was a success, and we were invited to perform at the Glasgow Film Fest.
We will have to create a 15-minute improvised film themed ‘Glasgow: A symphony of a city’.
How will it work?
We are collaborating with a film maker, Susan Kemp, who will shoot a variety of footage in Glasgow for us. Additionally, we will record sounds in Glasgow, including Glasgow-related poetry written by Fiona Rintoul.
Gervais will control the video side of things (and some sound design), while Chris and I will control the design of the soundtrack. We will have no rehearsals with any of the video and recorded sound material. Our first performance at the festival will be our first attempt. Exciting!
We have built tools in Max/MSP which not only make all of this possible but also allow us to communicate and share data with each other as we perform – automatically influencing each other’s tools. Over the next three weeks Gervais and I will blog about the tools and processes we will be using to make this performance possible.
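That data-sharing between tools could work in many ways; one simple scheme is for each performer’s patch to broadcast small control messages (say, a granulator’s grain size, or a video cut trigger) over the local network. The sketch below shows such a message format in Python – the JSON-over-UDP scheme, parameter names and function names are my assumptions, not a description of our actual Max networking.

```python
import json
import socket

def send_control(sock, addr, name, value):
    """Send one control value to another performer's patch as a small
    JSON datagram, e.g. send_control(sock, peer, "grain_size", 0.5).
    (Illustrative message format, not the real patches' protocol.)"""
    payload = json.dumps({"param": name, "value": value}).encode("utf-8")
    sock.sendto(payload, addr)

def parse_control(datagram):
    """Decode an incoming control datagram into (parameter, value)."""
    msg = json.loads(datagram.decode("utf-8"))
    return msg["param"], msg["value"]
```

The receiving patch would map each parameter name onto one of its own controls, which is all it takes for one performer’s gesture to automatically influence another’s tool.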