

Unity: Controlling game elements with sound

On 14, Dec 2013 | 5 Comments | In Software, Sound Design, Work | By Varun Nair

I’ve been playing around with Unity 3D in my spare time. Verdict? Lots of fun! Thankfully I’ve found it easy to understand because of the many years I spent as a teenager watching my brother work in 3D Studio Max and Maya.

The audio side of Unity is relatively easy (and therefore limited) if you have previous experience in game audio. Getting both the visuals and audio to work is straightforward if you have any experience with object-oriented programming. Thankfully C# (I’m no good at JavaScript) is similar to Java and C++, both of which I have been getting familiar with over the past year.

With game audio we often come across a one-sided process: the game engine feeds the audio engine data and the audio engine outputs sound. It isn’t very often that we see the opposite happening. In Unity (Pro only), this was made easier from version 3.5 onwards with the OnAudioFilterRead callback. OnAudioFilterRead is meant for creating custom filters, but it can just as well be used to control other game elements. If you don’t have Unity Pro, it is worth downloading the trial and giving it a go.

This post is a quick and simple recipe to control the intensity of a light with sound, but the principles can very easily be expanded to anything else in game.

Unity

Step1: Set up a scene in Unity
Step2: Attach an audio source and a light to an object. Attach a sound file to the audio source component.
Step3: Create a new script for this object
Step4: Use a smoothing filter to analyse the amplitude of the signal
Step5: Map the amplitude value to the intensity of the light
Step6: TA-DA!
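
For reference, here is a minimal sketch of what the script from Steps 3 to 5 could look like. The component wiring, smoothing coefficient and sensitivity value are placeholders of mine, not a definitive implementation:

```csharp
using UnityEngine;

// Attach this to the object that carries the AudioSource, and point it
// at a Light. A minimal sketch: the coefficient and sensitivity values
// are placeholders.
[RequireComponent(typeof(AudioSource))]
public class SoundToLight : MonoBehaviour
{
    public Light targetLight;        // assign the light in the inspector
    public float sensitivity = 8f;   // scales amplitude to light intensity
    public float smoothing = 0.05f;  // one-pole smoothing coefficient (0..1)

    private float smoothedAmplitude; // written on the audio thread, read in Update

    // Unity calls this on the audio thread for every block of samples
    // (Unity Pro only, in the versions this post refers to).
    void OnAudioFilterRead(float[] data, int channels)
    {
        for (int i = 0; i < data.Length; i++)
        {
            // One-pole smoothing filter following the rectified signal.
            float abs = Mathf.Abs(data[i]);
            smoothedAmplitude += smoothing * (abs - smoothedAmplitude);
        }
        // The samples are left untouched, so the audio passes through unmodified.
    }

    void Update()
    {
        // Map the smoothed amplitude to light intensity on the main thread.
        targetLight.intensity = smoothedAmplitude * sensitivity;
    }
}
```

A plain float field written on the audio thread and read in Update is good enough for a sketch like this, since float reads and writes are atomic in C#.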

Step1

(If you are familiar with Unity you can skip to Step3)

Create a new Unity project. Create a cube object to use as the floor of the scene (Game Object > Create Other > Cube). With the cube selected, use the inspector (the tab on the right, by default) to scale the dimensions of the cube: X: 30, Y: 0.1, Z: 30.

Create a sphere (Game Object > Create Other > Sphere). Change its Y position value to 2 in the inspector.

Step2

Read more…



Design Cross-pollination

On 01, Jul 2013 | No Comments | In Thoughts | By Varun Nair

Sound Design. Two words.

I often wonder if I spend too much time thinking of the first word instead of the second. It feels easy, even indulgent, to get lost in the world of sound – perfecting microsecond fades, taming that low end, dialling in the perfect attack and release times or tweaking room models. This rabbit hole gets wider when working with software like Max or Pd, and more so with real code. The possibilities are endless, given the time, effort and learning – which leads to an even bigger rabbit black hole. So much to learn! So much to perfect! ‘Perfection’ seems to be a never-ending loop, that delay line reaching almost infinite feedback the more you try and learn.

I have spent the past seven years completely submerged in sound, and for the first time ever I have spent the last nine months juggling a mix of sound and non-sound related work: designing dance games for children (in the real world, not on computers), coding user interfaces and generative visuals, designing reactive video installations and dabbling in code for mobile apps. Not much sound on that list. I often wonder if I am on the border of “Jack of all trades, master of none”.

Spending my energy and time away from sound somehow seems to make me a better sound designer (in my opinion, based on the standards I set for myself). I fuss less about the details and more about the impact of the design. I seem to be listening more to what people get from the design, while taking the blow to my ego and realising how often I can be off the mark with the first draft. Bouncing ideas around and getting feedback can be easier when working in a team, but extremely difficult when working alone. I have come to value friends who are brutal with their feedback.

I seem to have reached a sort of Zen-like conclusion about the different kinds of work I have been doing. They have so much in common and seem to cross-pollinate and influence each other. Most importantly, they seem to have the same outcome: reaching out to people and communicating with them through different mediums. The microphones, DAWs, lines of code, Photoshop layers, mobile devices and synthesised sound waves are nothing more than tools that help forge the words of a language.

Design seems to be more about understanding human psychology and perception. Don’t get me wrong, those microsecond fades, computer algorithms and learning are important, but only second to the experience they help create.



Some Mozart!

On 17, Sep 2012 | One Comment | In Recording | By Varun Nair

A few months ago I had the privilege of recording and mixing an arrangement of Mozart’s ‘Sonata for Two Pianos in D major, K. 448’ for a guitar quartet for my good friend Nick Humphrey. Somehow, he managed to get four of himself together for the video! It’s a great piece and a great arrangement. Check it out!

 

Arranged and performed by Nick Humphrey
Filmed by Duncan Cowles
Recorded at the Reid Concert Hall, Edinburgh
Gear: A couple of Schoeps, Neumann and Sontronics microphones. Recorded and mixed on an SSL AWS 900+



dusk_drizzle_light

On 02, Aug 2012 | No Comments | In Sound Design, Work | By Varun Nair

In a slight change of flavour, I got myself involved in the making of an album.

G.A.Harry and I performed some improvised ambient-slow-motion music at The Institute Gallery in Edinburgh recently. We thought it turned out quite nice, so we chopped it up into an album for the world to listen to and consume for free. Official release page: Scntfc Rtns.

If ambient music is your thing, listen to it and let us know what you think. It’s managed to give me a few good nights of sleep! Share it if you like it!

Some of the sounds are from my upcoming weather-inspired ambient music app, which should be out in a few weeks.

Recorded live at The Institute Gallery, Edinburgh on 12 July 2012

Mastered by: G.A.Harry

 

 



Implementation is Design Pt.2 – Ambience

On 19, Jul 2012 | No Comments | In Software, Sound Design, Work | By Varun Nair

The problem with interactive entertainment is that it is easy either to get carried away with the ‘wow’ factor of things or to pile up so much material to grab the player/user’s attention that it all gets very confusing. At the early stages of designing Meltdown, we tried our best to keep the concept and the soundscape very simple. We had enough to do on a technical level (the actual building of the sound engine and the logic system) that I did not want to overload us with grand features. Baby steps always, until our feet get bigger!

The general ambience of the game was a bit tricky. We needed something that was responsive (but not so responsive that it distracted the player), constant (but not irritating) and electronic/digital in nature (without sounding like a typical synth). The final patch consisted of a sound file player crossfading between two tonal files (composed in Logic), feeding through a convolution module (convolved with a creature/animal sample), a delay module, a cheap reverb module and a granulation module. The granulation and delay times (variable delay ramps create a pitch up/down effect) were controlled by the compass on the iPhone to add some reactiveness. The patch also had an additional module that crossfaded between various ‘creepy’ ambiences designed by Orfeas. Even with all this processing, our CPU load was more than manageable.
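
For the curious, here is a rough C# sketch of the variable delay ramp trick (not the original Max patch; the names and ramp handling are mine). Moving the read position of a delay line effectively resamples the signal, which is heard as a pitch shift up (shrinking delay) or down (growing delay):

```csharp
// A fractional delay line whose delay time can be ramped over time.
// Keep the target delay below the buffer length.
public class VariableDelay
{
    private readonly float[] buffer;
    private int writeIndex;
    private float delaySamples;   // current delay, stepped once per sample
    private float targetDelay;    // where the ramp is heading
    private float rampStep;       // per-sample increment towards the target

    public VariableDelay(int maxSamples)
    {
        buffer = new float[maxSamples];
    }

    // e.g. map the iPhone compass heading to a target delay time
    public void SetTargetDelay(float samples, float rampLengthSamples)
    {
        targetDelay = samples;
        rampStep = (targetDelay - delaySamples) / rampLengthSamples;
    }

    public float Process(float input)
    {
        buffer[writeIndex] = input;

        // Ramp the delay time; while it moves, the output is re-pitched.
        if ((rampStep > 0f && delaySamples < targetDelay) ||
            (rampStep < 0f && delaySamples > targetDelay))
            delaySamples += rampStep;

        // Linear-interpolated read behind the write head.
        float readPos = writeIndex - delaySamples;
        if (readPos < 0f) readPos += buffer.Length;
        int i0 = (int)readPos;
        int i1 = (i0 + 1) % buffer.Length;
        float frac = readPos - i0;
        float output = buffer[i0] * (1f - frac) + buffer[i1] * frac;

        writeIndex = (writeIndex + 1) % buffer.Length;
        return output;
    }
}
```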

Here’s a sample of what it sounded like:

 

The patch (click for a larger view):

Next post: the interactive mixing system.



Implementation is Design Pt.1

On 25, Jun 2012 | No Comments | In Sound Design, Work | By Varun Nair

Sound design is a bit like arranging and producing a song. It could be defined as the process where you get all the micro ingredients to work well together (to quote Tomlinson Holman: “Sound design is the art of getting the right sound in the right place at the right time”). The mix is when you cook the ingredients to fit the context. The line can be a bit blurry.

It can get even blurrier with interactive/non-linear mediums. The implementation process for any interactive and reactive medium (games, interactive installations, interactive apps) is one of the most important steps. You might have the most awesome sound designed in your DAW, but does it feel right when it is implemented? Does the sound roll off and decay as intended? How quickly does it respond? How expressive is it? How repetitive is it? How can it be parametrised to give the user/player the impression that it is reactive to their input? Maybe the answer lies in the way it reacts with the space and with the other sound-making objects around it. Or maybe it is the changes in its expression and how it alters over time (along different axes – pitch, volume, ADSR and space).

Trying to answer some of these questions might make it easier to understand what can be done offline (in the DAW) and what can be done at the implementation stage. Taking a step back and looking at the larger picture (the forest for the trees, THE FOREST for the trees) might help too.

We faced some of these questions while creating Meltdown. Being a prototype project, we were limited by time and resources on one hand and had almost limitless creative options with Max/MSP on the other. While constantly asking ourselves questions (like the ones above) and working quickly, we forced ourselves to step back and look at the larger picture and use the implementation as a design process. The player had to believe that they were interacting with an environment they couldn’t see but only hear. In retrospect, treating the tools as design tools and not technology made the most difference.

First and foremost, it was important for us to set up the environment. We decided that the player would be immersed in an environment in which they would be surrounded by snippets of sounds – almost as if the sounds from the environment were contained within a space and mashed across time (we called it ‘SonicEchoes’). Orfeas recorded a lot of great material from the Meadows – children playing, conversations between parents, traffic, sirens and everything else that can be heard at a children’s park. He edited the best moments and handed them over to me as two- or three-minute recordings of ambience. In Max, I built a random file playback system which reads random snippets off a list of sound files, in a random order, across a range of randomised pitches. We later added a chance of the files being played back in reverse. The randomisation helped us use a small number of files without it sounding too repetitive. These chunks of audio were further delayed (on a variable delay line to create pitch up/down effects) and granulated to add some chaos. We then used real-time spatialisation effects – reverb, distance roll-off, binaural panning and Doppler – to make it sound like the player was surrounded by a swarm of sounds. Since the system ran on a MacBook Pro we really didn’t have many limitations with processing (only the technical limitations of Max/MSP).
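
As a rough idea of the randomisation involved, here is a C# sketch of the snippet selection logic (the original was a Max patch; every range below is a guess of mine, not a value from the patch):

```csharp
using System;
using System.Collections.Generic;

// Parameters for one randomised snippet of a source recording.
public struct Snippet
{
    public string File;
    public double StartSec, LengthSec, Pitch;
    public bool Reversed;
}

public class SonicEchoes
{
    private readonly List<string> files;
    private readonly Random rng = new Random();

    public SonicEchoes(List<string> soundFiles) { files = soundFiles; }

    public Snippet Next()
    {
        return new Snippet
        {
            File = files[rng.Next(files.Count)],       // random file off the list
            StartSec = rng.NextDouble() * 120.0,       // random offset into a ~2 min recording
            LengthSec = 0.2 + rng.NextDouble() * 1.8,  // short chunk, 0.2 to 2 seconds
            Pitch = 0.75 + rng.NextDouble() * 0.5,     // roughly ±25% pitch
            Reversed = rng.NextDouble() < 0.2          // occasional reverse playback
        };
    }
}
```

A handful of source files put through this kind of randomisation (plus the delays, granulation and spatialisation) goes a surprisingly long way before repetition becomes audible.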

Would it have been possible without the implementation-design tricks? Yes. As effective? Probably not.

Here’s a sample. The second half is binaural, so make sure you have your headphones on! The clicks you hear are intentional, as there are moments in the game where the soundtrack gets drowned in static.


SonicEchoes – Before/after on SoundCloud 

More examples in the next post!

Click on the images below for more detail.

 



Meltdown – A Binaural Sound Game

On 28, May 2012 | No Comments | In Software, Sound Design, Work | By Varun Nair

A wormhole is detected.

Nothing is seen. Disturbances are felt. Things can be heard.

You hear creeptures amongst a swarm of sounds.

Find and kill all seven of them. You have ten minutes. You have your ears.

Meltdown is a location-based binaural sound game that was developed as a prototype to explore the use of location-based technologies and their influence on sound.

The game: a sonic rift and wormhole have been detected at a park, and the effects cannot be seen, only felt. If the creeptures aren’t killed, the infected area will become a sonic dead zone. You have the technology to listen to them amongst the sounds they have trapped in space. Eliminate them and return the area to normalcy.

The search-and-destroy concept of the game is not new. What makes it interesting is that the player interacts directly with the environment (within a fixed area). There is no screen. There is no typical game controller. Armed with just an iPhone (which is both a weapon and a scanner) and their ears, they must walk and think like a hunter. On encountering a creepture they must binaurally locate it, bring it closer to them (using a gesture) and kill it by stabbing the iPhone in the correct direction. The game is immersive not only because it is binaural but also because it includes sounds from the environment. For example, the player might hear the swing moving (properly localised with the correct distance and angle calculations) but won’t see it moving. They might hear someone running across or a dog barking without seeing any of it, while they interact with the environment and react to it with their bodies and minds.

The game has limitations in its current form. GPS accuracy isn’t great (although we found workarounds). Being a prototype that was developed in little time, it does not run natively on the iPhone. Instead, the iPhone communicates with a computer running Max/MSP. The game was completely developed in Max/MSP (and JavaScript). We built most of the systems from the ground up – interactive sound players, interactive mixer, synthesis modules, granulated file playback systems, dialogue system, gesture identification scripts, location and binaural angle calculators, etc. The binaural processing was made possible (thankfully) with IRCAM’s Spat family of objects. Max has its limitations, although it is fantastic as a prototyping system. Given the time (and budget) it would be great to develop this as a native app (that could be played regardless of the location) and have the freedom to make it sound better, with varying layers and levels of complexity and better gameplay.
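
To make the localisation idea concrete, here is a sketch of the sort of distance and bearing maths such a game needs (this is standard haversine/bearing geometry, not the original Max/JavaScript code):

```csharp
using System;

public static class GeoAudio
{
    const double EarthRadiusM = 6371000.0;
    static double Rad(double deg) => deg * Math.PI / 180.0;

    // Haversine distance in metres between two lat/lon points,
    // used to drive distance roll-off.
    public static double DistanceM(double lat1, double lon1,
                                   double lat2, double lon2)
    {
        double dLat = Rad(lat2 - lat1), dLon = Rad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(Rad(lat1)) * Math.Cos(Rad(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * EarthRadiusM * Math.Asin(Math.Sqrt(a));
    }

    // Bearing from the player to the source, relative to the compass
    // heading, in degrees (-180..180): what a binaural panner wants.
    public static double RelativeBearingDeg(double lat1, double lon1,
                                            double lat2, double lon2,
                                            double headingDeg)
    {
        double y = Math.Sin(Rad(lon2 - lon1)) * Math.Cos(Rad(lat2));
        double x = Math.Cos(Rad(lat1)) * Math.Sin(Rad(lat2)) -
                   Math.Sin(Rad(lat1)) * Math.Cos(Rad(lat2)) *
                   Math.Cos(Rad(lon2 - lon1));
        double bearingDeg = Math.Atan2(y, x) * 180.0 / Math.PI; // 0 = north
        return (bearingDeg - headingDeg + 540.0) % 360.0 - 180.0;
    }
}
```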

Read more…



Partly Procedural Engine Model

On 13, Apr 2012 | 2 Comments | In Sound Design | By Varun Nair

I’m sure you must have seen the new Audi e-sound for e-tron video that has been doing the rounds on the interweb.

I am curious about the technology and implementation techniques used. Is the sound sample-based or procedural? Does it alert the driver if there is something wrong with the car? How many speakers does it take to create a realistic experience? Could noise pollution be controlled if there were more speakers in front (on the outside), compared to having them all around? How much of the Audi e-tron sound was inspired by previous Audi engines and the Audi brand?

Of course, there are questions about the chaos (and noise) such customisation could create in the future. Every new technology arrives with so many uncertainties!

Coincidentally, Andy Farnell and I had this discussion a few months ago, while I interviewed him for designingsound.org:

With cars, another place where procedural technology is very powerful is where you want the sound to encode a large vector of changing parameters. Why is it useful to have a sound on a car? By listening to a car engine I can tell a lot about it – is it slowing down, speeding up, is it a large car or a small car. I can localise it pretty well. So to replace a completely silent car engine what you want is a procedural sound object which behaves like the car (that is familiar to people’s expectations vis-à-vis reality – and hence safety), with engine, with tyre sounds, with exhaust simulation to delineate rear and front approach. In fact you could encode all kinds of other information about the car as a safety feature which people would quite quickly get used to. If it is a bus it could be a bigger noise; if it is a bike it’s got a lighter sound. That would be difficult to do with a sample. So the procedural object would be more versatile and able to encode more information. That would be argument number two. Argument number three might be that to develop a library of a thousand different car engines would be very expensive. But once procedural audio technologies mature I should be able to buy an engine model as one piece of software and adapt it – I could commission it as a one-off piece of software or buy it on a licence, put it into my product and have all the versatility of it.

Back in November I made a prototype of a prototype of a partly procedural car engine. The only samples used are the ignition sounds. The rest is made up of noise, sine tones, some wave shaping, modulation and FFT. I need to refine the model and get it to sound better some time soon (a few glaring imperfections need fixing). I wouldn’t call it procedural in the truest sense, as it uses shallow techniques that are based more on sound design principles than the physics of a car. Whatever works. The acceleration slider was controlled with a MIDI controller. What do you think of it?
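
In that spirit, here is a toy C# sketch of the ‘shallow’ approach (sine harmonics of the firing rate plus filtered noise); the parameter values are guesses of mine, not the ones from the patch in the video:

```csharp
using System;

// A toy engine tone: harmonics of the firing rate plus low-passed noise.
public class ToyEngine
{
    const double SampleRate = 44100.0;
    double phase;
    double noiseState;
    readonly Random rng = new Random();

    // accel: 0..1, e.g. driven by a MIDI controller slider.
    public double NextSample(double accel)
    {
        double rpm = 800.0 + accel * 5200.0;   // idle up to redline
        double firingHz = rpm / 60.0 * 2.0;    // four-stroke, four-cylinder firing rate
        phase += firingHz / SampleRate;
        if (phase >= 1.0) phase -= 1.0;

        // A few harmonics of the firing rate stand in for the engine tone.
        double w = 2.0 * Math.PI * phase;
        double tone = Math.Sin(w) + 0.5 * Math.Sin(2.0 * w) + 0.25 * Math.Sin(3.0 * w);

        // One-pole low-passed noise stands in for intake/exhaust roar,
        // getting louder with acceleration.
        double noise = rng.NextDouble() * 2.0 - 1.0;
        noiseState += 0.02 * (noise - noiseState);

        return 0.25 * tone + (0.1 + 0.3 * accel) * noiseState;
    }
}
```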

The sound was inspired by this post.

Partly Procedural Engine Model from Varun Nair on Vimeo.



GFF: Content – Working with restrictions

On 13, Feb 2012 | No Comments | In Field Recording, Work | By Varun Nair

Five days until we perform at the Glasgow Film Fest and we are on schedule. My Max patch seems to be working well – I couldn’t stop playing with it for two hours yesterday, which is a good sign! I can’t wait to release it as a downloadable app and see what people use it for.

Max/MSP aside, this post is about the content – field recordings, voice and music.

Field Recordings

 

As I mentioned in the previous post, Gervais and I spent two days recording in Glasgow. We primarily followed the trail of the poem, which covered these locations:

    Read more…



GFF: Content. Tools. Performance.

On 05, Feb 2012 | One Comment | In Thoughts, Tools, Work | By Varun Nair

In a little over two weeks we will be performing a film at the Glasgow Film Fest – improvised sound, music and video making a film in real time. I outlined the project in a previous post, and starting with this one (and another three or four that will follow) I will collect and share the ideas behind this project – creative limitations, Max patches, hardware interfacing, field recording, communication, video and the performance itself.

A project such as this is not without problems:

  • Real-time: The project requires us to trigger and process sound and video in real time. A non-linear DAW (Pro Tools, Logic, Nuendo, etc.) would be useless simply because it isn’t built for something like this. Other performance-centric solutions (like Ableton Live or Resolume Avenue) might work, but we would still be tied down to their architecture. Why not create custom audio, video and performance solutions in Max/MSP with specific functionality? Rebuilding a different version of Ableton Live (with a gazillion features) is not only difficult, but impractical and stupid.
  • Balance: There needs to be a fine balance between creativity, purpose and technology. Getting carried away with technology is not cool – we aren’t building tools and performing to a Max/MSP convention of geeks. It will be an audience made of curious people who might not care if I’m granulating a sound into a million grains with variable pitch followed by an auto filter and the most awesome reverb ever known to mankind! The only thing they will take away is the emotional impact and experience.
  • Technology: While technology is our solution, it can also be our biggest problem. In creating custom software we not only have to make sure it works but also that it works well. Time has to be invested in making sure it is stable and that it doesn’t crash every five minutes!
  • Communication: This project reminds me of when I used to play in a band. A lot of improvisation is based on trust and giving the other people in the group a chance to take centre stage (or not to). Silence isn’t a bad thing. The music, sound effects, voice and video must find their own space and form dynamic and resonating relationships (just like in every other form of audio-visual media).
  • Logistics: Equipment type, content type, equipment reliability, ease of use and communication between performing members (how do we know when to stop or bring the piece to an end?) are important in making sure everything works well.
  • Unpredictability: Even with all this thinking and planning, there will be surprises.

Read more…
