
Unity: Controlling game elements with sound

On 14, Dec 2013 | 5 Comments | In Software, Sound Design, Work | By Varun Nair

I’ve been playing around with Unity 3D in my spare time. Verdict? Lots of fun! Thankfully I’ve found it easy to understand because of the many years I spent as a teenager watching my brother work in 3D Studio Max and Maya.

The audio side of Unity is relatively easy (and therefore limited) if you have previous experience in game audio. Getting both the visuals and audio to work is straightforward if you have any experience in object-oriented programming. Thankfully C# (I'm no good at JavaScript) is similar to Java and C++, both of which I have been getting familiar with over the past year.

With game audio we often come across a one-sided process: the game engine feeds the audio engine data and the audio engine outputs sound. It isn't very often that we see the opposite happening. In Unity (Pro only), this was made easier from version 3.5 onwards with the OnAudioFilterRead callback. OnAudioFilterRead is meant for creating custom filters, but it can just as well be used to control other game elements. If you don't have Unity Pro, it is worth downloading the trial and giving it a go.
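To give a sense of what the callback looks like before getting into the recipe, here is a minimal sketch (the class and field names are my own, not from the post): it simply measures the average amplitude of whatever passes through the object and stores it for the main thread to read.

```csharp
using UnityEngine;

// Minimal sketch of OnAudioFilterRead used as a tap rather than a filter.
// Unity calls it on the audio thread with the interleaved sample buffer.
[RequireComponent(typeof(AudioSource))]
public class AudioTap : MonoBehaviour
{
    // Written on the audio thread, read from Update() on the main thread.
    volatile float currentLevel;

    void OnAudioFilterRead(float[] data, int channels)
    {
        float sum = 0f;
        for (int i = 0; i < data.Length; i++)
            sum += Mathf.Abs(data[i]);
        currentLevel = sum / data.Length;   // average absolute amplitude of this buffer
    }

    void Update()
    {
        Debug.Log(currentLevel);            // e.g. watch the level in the console
    }
}
```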

This post is a quick and simple recipe for controlling the intensity of a light with sound, but the principles can very easily be extended to anything else in the game.


Step 1: Set up a scene in Unity
Step 2: Attach an audio source and a light source to an object, and attach a sound to the audio source component
Step 3: Create a new script for this object
Step 4: Use a smoothing filter to analyse the amplitude of the signal
Step 5: Map the amplitude value to the intensity of the light (see the sketch after this list)
Step 6: TA-DA!
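To show where the steps end up, here is a rough sketch of the finished script from steps 3–5. It is written under my own assumptions (the smoothing coefficient, intensity scaling and class name are placeholders, not values from the post); the rest of the post builds it up piece by piece.

```csharp
using UnityEngine;

// Sketch of steps 3-5: analyse the signal in OnAudioFilterRead, smooth it,
// and map the result to the light's intensity.
[RequireComponent(typeof(AudioSource), typeof(Light))]
public class SoundToLight : MonoBehaviour
{
    public float smoothing = 0.05f;   // one-pole smoothing coefficient (0-1)
    public float maxIntensity = 8f;   // light intensity at full amplitude

    Light lamp;
    float smoothedLevel;              // written on the audio thread (fine for a sketch)

    void Start()
    {
        lamp = GetComponent<Light>();
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        // Average absolute amplitude of this buffer.
        float sum = 0f;
        for (int i = 0; i < data.Length; i++)
            sum += Mathf.Abs(data[i]);
        float level = sum / data.Length;

        // Simple one-pole smoothing so the value doesn't flicker frame to frame.
        smoothedLevel += smoothing * (level - smoothedLevel);
    }

    void Update()
    {
        // Map the smoothed amplitude to the light's intensity on the main thread.
        lamp.intensity = smoothedLevel * maxIntensity;
    }
}
```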

Step 1

(If you are familiar with Unity you can skip to Step 3)

Create a new Unity project. Create a cube object to use as the floor of the scene (Game Object > Create Other > Cube). With the cube selected, use the inspector (the tab on the right, by default) to scale the dimensions of the cube: X: 30, Y: 0.1, Z: 30.

Create a sphere (Game Object > Create Other > Sphere). Change its Y position value to 2 in the inspector.

Step 2

Read more…



Implementation is Design Pt.2 – Ambience

On 19, Jul 2012 | No Comments | In Software, Sound Design, Work | By Varun Nair

The problem with interactive entertainment is that it is easy to either get carried away with the ‘wow’ factor of things or to pile up so much material to grab the player's attention that it all becomes very confusing. At the early stages of designing Meltdown, we tried our best to keep the concept and the soundscape very simple. We had enough to do on a technical level (the actual building of the sound engine and the logic system) that I did not want to overload us with grand features. Baby steps always, until our feet get bigger!

The general ambience of the game was a bit tricky. We needed something that was responsive (but not so responsive that it distracted the player), constant (but not irritating) and electronic/digital in nature (without sounding like a typical synth). The final patch consisted of a sound file player crossfading between two tonal files (composed in Logic) feeding through a convolution module (convolved with a creature/animal sample), a delay module, a cheap reverb module and a granulation module. The granulation and delay times (variable delay ramps create a pitch up/down effect) were controlled by the compass on the iPhone to create some amount of reactiveness. The patch also had an additional module that crossfaded between various ‘creepy’ ambiences designed by Orfeas. Even with all this processing, our CPU load was more than manageable.
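As a rough illustration of the compass-driven control (a conceptual sketch only – the real thing was a Max/MSP patch, and the ranges and names below are invented):

```csharp
using System;

// Conceptual sketch: turning a compass heading (0-360 degrees) into delay and
// grain-size values. Ramping towards the target delay time, rather than jumping
// to it, is what produces the pitch up/down effect of a variable delay line.
public static class CompassMapping
{
    public static float HeadingToDelayMs(float headingDegrees,
                                          float minMs = 20f, float maxMs = 250f)
    {
        // Fold the heading so 0 and 360 degrees meet, giving a smooth 0..1 value.
        float folded = Math.Abs((headingDegrees % 360f) - 180f) / 180f;
        return minMs + folded * (maxMs - minMs);
    }

    public static float HeadingToGrainMs(float headingDegrees,
                                          float minMs = 30f, float maxMs = 120f)
    {
        float folded = Math.Abs((headingDegrees % 360f) - 180f) / 180f;
        return minMs + (1f - folded) * (maxMs - minMs);
    }
}
```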

Here’s a sample of what it sounded like:

 

The patch (click for a larger view):

Next post: the interactive mixing system.



Meltdown – A Binaural Sound Game

On 28, May 2012 | No Comments | In Software, Sound Design, Work | By Varun Nair

A wormhole is detected.

Nothing is seen. Disturbances are felt. Things can be heard.

You hear creeptures amongst a swarm of sounds.

Find and kill all seven of them. You have ten minutes. You have your ears.

Meltdown is a location-based binaural sound game that was developed as a prototype to explore the use of location-based technologies and their influence on sound.

The game: a sonic rift and wormhole have been detected at a park, and the effects cannot be seen, only felt. If the creeptures aren't killed, the infected area will become a sonic dead zone. You have the technology to listen to them amongst the sounds they have trapped in space. Eliminate them and return the area to normalcy.

The search-and-destroy concept of the game is not new. What makes it interesting is that the player interacts directly with the environment (within a fixed area). There is no screen. There is no typical game controller. Armed with just an iPhone (which is both weapon and scanner) and their ears, the player must walk and think like a hunter. On encountering a creepture they must binaurally locate it, bring it closer to them (using a gesture) and kill it by stabbing the iPhone in the correct direction. The game is immersive not only because it is binaural but also because it includes sounds from the environment. For example, the player might hear the swing moving (properly localised with the correct distance and angle calculations) but won't see it moving. They might hear someone running across or a dog barking without seeing any of it, all while they interact with the environment and react to it with their body and mind.

The game has limitations in its current form. GPS accuracy isn't great (although we found workarounds). Being a prototype that was developed in little time, it does not run natively on the iPhone. Instead, the iPhone communicates with a computer running Max/MSP. The game was completely developed in Max/MSP (and JavaScript). We built most of the systems from the ground up – interactive sound players, interactive mixer, synthesis modules, granulated file playback systems, dialogue system, gesture identification scripts, location and binaural angle calculators, etc. The binaural processing was made possible (thankfully) with IRCAM's Spat family of objects. Max has its limitations, although it is fantastic as a prototyping system. Given the time (and budget) it would be great to develop this as a native app (that could be played regardless of the location) and have the freedom to make it sound better, with varying layers and levels of complexity and better gameplay.
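For a flavour of what the location and binaural angle calculators have to do, here is a sketch under my own assumptions (the actual maths lived on the Max/MSP and JavaScript side, and all names here are invented): great-circle distance and bearing from the player to a creepture, then the angle relative to the direction the player is facing.

```csharp
using System;

// Sketch: distance and bearing from the player's GPS position to a sound source,
// turned into an angle relative to the phone's compass heading for binaural panning.
public static class GeoAudio
{
    const double EarthRadiusM = 6371000.0;

    // Great-circle distance between two lat/lon points (haversine formula).
    public static double DistanceMetres(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = Deg2Rad(lat2 - lat1);
        double dLon = Deg2Rad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(Deg2Rad(lat1)) * Math.Cos(Deg2Rad(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return EarthRadiusM * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }

    // Compass bearing (degrees from north) from the player to the source.
    public static double BearingDegrees(double lat1, double lon1, double lat2, double lon2)
    {
        double dLon = Deg2Rad(lon2 - lon1);
        double y = Math.Sin(dLon) * Math.Cos(Deg2Rad(lat2));
        double x = Math.Cos(Deg2Rad(lat1)) * Math.Sin(Deg2Rad(lat2)) -
                   Math.Sin(Deg2Rad(lat1)) * Math.Cos(Deg2Rad(lat2)) * Math.Cos(dLon);
        return (Rad2Deg(Math.Atan2(y, x)) + 360.0) % 360.0;
    }

    // Angle the binaural panner needs: bearing to the source minus the direction
    // the player is facing, wrapped to the range -180..180 degrees.
    public static double RelativeAngle(double bearingDeg, double headingDeg)
    {
        return (bearingDeg - headingDeg + 540.0) % 360.0 - 180.0;
    }

    static double Deg2Rad(double d) { return d * Math.PI / 180.0; }
    static double Rad2Deg(double r) { return r * 180.0 / Math.PI; }
}
```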

Read more…



KeyD – Update

On 01, Feb 2012 | No Comments | In Software | By Varun Nair

When I mentioned KeyD in this post on designingsound.org, I did not expect so many downloads. It was a simple and quick app I had made for myself in Max.

For those who don't know what I'm talking about, it is an app that allows the computer keyboard to be used as a MIDI interface and is inspired by Logic's caps-lock keyboard.
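As a conceptual illustration of that idea (not KeyD itself, which is a Max patch; the key layout and names below are my own approximation of a caps-lock style mapping):

```csharp
using System.Collections.Generic;

// Sketch: map a row of computer keys to MIDI note numbers, the basic idea
// behind a caps-lock style keyboard. 'a s d f ...' are white keys and
// 'w e t y u' the black keys in between; 60 is middle C.
public static class KeyToMidi
{
    static readonly Dictionary<char, int> Offsets = new Dictionary<char, int>
    {
        ['a'] = 0, ['w'] = 1, ['s'] = 2, ['e'] = 3, ['d'] = 4, ['f'] = 5,
        ['t'] = 6, ['g'] = 7, ['y'] = 8, ['h'] = 9, ['u'] = 10, ['j'] = 11,
        ['k'] = 12
    };

    // Returns a MIDI note number for a key, or null if the key isn't mapped.
    public static int? NoteForKey(char key, int baseNote = 60)
    {
        int offset;
        return Offsets.TryGetValue(key, out offset) ? baseNote + offset : (int?)null;
    }
}
```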

Because of such a great response I thought it deserved an update. Here’s what is new:

* New GUI with a ‘tighter’ layout (the old black and white made my eyes sore)
* New pitch bend wheel mapped to the [=] and [ _ ] keys with customisable glide time
* New MIDI channel selector (useful if using with a standalone sampler like Kontakt)
* Support for Kyma over OSC (OS X only)
* New MIDI out indicator
* Caps-lock enable/disable
* Support for Windows – this is not a stable version. It has a few bugs which I haven't had the time to track down. If you are a Max user on Windows and would like to look at it, let me know.

More info and downloads here.

Thanks to:

Andrew Capon – OSC Kyma MIDI object
Ana Roman – GUI help
Hrishikesh Dani, Roel Sanchez, Shaun Farley – Windows testing
Jean-Edouard Miclot – Kyma testing



Replacing the audio track in QuickTime

On 05, Jan 2012 | 6 Comments | In Software | By Varun Nair

The export-audio-to-video options in Pro Tools and Logic rarely work well in my experience. I used to use iMovie to sync audio to video, but anyone who has come an inch close to even opening iMovie knows how enjoyable that experience is. Fortunately, it's really simple to do this in QuickTime Pro (version 7, not X). If you use Logic Pro, your copy of QuickTime 7 should automatically update to Pro.

Here’s how:

  1. Open .mov that needs audio replaced in QT 7
  2. Open the new soundtrack in QT 7 (wav/aiff)
  3. Select the QT window with the .mov, go to Window > Show Movie Properties (Cmd+J)
  4. Select the audio track and hit delete (to get rid of the existing audio, skip this step if the .mov has no sound)
  5. Select the QT window with the new soundtrack
  6. Go to Edit > Select All (Cmd+A)
  7. Edit > Copy (Cmd+C)
  8. Select the QT window with the .mov
  9. Make sure the playhead is at the beginning of the clip (0:00:00)
  10. Edit > Add to Movie (Option+Cmd+V)
  11. Export the movie to whatever format you need it to be in with the new soundtrack. File > Export



A Sound Design Tool

On 07, Nov 2011 | 5 Comments | In Software, Sound Design | By Varun Nair

It’s been about a month and a half since I started using Max/MSP and it’s been a whole lot of fun with many sound revelations. It’s also been responsible for the lack of time to do anything else!

After the ‘Design Toolbox – Flangers’ post I put together for designingsound.org, and further inspired by Steve Urban's comment about trying to find something similar to Logic's Signal Generator in Pro Tools, I decided to make an application in Max/MSP. What started off as an idea for a simple signal generator – which I planned to have ready in a day – ended up as a signal generator + mangling tool + recorder + ReWire application. I had to force myself to stop work on it (for now) because I would add a new feature/fix every time I played around with it – the pitfalls of being both the user and creator. On the whole, it's a very simple app. Ideally I would like to compile it into a VST/RTAS/AU plugin, but that is currently beyond my grasp. I primarily designed it to work ReWired with a DAW, although it can run standalone.

So, to cut to the chase:

What does it do?

It's a signal generator that generates sine/triangle/square waves and white/pink noise. It also does sweeps anywhere in the audible frequency range, over a custom time period, with any of the three oscillators.

Read more…
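As a rough illustration of the sweep idea (a sketch only – the app itself is a Max/MSP patch, and the function and parameter names here are my own):

```csharp
using System;

// Sketch: generate an exponential sine sweep from f1 to f2 over a given duration,
// the sort of thing the signal generator's sweep mode does.
public static class SweepSketch
{
    public static float[] ExponentialSineSweep(double f1, double f2,
                                               double seconds, int sampleRate = 48000)
    {
        int n = (int)(seconds * sampleRate);
        var buffer = new float[n];
        double k = Math.Log(f2 / f1);   // exponential glide from f1 to f2
        double phase = 0.0;

        for (int i = 0; i < n; i++)
        {
            double t = (double)i / n;                   // 0..1 through the sweep
            double freq = f1 * Math.Exp(k * t);         // instantaneous frequency
            phase += 2.0 * Math.PI * freq / sampleRate; // accumulate phase
            buffer[i] = (float)Math.Sin(phase);
        }
        return buffer;
    }
}
```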
