Gamasutra: The Art & Business of Making Games
The Game Audio Mixing Revolution

June 18, 2009
 

[In the second installment of Rob Bridgett's series on the future of game audio, following his suggestion that audio mixing is primed for a revolution, the Scarface and Prototype audio veteran gathers mixing case studies on titles from LittleBigPlanet to Fable II, and concludes by looking at the next 5-10 years in the field.]

Case Studies

In this installment, I would like to look in depth at several video game audio mix case studies. I think it important for these studies not to contain just examples of projects I have been involved with here at Radical Entertainment, but also the varying tools and techniques that are used on other titles by other developers.

In this way we can begin to see common ground -- as well as the differences in approach from game to game and from studio to studio. So, as well as comments on two of the games that I worked on, I reached out to the audio directors of titles including Fable II, LittleBigPlanet, and Heavenly Sword to explain the mixing process in their own words, as follows:

Scarface: The World Is Yours (2006) Xbox, PS2, Wii, PC
Rob Bridgett, audio director, Radical Entertainment

"Scarface was the first game we had officially "mixed" at Radical, and we developed mixer snapshot technology to complement some of the more passive techniques, such as fall-off value attenuation, that we already had.

The mixer system was able to connect to the console and show the fader changes as they occurred at run-time; we could also edit these values live while listening to the results.

The entire audio production (tools and sound personnel -- myself and sound coder Rob Sparks) was taken off site to Skywalker Ranch for the final few weeks of game development. We used an experienced motion picture mixer, Juan Peralta, who worked with us on a mix stage at Skywalker Ranch to balance the final levels of the in-game and cinematic sound.

Mixing time took a total of three weeks: two weeks on the PS2 version (our lead SKU) in Dolby Pro Logic II surround, and a further week exclusively on the Xbox version in Dolby Digital. We also hooked up a MIDI control surface, the Mackie Control, to our proprietary software to further enable the film mixer to work in a familiar tactile mixing environment.

In terms of methodology, we played through the entire game, making notes and/or tweaks as we went. One of the first things we needed to do was to get the output level down, as everything during development had been turned up to maximum volume, in order to make certain features audible above everything else. Once we had established our default mixer level, we tweaked generic mixer events such as dialogue conversation ducks and interior ambience ducks which carried through the entire game.

We also spent time tweaking mixer snapshots for every specific event in the game; for example, each and every NIS cinematic had its own individual snapshot, so we could tweak levels accordingly. In total we had somewhere in the region of 150 individual mixer snapshots for the entire game, covering individual mission-specific events, generic events, and cinematics. Skywalker also has a home listening room with a TV and stereo speaker set-up, and we would often take the game there to check the mix."
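The snapshot-and-duck workflow described above can be sketched in miniature as follows. This is an illustrative sketch only -- the class, bus names, and simple linear fade are my own assumptions, not Radical's actual Audiobuilder implementation.

```python
# A minimal sketch of mixer-snapshot behavior: named snapshots store
# target gains per bus, and applying one interpolates each affected
# bus toward its target. All names here are illustrative, not
# Radical's Audiobuilder API.

class Mixer:
    def __init__(self, buses):
        self.gains = {bus: 1.0 for bus in buses}   # linear gain per bus
        self.snapshots = {}

    def define_snapshot(self, name, targets):
        """Store target gains for a subset of buses."""
        self.snapshots[name] = dict(targets)

    def apply_snapshot(self, name, fade_seconds, steps=10):
        """Step affected buses toward the snapshot's target gains."""
        targets = self.snapshots[name]
        start = {bus: self.gains[bus] for bus in targets}
        for step in range(1, steps + 1):
            t = step / steps
            for bus, goal in targets.items():
                self.gains[bus] = start[bus] + (goal - start[bus]) * t
            # in a real engine each step would be driven by the audio
            # clock (fade_seconds / steps per update); omitted here


mixer = Mixer(["music", "ambience", "sfx", "dialogue"])
# e.g. a dialogue-conversation duck: pull music and ambience down
# while a line plays, leaving sfx and dialogue untouched
mixer.define_snapshot("dialogue_duck", {"music": 0.4, "ambience": 0.5})
mixer.apply_snapshot("dialogue_duck", fade_seconds=0.5)
```

Restoring the previous mix would simply apply a "default" snapshot in the same way, which matches the article's description of a default mixer level that event snapshots temporarily override.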


Above: A screenshot of Radical's mixer snapshot technology in our proprietary engine 'Audiobuilder', as used on Prototype. Shown in the main window are the various buses; to their left are the lists of predefined snapshots. At the bottom left is a run-time level meter that shows the output levels from the console (whether 360, PS3, or PC).

Prototype (2009) Xbox 360, PS3, PC
Scott Morgan, audio director, Radical Entertainment

"Scott Morgan and myself spent a total of six weeks mixing the in-game content for this title. Three weeks were spent on the Xbox 360 mix; we also spent a further week on the PS3 with the cloned values from the 360. This was because we used a different audio codec on this platform and the frequencies of the sounds that cut through were noticeably different.

We also spent a further two weeks tweaking the mix as final (late) cinematic movie content drifted in and we integrated it into the game flow. In many ways we started mixing a little early -- four weeks prior to beta. At that point in development the game code was fairly fragile, and we experienced a lot of crashes and freezes that hindered the mix's progress; on some days, unfortunately, we got very little done. These stability issues tended to disappear after beta, when the game code was considerably more solid.

The methodology used was that we played through the entire game, making notes and tweaking buses as we went. The first day was spent getting the overall output adjusted to a reasonable level; we mixed at a reference listening level of 79 dB. We compared the game with both Gears of War 2 (a much louder title) and GTA IV (a much quieter title) in order to hit a comfortable middle ground.

From then on, we played through the game in its entirety, tweaking individual channels as we went, with particular attention on the first two to three hours of gameplay. One of the newer techniques we adopted for the mix was to record the surround output into Nuendo via one of the 8-channel pre-outs on the back of the receiver.

What this allowed us to do was to see the waveform and compare levels with earlier moments in the game for internal consistency. It also gave us instant playback of any audio bugs that we encountered, such as glitches or clicks, which we could get a coder to listen to and debug much faster than having them play through and reproduce. This also gave us instant playback of any sounds that were too quiet, such as dialogue lines for mission-specific events, so we could quickly identify the line and correct the volume.

The game was mixed in a newly constructed, THX-approved 7.1 mix stage built at Radical Entertainment in 2008. We used our proprietary technology (Audiobuilder), much improved from the Scarface project, but using many of the same mix features and techniques (passive fall-off and reverb tuning with reasonably complex mixer snapshot behavior and functionality). We again used hardware controllers, a Mackie Control Pro + 3 Extenders, which displayed and gave us tactile tuning control over all the bus channels on fader strips.

It has to be said that mixing a game at a reference listening level can be a fatiguing experience, especially over three or four weeks. It is true that the sheer quantity of action and destruction in Prototype adds to this, so we devised a routine of regular breaks and also took whatever opportunity we could to test the mix on smaller TV speakers.

One of the most useful things about the way Radical's mix studio is equipped is that we have RTW 10800X-plus 7.1 surround meters, which let us see clearly and instantly what is coming from which speaker, quickly debug any surround sound issues, and double-check the game's positional routing.

It is interesting to note that we used no LFE in Prototype whatsoever. Knowing how over-used LFE is in video games, we decided early on to rely on bass management -- the crossover from the main speakers to the sub channel -- to provide the entire low end, which gave us a far more controlled and clean low-frequency experience."
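Both case studies mention passive fall-off value attenuation running underneath the active snapshot mix, and run-time metering of output levels. A minimal sketch of those two pieces is below; the inverse-distance curve and the specific min/max distances are common conventions, assumed here for illustration, not Audiobuilder's actual curve.

```python
import math

def falloff_gain(distance, min_dist=1.0, max_dist=50.0):
    """One common passive fall-off curve: inverse-distance attenuation,
    full volume inside min_dist and silent beyond max_dist.
    The curve shape and constants are illustrative assumptions."""
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    return min_dist / distance

def gain_to_db(gain):
    """Convert a linear gain to decibels, as a level meter would show it."""
    return 20.0 * math.log10(gain) if gain > 0 else float("-inf")
```

For example, a source at twice the minimum distance plays at half amplitude, which a meter would report as roughly -6 dB relative to full scale; the active snapshot mix then scales on top of this passive per-source gain.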


