Notes from the Mix: Prototype 2

June 20, 2012
 

In-Game Sound Effects

These were all handled in-house by either Scott Morgan, myself, or Technical Sound Designer Roman Tomazin. Our mixing and live-tuning tools let us take sound effects from creation to implementation very quickly: we can replace sounds live in the PC engine build of the game and tune their levels while the game is running.
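To make the live-replacement idea concrete, here is a minimal sketch of how such a workflow might be wired up, assuming a simple polling file-watcher on the PC build; the class and callback names are hypothetical, not Radical's actual tooling.

```cpp
// A minimal sketch of live sound replacement via a polling file-watcher.
// LiveSoundWatcher and the reload callback are illustrative names only.
#include <filesystem>
#include <map>

namespace fs = std::filesystem;

class LiveSoundWatcher {
public:
    void Watch(const fs::path& wav) { watched_[wav] = fs::last_write_time(wav); }

    // Called once per frame by the running PC build: any .wav the designer
    // re-exports is hot-swapped without stopping the game.
    template <typename ReloadFn>
    void Poll(ReloadFn&& reload) {
        for (auto& [path, lastWrite] : watched_) {
            const auto now = fs::last_write_time(path);
            if (now != lastWrite) {
                lastWrite = now;
                reload(path);  // decode and swap the sound's buffer in place
            }
        }
    }

private:
    std::map<fs::path, fs::file_time_type> watched_;
};
```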

This has enormous implications and advantages for the final mix, because most of the process of designing a sound effect in the game involves pre-mixing it directly into the context of either a mission or a section of the open world (every sound added must be assigned to a bus in the mixer hierarchy). You would never submit anything that was too loud or that didn't bed into the context in which it was designed. This means that 95 percent of the mixing and balancing work (both horizontal and vertical) is already done by the time you get to the final mix.
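As a rough illustration of the "every sound is assigned to a bus" rule, a mixer hierarchy along these lines would make a sound's level inherently contextual from the moment it is created; the types and fields below are hypothetical, not the actual tool's schema.

```cpp
// Sketch of a mixer-bus hierarchy; names and fields are illustrative.
#include <string>

struct Bus {
    std::string name;
    float gainDb = 0.0f;   // tuned live while the game runs
    Bus* parent = nullptr;

    // Effective gain is the sum of this bus and all its ancestors, so moving
    // a parent fader (e.g. "SFX") rebalances every child sound at once.
    float EffectiveGainDb() const {
        return gainDb + (parent ? parent->EffectiveGainDb() : 0.0f);
    }
};

struct Sound {
    std::string asset;
    Bus* bus = nullptr;    // assignment is mandatory at creation time
};
```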

Having that real-time contextual information and control while you are creating the sound effect assets is invaluable for maintaining quality during production, for giving the sound designer ownership over the entire process, and for keeping the entire sound effects track of the title consistent.

Cutscene Sound

Bridging the gap between the sound design of the in-game world and that of the cutscenes is often a consistency challenge. We chose to handle the cutscenes in-house, and their production took shape on the same mix stage that we would eventually use to mix the final game. Handling the creation and mix of the cutscenes alongside the mix of the actual game itself had distinct benefits.

Using almost exclusively in-game sound effects as the sound design components for the movies themselves (from backgrounds to weapons, HUD sounds, and spot effects) made these two presentation devices consistent, even though visually and technically they existed in very different realms.

The cutscenes were pre-mixed during production and received a final mix just prior to the final mix of the actual game. The overall level of the cutscenes was treated as a consistency priority, so that once the level was set for cutscenes in the game (a single fader in our runtime hierarchy), it did not need to change from movie to movie, allowing us to use only one kind of mixer snapshot for any Bink-based movie that plays in the game.

Whenever a cutscene plays, we install a single mixer snapshot that pauses in-game sound and dialogue and ducks their levels down to zero, with the exception of HUD sounds and the bus for the FMV sound itself. There was almost no vertical mixing to be done on the FMV cutscenes once they had been dropped into context, as their dynamic range matched the bookends of the missions pretty well: they usually start loud, subtly become quieter for dialogue, and then build toward a crescendo again at the end of each scene. This worked well with the musical devices (endings) employed in-game to segue into the movies.
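A sketch of what that single reusable FMV snapshot might look like, with hypothetical bus names and levels, follows; one table suffices for every Bink movie precisely because all cutscenes were mixed to a consistent overall level.

```cpp
// Illustrative sketch of the single FMV snapshot described above.
// Bus names and dB values are invented for the example.
#include <string>
#include <vector>

struct BusState {
    std::string bus;
    float gainDb;
    bool paused;
};

// Installed whenever a Bink cutscene starts; reverted when it ends.
const std::vector<BusState> kFmvSnapshot = {
    {"SFX",      -96.0f, true },  // duck to silence and pause in-game sound
    {"Dialogue", -96.0f, true },
    {"Music",    -96.0f, true },
    {"HUD",        0.0f, false},  // HUD stays audible, per the mix above
    {"FMV",        0.0f, false},  // the single fader controlling movie audio
};
```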

NIS scenes (cutscenes that used the in-game engine) took a slightly different approach on the runtime vertical mix side of things. Because the game sound was still running while these scenes played, within different contexts in different parts of missions, we often needed to adjust the levels of background ambience or music depending on how much dialogue was present in the scene. Each of these sequences had an individual snapshot, which gave us a specific mix for each specific scene and context.
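For illustration, those per-scene snapshots could be as simple as a table keyed by scene, with trims tuned to each scene's dialogue density; the scene names and values below are invented.

```cpp
// Sketch of per-scene NIS snapshots: one entry per in-engine cutscene,
// tuned for how much dialogue that scene carries. All names are illustrative.
#include <map>
#include <string>

struct NisMix {
    float ambienceDb;  // background ambience trim for this scene
    float musicDb;     // music trim, lower when the scene is dialogue-heavy
};

const std::map<std::string, NisMix> kNisSnapshots = {
    {"mission03_intro", {-6.0f, -9.0f}},  // dialogue-heavy: pull music down
    {"mission03_outro", {-3.0f, -3.0f}},  // light dialogue: near-default mix
};
```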

The Final Mix

So, how did this all come together for the vertical final mix? We are incredibly fortunate to have an in-house PM3-certified mix stage in which to do our mix work on the game. This is valuable not only during the final mix phase of post-production but also, as previously alluded to, in the creation of the cutscene assets and the pre-mixing of the music. The room is central to our ability to assess the mix as we develop the game, and it provides touchstones at various key milestones.

Essentially, we try to think about the vertical areas of the mix at major milestones rather than leaving it all to the end. Because of this, the overall quality of the game's audio track is center stage throughout our entire production process: we can listen to, and demonstrate, the game in that room at any time for various stakeholders.

Vertical Mixing: Considering Context

The play-through / final mixing process was essentially a matter of adjusting levels in context. Even though all dialogue was mastered to a consistent -23 dB RMS, the context in which the lines played (e.g. against no music, or against high-intensity battle music) dictated the ultimate levels those files needed to play back at. Similarly with the music and FX: context drove all the mixing and adjustment decisions for each music cue.
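As a toy illustration of that contextual adjustment, the playback gain applied on top of the mastered dialogue level might be expressed as a function of the concurrent music context; the enum and dB offsets below are illustrative, not the actual mix values.

```cpp
// Sketch of contextual ("vertical") dialogue gain: files arrive pre-mastered
// to a consistent loudness, but playback gain depends on what else is playing.
enum class MusicContext { None, Ambient, HighIntensityBattle };

float DialogueGainDb(MusicContext ctx) {
    switch (ctx) {
        case MusicContext::None:                return 0.0f;  // mastered level as-is
        case MusicContext::Ambient:             return 1.5f;  // slight lift over the bed
        case MusicContext::HighIntensityBattle: return 3.0f;  // push lines over battle mix
    }
    return 0.0f;
}
```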

With all our assets, mastering/pre-mixing on the horizontal scale got us 75 percent of the way there; individual contextual tweaking (vertical) got us the rest of the way. One would not have been possible without the other. The final "mix" phase was scheduled from the outset of the project and took a total of three weeks.

Reaching the Limitations of State-Based Mixer Snapshots

Our state-based mixer snapshot system really got pushed to its limits on this title. There were several areas where multiple ducking snapshots installed at the same time created bugs and problems, such as in the front end (itself a ducking snapshot) when a line of mission dialogue was also played. Transitions between different classes of mixers, such as 'default' to 'duck', were also somewhat unpredictable and caused 'bumps' in the sound.

A combination of new user-definable behaviour for multiple ducks, side-chaining, and default snapshots will be a future focus for us, as we try to handle the overall structural mixing (such as movie playback, menus, pausing, and various missions) with active snapshots, and the passive mixing with auto-ducking and side-chaining.
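A sketch of what such passive side-chain ducking could look like on a single bus follows; the struct and its parameters are illustrative, showing the general technique of driving a music-bus attenuation from the dialogue bus's measured level rather than from an explicitly installed snapshot.

```cpp
// Sketch of passive side-chain ducking: the music bus is attenuated
// automatically whenever the dialogue bus carries signal. Values are invented.
struct SideChainDuck {
    float thresholdDb = -40.0f;  // dialogue level that triggers the duck
    float duckDb      = -6.0f;   // attenuation applied to the target bus
    float attack      = 0.05f;   // smoothing rate toward the ducked level
    float release     = 0.01f;   // smoothing rate back to unity
    float current     = 0.0f;    // current attenuation, in dB

    // Called once per audio frame with the dialogue bus's measured level;
    // the return value is added to the music bus's gain.
    float Update(float dialogueLevelDb) {
        const float target = (dialogueLevelDb > thresholdDb) ? duckDb : 0.0f;
        const float rate   = (target < current) ? attack : release;
        current += (target - current) * rate;  // simple one-pole smoothing
        return current;
    }
};
```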

As it is, the entire mix for the game was done using state-based snapshots, so everything from dialogue to different modes (such as hunting) was handled this way. The complexity of handling an entire game via this method also became evident: there were almost too many snapshots listed to navigate effectively (an unforeseen UI issue in the tools).

Having said this, the system we have developed is intuitive, robust, and allows deep control. Snapshots handle not only gain levels but also panning (including the ability to override the panning of 3D game objects), environmental reverb send amount, high- and low-pass filters, and fall-off distance multipliers, all dynamically on a per-bus basis. This gave us a huge amount of overall control of the sound in the game, so that rather than being an unwieldy, out-of-control, emergent destruction-fest, the big moments in gameplay could be managed quite efficiently, allowing a clear focus on big-picture consistency for the entirety of the final mix.
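To summarize that parameter set, a per-bus snapshot entry along the following lines could carry everything listed above; the field names are hypothetical, not the actual tool's schema.

```cpp
// Sketch of the per-bus parameters a snapshot drives, per the list above:
// gain, pan override, reverb send, filters, and fall-off multiplier.
#include <optional>

struct BusSnapshotParams {
    float gainDb            = 0.0f;
    std::optional<float> panOverride;    // overrides 3D-object panning if set
    float reverbSend        = 1.0f;      // environmental reverb send amount
    float lowPassHz         = 20000.0f;  // wide open by default
    float highPassHz        = 20.0f;
    float falloffMultiplier = 1.0f;      // scales 3D attenuation distance
};
```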


