Finding the right sound engine: Why Wwise is different

by David McClurg on 09/10/09 12:07:00 pm


The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

On the last game I worked on, Tony Hawk: Ride, we used the Wwise sound engine.  Audiokinetic says Wwise features an advanced audio authoring tool tightly integrated with a robust sound engine.  Let's examine that claim.

For an audio programmer familiar with FMOD, Wwise offers a very different approach.  Instead of a low-level wrapper that can only load and play sounds, you get real help handling complex audio behaviors such as fades and containers like random, sequence, blend, and switch.  Game “syncs” let you update states, switch values, and real-time game parameters.  You can then build hierarchies of containers.  An audio “bus” can group related sounds such as music, voices, and effects for volume control.  All of this is done in the authoring tool, so it can be changed and tested by the audio designer without help from a programmer.  Our audio designer loved Wwise.  Its graph editor makes curves easy to change for things like speed-to-pitch ramps.
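To make the curve idea concrete, here is a toy model of a designer-authored ramp: a few (input, output) points that the engine interpolates between at runtime.  This is just an illustration of the concept in Python, not Audiokinetic's API, and the specific curve points are made up.

```python
def evaluate_curve(points, x):
    """Piecewise-linear lookup over sorted (x, y) curve points, clamped
    at both ends -- roughly what a speed-to-pitch ramp does."""
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical speed-to-pitch ramp: 0 units/s -> no shift,
# 20 units/s -> +1200 cents (one octave up).
speed_to_pitch = [(0.0, 0.0), (10.0, 400.0), (20.0, 1200.0)]
print(evaluate_curve(speed_to_pitch, 5.0))  # halfway up the first segment -> 200.0
```

The point is that the shape of this curve belongs to the designer, who drags points around in the graph editor; the programmer only feeds in the input value.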

Let me give you an example.  Say you want a dragging sound that works on different ground types and changes pitch as dragging speed increases.  You create events in Wwise called “START_DRAGGING_LOOP” and “STOP_DRAGGING_LOOP” and game syncs called “GROUND_TYPE” and “DRAGGING_SPEED”.  That is all the programmer needs to know.  The audio designer now has the freedom to create, modify, and test the behavior in Wwise, for example:

  • DRAGGING_LOOP Switch container – using “GROUND_TYPE”
    • GRASS_DRAGGING_LOOP Random container
      • GRASS_DRAGGING_1 sound – using “DRAGGING_SPEED” to change pitch
      • GRASS_DRAGGING_2 sound – ditto for other sounds
      • GRASS_DRAGGING_3 sound
    • DIRT_DRAGGING_LOOP Random container
      • DIRT_DRAGGING_1 sound
      • DIRT_DRAGGING_2 sound
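The hierarchy above can be sketched as a toy model: a switch container routes on the GROUND_TYPE sync, and each random container picks one of its sounds.  The class names and `resolve` method below are my own invention for illustration, not Wwise's API; only the container/sound names come from the example.

```python
import random

class Sound:
    def __init__(self, name):
        self.name = name
    def resolve(self, syncs):
        return self.name

class RandomContainer:
    """Picks one of its children at random each time it resolves."""
    def __init__(self, children):
        self.children = children
    def resolve(self, syncs):
        return random.choice(self.children).resolve(syncs)

class SwitchContainer:
    """Routes to a child based on the current value of a switch group."""
    def __init__(self, switch_group, children):
        self.switch_group = switch_group
        self.children = children  # switch value -> child container
    def resolve(self, syncs):
        return self.children[syncs[self.switch_group]].resolve(syncs)

dragging_loop = SwitchContainer("GROUND_TYPE", {
    "GRASS": RandomContainer([Sound(f"GRASS_DRAGGING_{i}") for i in (1, 2, 3)]),
    "DIRT":  RandomContainer([Sound(f"DIRT_DRAGGING_{i}") for i in (1, 2)]),
})

print(dragging_loop.resolve({"GROUND_TYPE": "DIRT"}))  # DIRT_DRAGGING_1 or _2
```

Because the containers compose, the designer can restructure this tree (add a ground type, swap a random container for a sequence) without the programmer's event names ever changing.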

Once the game updates the ground type and dragging speed whenever they change, and triggers the start/stop events during gameplay, Wwise does the rest.  Even if your game engine can build this on top of FMOD, the audio behavior descriptions may end up scattered across scripts, animation, and physics, making bugs very difficult to track down.  Or the tool for creating audio behaviors may be limited or clumsy.
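Here is what the game-side code amounts to.  In the real C++ API the calls are AK::SoundEngine::SetSwitch, SetRTPCValue, and PostEvent; the Python stand-in below just records the calls to show how thin the programmer's side of the contract is.  The game-object id is hypothetical.

```python
class MockSoundEngine:
    """Records calls in the shape of Wwise's set-switch / set-RTPC /
    post-event interface.  A mock for illustration, not the real SDK."""
    def __init__(self):
        self.calls = []
    def set_switch(self, group, value, game_object):
        self.calls.append(("switch", group, value, game_object))
    def set_rtpc(self, name, value, game_object):
        self.calls.append(("rtpc", name, value, game_object))
    def post_event(self, event, game_object):
        self.calls.append(("event", event, game_object))

engine = MockSoundEngine()
board = 42  # hypothetical game-object id for the dragging board

engine.set_switch("GROUND_TYPE", "GRASS", board)       # when terrain changes
engine.post_event("START_DRAGGING_LOOP", board)        # when dragging begins
engine.set_rtpc("DRAGGING_SPEED", 12.5, board)         # each frame, as speed changes
engine.post_event("STOP_DRAGGING_LOOP", board)         # when dragging ends
```

Everything else, which sound plays, how it loops, how speed maps to pitch, lives in the authoring tool.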

Now the programmer, instead of implementing the audio behaviors, just triggers the Wwise event by name (string or CRC).  At first this feels too easy, even disturbing.  You don't manage the list of voices or blocks of sound memory.  The Wwise sound engine won't even give you an error when you trigger an event it doesn't recognize.  For that, you must use the profiler and performance monitor built into the authoring tool.  That's what I call tight integration!  There's no point in developing with Wwise if you don't hook up the profiler.
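On the string-vs-CRC point: Wwise can turn an event name into a 32-bit numeric ID (AK::SoundEngine::GetIDFromString), so shipping code can trigger events without string compares.  The sketch below assumes the FNV-1 hash over the lowercased name; treat the exact variant and constants as an assumption of mine, not a spec.

```python
def name_to_id(name):
    """32-bit FNV-1-style hash of the lowercased event name (assumed variant)."""
    h = 2166136261          # FNV offset basis
    for byte in name.lower().encode("ascii"):
        h = (h * 16777619) & 0xFFFFFFFF   # FNV-1: multiply by the prime first...
        h ^= byte                         # ...then XOR in the byte
    return h

# Hashing the lowercased name means designers and programmers can't
# disagree about capitalization of "START_DRAGGING_LOOP".
start_id = name_to_id("START_DRAGGING_LOOP")
```

One consequence is the silent failure described above: a mistyped name still hashes to a perfectly valid ID, so nothing at the call site can tell you it matches no event, which is exactly why you watch the profiler instead.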

The authoring tool switches to a different layout when profiling and gives you a log of everything that is happening and how much memory is in use.  We were able to fine-tune the memory requirements down to the smallest footprint.  To find bugs, we just played the game and watched the profiler log.  Wwise color-codes the log and lets you filter it, so problems stand out quickly.  Any audio behavior adjustments happen in the authoring tool.  This is a huge improvement!  You can watch CPU performance, streaming buffers, playing voices, or any other detail in real time.  You can see exactly how much memory the currently loaded sound banks use, when a streaming sound is starving, and how many streams are playing.

At the end of the day, I can say that we had fewer bugs on TH: Ride, and they were easier to fix than on previous projects.  Mostly that was due to our limited sound design and to the audio behaviors living in the sound banks rather than being scattered across the game logic.  The Wwise sound engine is surprisingly tight and robust.  We found only one issue that tech support had to fix, and they promptly gave us a patch.  Wwise supported our cross-platform development on the Wii in several ways: we could tweak conversion settings, disable sounds, and set up work units on a per-platform basis.

It would be tough doing another project without Wwise.  The authoring tool is too sweet, and the profiling and performance tools are too useful for finding bugs.  Along with Lua, I've added it to my must-have list of middleware.  Wwise is also unique in supporting procedural sound generators through SoundSeed, which lets you create dynamic audio content at runtime.  I'm trying that next.

