Audio Prototyping with Pure Data

Gamasutra, May 30, 2003 (Page 3 of 3)

The Adaptive Music Patch

Figure 13

There are many ways to produce adaptive music control structures. Most streaming audio methods use a combination of branching and layering techniques. Entirely generative adaptive structures, with roots stretching back to the beginnings of electronic music, are also possible, but they fall outside the scope of this article. Branching consists of playing back sampled music phrases and using the game state to decide which phrase to branch to next. Layering consists of breaking the musical phrases into multiple tracks, which can then be mixed and manipulated separately.

Utilizing branching, the Pure Data "Adaptive Music" prototype patch supports three different intensity levels of musical content. At each level, there is an intro phrase, an outro phrase and three variations on a middle looping phrase. When playback starts, it begins with the intro phrase of the appropriate intensity, branches to a random looping phrase variation and continues to do so until the intensity changes. When the intensity changes to a different value, it plays the outro phrase of the current intensity and branches to the intro phrase of the target intensity. This process is encapsulated in the logic of the "sequencer" subpatch.
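The branching behavior of the sequencer subpatch can be sketched in ordinary code. The following Python sketch is illustrative only; the names (`PHRASES`, `next_phrase`) and data layout are hypothetical, not taken from the actual patch.

```python
import random

# Hypothetical phrase table: three intensity levels, each with an intro,
# three loop variations, and an outro, mirroring the structure described above.
PHRASES = {
    level: {"intro": f"intro_{level}",
            "loops": [f"loop_{level}_{v}" for v in range(3)],
            "outro": f"outro_{level}"}
    for level in (1, 2, 3)
}

def next_phrase(current_level, target_level, state):
    """Pick the next phrase to stream, branching on intensity changes.

    state is "intro", "loop", or "outro"; returns (phrase, level, new_state).
    """
    if state == "intro":
        # After an intro, branch to a random loop variation at this level.
        return random.choice(PHRASES[current_level]["loops"]), current_level, "loop"
    if state == "loop":
        if target_level == current_level:
            # Intensity unchanged: keep looping among the variations.
            return random.choice(PHRASES[current_level]["loops"]), current_level, "loop"
        # Intensity changed: play the current level's outro first.
        return PHRASES[current_level]["outro"], current_level, "outro"
    # After an outro, branch to the target level's intro.
    return PHRASES[target_level]["intro"], target_level, "intro"
```

In the patch itself this state machine lives in the "sequencer" subpatch and is driven by bang messages at phrase boundaries rather than function calls.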

Figure 14

In combination with the branching control structure, this patch also uses layering techniques. Each phrase consists of three layers which can be modified separately. The fractional portion of the intensity value controls the volume variability of each layer: the lower the intensity sits within its range, the greater the possible variation in a layer's volume level, as defined by the spreadsheet. The layers typically correspond to rhythm, melody and harmony for layers 1 through 3, respectively.
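The mapping from a fractional intensity to per-layer volumes can be sketched as follows. This is a minimal Python illustration under stated assumptions: the function name, the use of a uniform random reduction, and the exact scaling are hypothetical, chosen only to show the idea that a lower fractional intensity permits wider volume variation.

```python
import random

def layer_volumes(intensity, base_volumes, max_spread):
    """Hypothetical sketch: derive per-layer volumes from a fractional intensity.

    intensity:    float such as 1.25; the integer part selects the level,
                  the fractional part scales how much the layers may vary.
    base_volumes: nominal volume per layer (e.g. rhythm, melody, harmony).
    max_spread:   per-layer maximum variation, as the spreadsheet would define.
    """
    frac = intensity - int(intensity)
    variability = 1.0 - frac  # lower within the range => more variation allowed
    return [max(0.0, base - random.uniform(0.0, spread * variability))
            for base, spread in zip(base_volumes, max_spread)]
```

At intensity 1.0 a layer may dip by its full spread; near 1.99 the layers stay close to their nominal levels.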

Figure 15

The combination of branching and layering control structures in this patch supports logical transitions between intensity levels, layer variation while the intensity level is static, multiple variations on each intensity's loop phrase, variable phrase lengths, and the possibility of silence.

A potential drawback of this system is that it could spend much of its time in transition sections if the intensity fluctuates frequently between levels. Smoothing the dynamics of the intensity values would reduce this possibility, and additional transition phrases would add variety. This patch doesn't incorporate the concept of "stingers," short overlays that can react more quickly to game-state changes. Since sections must play out in their entirety, the music can also seem insufficiently reactive to changes in the game's intensity. However, this can also be an advantage: if the music is too closely tied to events the player can control, the player may discover how to "play" the music and subvert its normally passive influence.

Figure 16

This patch works by reading each row in a text file generated by a spreadsheet (shown in Figure 16) in the following format:

<5 parameters for future expansion>

Each group of five rows in the spreadsheet is organized as follows: the first row is the intro phrase, followed by the three loop phrase variations, then the outro phrase. See Figure 16 for an example.

Figure 17

The rightmost five columns on each row are "reserved for future expansion" (i.e., whatever needs might arise). There are three intensity level groupings, for a total of 15 rows of musical phrases.
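A loader for this file layout might look like the sketch below. It assumes (hypothetically) whitespace-separated columns and exactly five rows per intensity level in the order described above; the function name and the dictionary shape are illustrative, not part of the original patch.

```python
def load_music_table(path):
    """Hypothetical parser for the spreadsheet-exported text file.

    Assumes each row lists a phrase's layer files and parameters, with the
    rightmost five columns reserved for future expansion, and that every
    five rows form one intensity level: intro, three loops, outro.
    """
    with open(path) as f:
        rows = [line.split() for line in f if line.strip()]
    levels = []
    for i in range(0, len(rows), 5):
        intro, loop_a, loop_b, loop_c, outro = rows[i:i + 5]
        levels.append({"intro": intro,
                       "loops": [loop_a, loop_b, loop_c],
                       "outro": outro})
    return levels  # three entries, one per intensity level
```

In the patch the equivalent work is done by stepping the "textfile" object through the rows and unpacking each one as it is needed.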

Figure 18

Each line of data is read into the patch's "textfile" object and split apart by the "unpack" object. The trick is that the data must be streamed into a double buffer to permit continuous, uninterrupted playback; if Pure Data is overly taxed, the audio can break up while the next buffer loads. The next phrase's layers and associated parameter data are read ahead of time, while the current phrase is playing. This way the data is ready the instant the current phrase completes, and playback of the next phrase can begin immediately. At this point, the "line~" object fades the current layer volume to the new layer's volume for a smooth transition.
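The read-ahead scheme can be sketched as a two-slot buffer. This Python class is a hypothetical illustration of the idea, not the patch's actual mechanism; `PhraseStreamer` and `loader` are invented names, and real audio loading and volume ramping (Pd's "line~") are abstracted away.

```python
from collections import deque

class PhraseStreamer:
    """Hypothetical sketch of the double-buffered read-ahead described above.

    While the current phrase plays, the next phrase's data is loaded into a
    second buffer so the switch at the phrase boundary incurs no load stall.
    """
    def __init__(self, loader):
        self.loader = loader           # callable: phrase name -> audio buffer
        self.buffers = deque(maxlen=2)  # at most current + preloaded next

    def queue(self, phrase):
        # Load ahead of time, while the current buffer is still playing.
        self.buffers.append(self.loader(phrase))

    def advance(self):
        # Called the instant the current phrase finishes: the preloaded
        # buffer becomes current immediately.
        return self.buffers.popleft()
```

The essential point is that `queue` runs during playback, so `advance` never waits on disk.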

Figure 19

The "on" subpatch uses the graph-on-parent functionality which lets you hide the functions within the patch, but expose the controls such as the toggle switch on the parent patch. Turning the patch off causes playback to stop at the end of the next phrase.

Figure 20

This patch lets you try out compositional ideas with phrases and layers and audition changes immediately. Since the audio is streamed in only as needed, you can change the source data while the patch is running and hear the results in real time.

Figure 21

Many improvements could be made to this patch, but one of the first ought to be a weighting system that makes certain phrases more likely than others to follow the current phrase. This amounts to a simple Markov chain, and could be used to create sequences where phrases normally play in a certain order but occasionally deviate from it. It would also be nice to add the possibility of crossfading, especially between the intro and outro phrases, but this would require extra logic and a crossfade buffer.

Implementing A Prototype

Table 2. Pure Data tips & tricks for more advanced users.
  • Use "makefilename" to produce a dynamic send. For example, to send to receives vol1 through vol4, use "makefilename vol%d" and connect it to a message ";$1 $2", where $2 is the parameter you wish to send.
  • To create objects at run-time, use the ";pd-" message.
  • For complex audio effects, look for free VST plugins on the web and use the "vst~" object.
  • Label the inlets and outlets of your abstractions with a data type (bang, int, etc.) and a brief description so it's easy to remember how to use them and what they do.
  • For network communication, try the "netsend" and "netreceive" objects, which use TCP/IP. They can also be used to communicate with other programs.
  • With abstractions, use the dollar sign variables to set static parameters.
  • To make group volume outputs, use throw~ objects to direct your audio at a single catch~ object.

Before a composer gets too far with prototyping, he and the audio programmer should agree on which objects are usable, what the maximum CPU utilization should be, the naming conventions for files and patches, and so on. The composer might create only partial prototypes for the coder, but even simple ones can help immensely.

During the prototyping process, the coder should aid the composer with implementation issues and technical design decisions. The goal is to have the composer drive the process with support from the coder as needed. Hopefully the composer can stay productive and inspired enough to discover and define new behaviors to apply to his content. Knowing how the content can be controlled may change the manner in which the composer creates the content. The more the composer learns how to formally define interactive behaviors (using Pure Data in this case), the better equipped he will be when it comes time to describe his goals to the coder.

When the prototype reaches a stage where the composer is pleased with the result, the functionality must be rebuilt by the coder. The audio coder should be familiar enough with Pure Data so that she can understand what the patch is doing and what the important elements are.

The crucial final step in prototyping is when the composer hands the prototype over to the coder for implementation. The composer and coder must trust each other during this process. The coder should feel comfortable enough to make modifications and feel a sense of ownership over the prototype. Many things that are difficult to do in Pure Data are easier to build in C++ (and vice versa), so it's good to let the coder oversee the technical side of the prototyping process to ensure that the prototype is useful.

A nice benefit to having the coder clean up the prototype in Pure Data is that areas of code which should be reorganized into subpatches or abstractions often become visually obvious. Using software engineering lingo, code cohesion and encapsulation can become visually apparent. This process may even result in the creation of a small library of reusable components that can be applied to future projects. The coder may also find that there are repeated sections which could be better expressed in C++ and custom code objects for the composer to use in the future.

Although the coder will seemingly spend extra time reorganizing and throwing out prototype code, this time will more than likely pay off in the long run with cleaner, easier-to-maintain code than code that awkwardly evolved through the prototype phase.


Prototype And Conquer

These examples barely scratch the surface of what is possible with prototyping using Pure Data. With advancements in the technology of audio on game platforms, prototyping will become increasingly necessary to harness the new power effectively. The overall goal of this method is to help everyone in the audio team contribute, maximizing their skill sets. The idea is not to turn composers into coders and vice versa; rather, prototyping can help bridge the gap between what can seem like two separate disciplines from the start. Be it technical or creative, everyone should feel open to contribute to the final product and share in the rewards.


Archival images supplied by the Internet Moving Images Archive, in association with Prelinger Archives, from "How to Listen To... New Dimensions in Sound."

