Michael Carr-Robb-John, Monolith Productions
Anyone who has ever written a game on a console knows about the certification headache. For the most part, certification is simply common sense and good practices, but there are one or two "requirements" that just seem to be nothing more than a way to cause developers problems. One such requirement that we've had to deal with is that from the time the user chooses to run your game, you must be displaying the first presentation screen within four seconds. If you have a large executable, it can take at least two or three seconds just for it to load before you get control, and you still have to load a whole bunch of visuals and sounds in order to present the main menu.
My game was taking 26 seconds from the time the user selected the game to the time it displayed the first presentation screen, so I could already feel that headache starting. My first job was to isolate what was required to display the menus, and put off loading specific global data until it was actually needed. Surprisingly, this had a bigger impact than I was expecting; it knocked off over nine seconds. Many tweaks later, I had managed to knock it down to around 10 seconds, but I just couldn't get it to load any faster. While discussing the problem with another engineer (best problem-solving method I have ever found), the solution dawned on us.
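Deferring global data until it is actually needed is the core of that first optimization. A minimal sketch of the idea, with entirely illustrative names (the article doesn't show its implementation):

```python
# Sketch of lazy-loading global data: at boot we only register loader
# functions; the heavy work happens on first access, not at startup.
class LazyAsset:
    """Wraps a loader callable; the asset is loaded on first get()."""
    def __init__(self, loader):
        self._loader = loader
        self._asset = None
        self._loaded = False

    def get(self):
        if not self._loaded:
            self._asset = self._loader()  # pay the load cost here, not at boot
            self._loaded = True
        return self._asset

# At startup, registering a loader costs almost nothing.
localization_table = LazyAsset(lambda: {"PLAY": "Play", "QUIT": "Quit"})

# The cost is paid the first time the menu actually needs the data.
print(localization_table.get()["PLAY"])
```

The payoff is exactly the one described above: everything that isn't needed to draw the first menu screen moves out of the critical startup path.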
The console in question has another requirement: two specific screens must be displayed by your game before it progresses into the menu system. Also of interest, the user must be prevented from skipping those screens for a set amount of time. Let's see now -- if we loaded only those two screens, and did most of the menu loading while the player was watching them... voilà -- 5.5 seconds to load and display.
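The trick is just to overlap the two waits: start loading on a background thread the moment the mandatory screens go up, since the player is stuck watching them anyway. A toy sketch (timings and names are stand-ins, not the real engine):

```python
# Sketch: load menu assets on a worker thread while the mandatory,
# unskippable screens are displayed for their minimum duration.
import threading
import time

def load_menu_assets(done):
    time.sleep(0.2)          # stand-in for real asset loading work
    done["menu"] = True

def show_mandatory_screens(min_seconds):
    time.sleep(min_seconds)  # the user can't skip these anyway

done = {}
loader = threading.Thread(target=load_menu_assets, args=(done,))
loader.start()               # start loading immediately...
show_mandatory_screens(0.3)  # ...while the required screens play out
loader.join()                # by the time they end, loading is (usually) done
```

If the load genuinely fits inside the minimum display time, the player perceives almost none of it.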
Unfortunately, that still was not enough to satisfy the letter of the requirement, so we eventually had to ask for a waiver, which was granted (probably because we were so close).
Edward J. Douglas, Flying Helmet Games
I was running cinematics on a long-running racing series. Our scenes were a mix of straightforward start grids and fancier action sequences. As we iterated on further sequels, our ambitions for the scenes grew greater and greater, but our technology iterated at a slower pace.
You see, the cars would be animated by a QA "stunt driver" to get the base motion, then hacked up in a 3D animation program to adjust the timing and placement of the action, and re-exported to our in-game cinematic tool, where playback would simulate all the physics and engine behavior. Lots of gameplay data was captured, including gas and brake information from the controller; this was represented as metadata in the 3D file, then reproduced in-game. The idea was that this would drive the car audio system as well.
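The capture-and-replay scheme amounts to recording per-frame controller values as a keyframe track and sampling that track back during cinematic playback. A minimal sketch of that idea, with all names assumed for illustration:

```python
# Sketch: per-frame gas/brake readings from the controller are stored as a
# time-ordered keyframe track, then sampled back during cinematic playback
# to drive the physics and engine-audio simulation.
def record_input(frames):
    """frames: iterable of (time, gas, brake) tuples read during capture."""
    return sorted(frames)  # keyframe track, ordered by time

def sample(track, t):
    """Return the (gas, brake) pair at the last keyframe at or before t."""
    current = (0.0, 0.0)
    for key_time, gas, brake in track:
        if key_time > t:
            break
        current = (gas, brake)
    return current

track = record_input([(0.0, 0.2, 0.0), (1.0, 0.9, 0.0), (2.0, 0.0, 1.0)])
sample(track, 1.5)   # -> (0.9, 0.0): still full throttle before braking at t=2
```

The fragility described next follows directly from this design: once scenes were hand-animated rather than driven, that recorded metadata simply didn't exist.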
The problem came a few sequels down, when our scenes were very complicated, with a mix of hand-animated car action and recorded capture. The old trick of using the engine metadata to drive simple audio rev samples for our cars wouldn't work anymore -- the data just wasn't there! The audio team couldn't post-process the audio like a movie, because any scene could feature any car, depending on the player's choices and modifications, so the audio needed to be procedural. But by this time, our games had won numerous awards for audio, especially for the car engine sounds, so we were determined to make it work.
Coming up on beta, things were looking dicey, but a combination of ingenuity and madness among our cinematics, audio, and AI engineering teams found the solution. The gas and brake metadata was represented by a float scale on a cube in the 3D scene. If an artist went in and "drew" curves in a keyframe editor like 3DS Max, they could draw in the car engine sounds they wanted. A few members of the audio team rushed to learn 3DS Max, and by using their intuition of how rev patterns should look, they drew in the animation, exported all the scenes, and squeezed it all in just in time. It sounded great, but after this last-minute hack, we knew we'd need something more robust if we were to continue with the same tech base for the next sequel.
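At playback, a hand-drawn throttle channel like that is just a set of keyframes on a float track, sampled with interpolation each frame to feed the procedural engine audio. A sketch of that sampling step, assuming simple linear interpolation (the article doesn't say which interpolation the engine used):

```python
# Sketch: sample an artist-drawn float channel (keyframes on the scale of a
# helper object, as described above) with linear interpolation, to produce
# the per-frame throttle value fed to the procedural engine-audio system.
def lerp_curve(keys, t):
    """keys: sorted (time, value) pairs; returns the interpolated value at t."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return v0 + f * (v1 - v0)

# A hand-drawn "rev pattern": idle, ramp to hard throttle, hold, lift off.
rev_curve = [(0.0, 0.1), (0.5, 0.9), (1.5, 0.9), (2.0, 0.2)]
lerp_curve(rev_curve, 0.25)   # -> 0.5, halfway up the opening ramp
```

The audio folk were, in effect, authoring this input signal by eye: if a curve "looked like" a convincing rev pattern in the keyframe editor, it sounded like one in game.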
...Or so I thought. I left the studio after that game, and a few years later I met an audio guy who had joined that team after I left. It wasn't long before I realized they never upgraded the tech, and he was the "engine rev painter" guy for the latest sequel.