It was the summer of 2008. We were at the end of the production phase of a large AAA game, for which I was the producer. Like many teams before us, we were in quite a predicament: the game was choppy and nearly every level was out of memory. To make things worse, a lot of content still had to go into the game.
As we looked at our options, we realized that we could scale down texture usage fairly easily, but that wouldn't take us all the way; we also needed to go through the models and reduce them. Looking through levels with hundreds and hundreds of assets, we realized this would be a monumental task. Of course, some assets were significantly larger than the rest and could be optimized heavily, but that would reduce quality. We could also remove some models from every level, but once again that would compromise quality. We could split levels, but that would introduce more loading, which would frustrate players.
No matter how we looked at the problem, a lot of manual labor had to be done, and the release date was in jeopardy.
One morning one of the programmers came into my room with his arms raised in the air: "I found an audio buffer that was 8 MB too big, we can scale that one down."
We had caught a lucky break (or had smart programmers who had hidden some memory away for situations like this). If we hadn't found that memory, we would have had to put in several months of work to get the game running efficiently and within memory.
Does this story ring a bell?
Nowadays game teams will go to great lengths to prevent broken builds. It hasn't always been like that. In the early days of game development teams were small, with just a few programmers working on a project. Getting a broken build fixed was just a matter of slapping the person sitting next to you on the head and telling him to fix the problem he caused. The build would be fixed in a matter of minutes.
As the years passed, teams grew larger and code complexity became an issue. A broken build was no longer fast to fix, and its cost became a serious problem, since the whole team might come to a halt. Teams put coding practices in place: rules for what had to be done to verify a check-in and ensure that the code base stayed stable.
However, creating a build was not always that easy either. Usually there was this one guy sitting in his room whom you had to go to when you wanted a new build. He would start it and put the result on a file server, where you could grab it and deploy it to your test station.
Eventually, teams started adding build systems and server farms that made it easy for everyone to start a build and distribute any build to their test systems, all in the vein of making sure that the game is always running.
The asset pipeline has not received the same amount of attention, which is a pity: it's not just about keeping the game running, but about keeping it running in a representative state.
Game asset quality has a huge effect on the performance of the game: a single badly created asset can make a large scene run slowly and cost a lot of time. By a representative state I mean that memory consumption and performance should be indicative of the final game. The sooner the game can be in this state, the fewer problems there will be when getting it ready for shipping.
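One way a pipeline can help keep the game in a representative state is to gate check-ins on a per-level memory budget, flagging the largest assets first. The sketch below illustrates the idea; every name, asset, and budget figure in it is a hypothetical example, not something from an actual pipeline.

```python
# Minimal sketch of a per-level asset memory budget check, the kind of
# automated gate an asset pipeline could run on every check-in.
# All asset names and budget numbers here are hypothetical.

LEVEL_BUDGET_BYTES = 256 * 1024 * 1024  # e.g. a 256 MB per-level budget

def check_level_budget(assets, budget=LEVEL_BUDGET_BYTES):
    """Check a list of (name, size_in_bytes) assets against a budget.

    Returns (within_budget, total_bytes, largest_assets); the largest
    assets are the ones worth optimizing first when the check fails.
    """
    total = sum(size for _, size in assets)
    largest = sorted(assets, key=lambda a: a[1], reverse=True)[:5]
    return total <= budget, total, largest

level_assets = [
    ("castle_diffuse.tex", 96 * 1024 * 1024),
    ("ambience_loop.wav", 24 * 1024 * 1024),  # an oversized audio buffer
    ("hero_model.mesh", 18 * 1024 * 1024),
]
ok, total, largest = check_level_budget(level_assets)
print(ok, total // (1024 * 1024), largest[0][0])
```

A check like this is cheap to run on every build, so a level drifting out of budget is caught at check-in time rather than months later during final optimization.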
In the next post we will look at why content needs to be optimized. You can find it here: