Prior to starting work on Brütal Legend, the Double Fine team had spent the previous five years developing Psychonauts -- the last two years of which consisted of a giant, grueling crunch wherein the company lost its initial publisher and nearly shut its doors before ultimately releasing the game.
When the euphoria of having shipped our first title wore off, it was apparent to all of us that Double Fine did not develop games the way other studios did, and that a different system of product development needed to be put in place.
The main cause of Psychonauts' horrifying crunch was our continued development of game features even after the levels were built. Each improvement to the game mechanics forced a corresponding rework of all of the levels. Lather, rinse, repeat.
Double Fine, and notably Tim, needed to play the game, live it, breathe it, let it steep over time, and iterate continuously on what makes the game fun and funny.
After research into methodologies, we were drawn to the advantages of agile software development and decided to adopt Scrum. Within the first few months of Brütal Legend development, the team was practicing Scrum, and the initial payoffs were impressive.
Scrum's emphasis on features over systems, on rapid prototyping and iteration, on cross-disciplinary teams, on people over process, and on the creation of a potentially shippable piece of software every sprint/milestone made the game playable at a very early stage in development: by month one we had a renderer, terrain, and a playable character (Eddie Riggs), by month two Eddie could drive his hot rod (the Druid Plow) around the terrain, and by month three Eddie could run over endless numbers of headbangers with his Druid Plow around a terrain height field. Hilarity ensued.
We applied Scrum not only to meta-game creation, but to micro-projects as well. At the very start of the development process, we had no idea how to make an RTS, and had no suitable engine with which to make one. We solved both problems by creating prototypes with an off-the-shelf PC engine with which a number of our team members had some familiarity -- Unreal 2.5.
The design demands of Brütal Legend were such that trying to develop the game using an existing FPS engine would have proven difficult, but early access to the flexibility of UnrealScript meant we could test some of our early RTS ideas right on our development PCs.
This approach allowed our designers and gameplay programmers to be immensely productive right away, while the programming team went to work building our new engine. This very early glimpse at the design challenges we would face during development, and the opportunity to iterate on something quickly with UnrealScript, gave us invaluable direction into how to architect our new engine and critical insight into the mechanics that would come to define Brütal Legend.
There was a notable downside to Scrum that bears mention in spite of our success with it. Our implementation of Scrum encouraged a pre-production mindset far too long into production. Scrum encouraged neither defensive programming nor the practice of designing systems that scaled well.
We took a number of systems to 80 percent, enough to prove an idea, or extend and refine it in a future deliverable. But we found in the last few months of development that the remaining 20 percent was another 80 percent of the effort -- leading to some unexpectedly crunchy milestones for the hardest-hit team members. Once the project was in production, the reliability and lower risk of art creation meant a waterfall approach suited that work better than Scrum did.
Even with these caveats, Scrum allowed the team to quickly test wild theories and not only keep the best ones, but also spit-shine them with continuous iteration over the entire course of development. It was a significant process improvement for the studio.
3. Bert's Bots
Double Fine employs a full-time testing army of two. We relied on our two testers to smoke the daily builds of the game, as well as to pound on any new features we intended to get into the next deliverable. It was obvious very early on that two testers alone could not keep up with the stability demands of the game, nor was it financially possible to hire a full-time test staff for the duration of the project. Our solution was to develop an automated testing system, which we affectionately named RoBert, after Bert, our software test engineer.
The automated tests began as an experiment, and the initial system took about a month to put in place. The first simple scripts summoned every character and object in the game. They proved immediately useful in finding warnings and crashes.
Running the tests on a regular basis allowed these crashes to be found in a timely manner, narrowing the cause of each crash down to the most recent changes to the codebase. Programmers started running automated tests locally so that they could test risky code changes before checking them in. Tests would also be run to help reproduce a bug.
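The shape of such a "summon everything" smoke test can be sketched in a few lines of Python. The `spawn` and `destroy` calls below are hypothetical stand-ins for an engine's scripting API (stubbed here so the script is self-contained), and the entity names are illustrative:

```python
# Hypothetical sketch of a spawn-everything smoke test. A real engine
# binding would create the entity in-world and fail on bad assets,
# missing animations, etc.; the stubs below just simulate that contract.

ALL_ENTITIES = ["eddie_riggs", "druid_plow", "headbanger", "razor_girl"]

def spawn(name):
    # Stub: reject obviously bad entity names the way a real
    # engine call might raise or crash on a broken asset.
    if not isinstance(name, str) or not name:
        raise ValueError("bad entity name: %r" % name)
    return {"name": name}

def destroy(entity):
    entity.clear()  # stub teardown

def smoke_test(entities):
    """Spawn and destroy every entity, collecting failures instead of stopping."""
    failures = []
    for name in entities:
        try:
            destroy(spawn(name))
        except Exception as exc:
            failures.append((name, str(exc)))
    return failures

print(smoke_test(ALL_ENTITIES))  # [] means every entity spawned cleanly
```

The key design point is that the loop records failures rather than aborting on the first one, so a single nightly run reports every broken entity at once.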
One particularly crafty programmer came up with the idea to borrow idle Xbox 360s and PlayStation 3s to run automated tests. Team members could always end a test and take their machine back, but it was useful and efficient to use idle machines to run tests 24/7. We estimate that automated tests in the bot farm ran for a combined total of 147,000 hours.
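A farm like this needs only a small coordinator: idle machines join a pool, tests claim them, and an owner can reclaim a machine at any time, abandoning whatever test it was running. This is a minimal sketch under those assumptions; the class and machine names are hypothetical, not Double Fine's actual tooling:

```python
# Hypothetical bot-farm coordinator: idle devkits run tests 24/7,
# but an owner can take a machine back at any moment.

class BotFarm:
    def __init__(self):
        self.idle = set()
        self.running = {}   # machine -> test name

    def donate(self, machine):
        """Owner hands over an idle devkit."""
        self.idle.add(machine)

    def start_test(self, test):
        """Claim any idle machine for a test; None if the pool is empty."""
        if not self.idle:
            return None
        machine = self.idle.pop()
        self.running[machine] = test
        return machine

    def reclaim(self, machine):
        """Owner takes the machine back; returns the abandoned test, if any."""
        self.idle.discard(machine)
        return self.running.pop(machine, None)

farm = BotFarm()
farm.donate("xbox-07")
farm.donate("ps3-03")
m = farm.start_test("1v1_multiplayer")
print(farm.reclaim(m))  # the abandoned test: '1v1_multiplayer'
```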
The automated test system was so successful that we extended it to include tests with two armies battling it out and balance tests to determine how powerful each unit was. Bot functionality was then added so that input could be simulated.
This allowed a test to perform moves exactly as a player would in an actual game. Bots were used to perform attack combos, and a variety of other moves with every squad, as well as to find stuck-spots in the world. They were also used in multiplayer tests, which were invaluable in finding desyncs in our peer-to-peer lockstep networking system.
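A combo-testing bot of this kind amounts to feeding a timed button sequence into the game and checking which move the combat system reports. The combo table and resolution rule below are purely illustrative, not Brütal Legend's actual move data:

```python
# Hypothetical sketch of input-driven combo testing: a combo is a
# sequence of simulated button presses, verified against the move
# the (stubbed) combat system resolves.

COMBO_TABLE = {
    ("light", "light", "heavy"): "earthshaker",
    ("heavy", "heavy"): "pyro",
}

def resolve_combo(inputs):
    # Stub for the game's combat system: longest matching prefix wins.
    for pattern, move in sorted(COMBO_TABLE.items(), key=lambda kv: -len(kv[0])):
        if tuple(inputs[:len(pattern)]) == pattern:
            return move
    return "basic_attack"

def bot_try_combo(pattern, expected):
    """Simulate the button sequence and check the resulting move."""
    return resolve_combo(list(pattern)) == expected

assert bot_try_combo(("light", "light", "heavy"), "earthshaker")
assert bot_try_combo(("jump",), "basic_attack")
print("all combos resolved as expected")
```

Because the bot drives the same input path a player would, a combo that regresses in the combat code fails here before a human tester ever picks up a controller.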
As long as there were available machines, 1v1 and team multiplayer tests could run on a regular basis. Finally, the bot system was expanded to play through the campaign. Test settings were added for playing through the secondary missions, exercising the Motorforge, upgrading equipment, or failing each mission before completion. Memory reporting was soon added to the campaign tests so programmers could track memory usage and leaks.
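Per-mission memory reporting can be as simple as snapshotting heap usage after each mission and flagging both budget overruns and net growth across the run (a steady climb suggests a leak). The numbers, mission labels, and budget below are invented for illustration:

```python
# Hypothetical sketch of campaign memory reporting: one (mission, MB)
# sample per completed mission, checked against a fixed budget.

def report_memory(samples, budget_mb):
    """Flag missions over budget and net growth across the run."""
    lines, over = [], []
    for mission, used_mb in samples:
        lines.append("%-12s %6.1f MB" % (mission, used_mb))
        if used_mb > budget_mb:
            over.append(mission)
    growth = samples[-1][1] - samples[0][1]
    return lines, over, growth

samples = [("mission_01", 212.4),
           ("mission_02", 231.0),
           ("mission_03", 258.7)]
lines, over, growth = report_memory(samples, budget_mb=256.0)
print("\n".join(lines))
print("over budget:", over, "| net growth: %.1f MB" % growth)
```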
Automated tests could be run remotely and crash reports emailed to interested and responsible parties. Doing so significantly simplified running multiple tests simultaneously, and tests were often run using differently-configured builds. Early on, most tests ran using a debug build so programmers could attach to a crashed machine and more easily debug the problem.
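The routing half of such a crash-report mailer might look like the sketch below: map each engine system to an owner, scan the callstack for implicated systems, and address the report accordingly. The owner table, address domain, and frame naming convention are all assumptions, and actual sending is left out:

```python
# Hypothetical sketch of crash-report routing. Frames are assumed to be
# named "system::function"; OWNERS maps a system to its responsible
# programmer. Sending the email itself is deliberately stubbed out.

OWNERS = {"renderer": "anna@example.com", "audio": "ben@example.com"}

def compose_report(machine, build, callstack):
    """Return (recipients, body) for a crash on the given machine."""
    systems = {frame.split("::")[0] for frame in callstack}
    recipients = sorted(OWNERS[s] for s in systems if s in OWNERS)
    body = "Crash on %s (%s build)\n%s" % (machine, build, "\n".join(callstack))
    return recipients, body

recipients, body = compose_report(
    "xbox-07", "debug",
    ["renderer::draw_mesh", "renderer::flush", "core::main_loop"])
print(recipients)  # ['anna@example.com']
```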
Late in the project, tests ran a special release build to find release-only crashes. There were some limitations to the system, such as a crashed machine in the bot farm being rebooted by the machine's owner before a programmer could debug the crash. And the bot never learned to path or drive, nor could tests be run in the pause or front-end menus. We will be making these improvements and additional expansions to RoBert for future projects.