So, you're going to make a Massively Multiplayer Online Game? You've got a kick ass game design, the hottest graphics engine on the market, the best programmers and designers in the industry, and plenty of budget for development. Sounds great, but what about your software tools? How will the content make its way from the game design specifications into the world? Of course, you've thought about the tools ("We need them!"), you've planned for their development time ("Um, a bit?"), and they're going to be easy to use ("Ah, I think so?"), right?
Developing tools for MMOs is similar to running any other software project, albeit on a smaller scale. In some respects, your tools are mini-client applications that interface to a subset of the game's full functionality. It's important to follow some form of standard software methodology when developing tools; throwing them together ad hoc or as needed is just asking for trouble. Well-defined and well-developed software tools will stand the test of time. Remember, long after you move on to "the next big thing," your post-launch Live team will be praising (or cursing) your name for months or years to come.
What exactly is "standard software methodology?" There are all kinds of scientific definitions in software development textbooks, but to me it means "a process by which software can be developed in a reliable and repeatable manner." Here at Turbine Entertainment Software, we develop our software tools using five distinct phases: investigation and requirements gathering, design, prototype and implementation, testing, and polishing. Each of these steps sounds fairly self-explanatory; in the next few pages, I will attempt to identify some of the pitfalls and successes Turbine has encountered at each stage of its tools development process.
Asheron's Call 2: Fallen Kings
Investigation and Requirements Gathering
A period of thorough investigation and requirements gathering provides a solid basis for your software project. This step applies both when creating new software tools and when extending an existing tool with new features. We use three methods when gathering requirements: sitting with the users in order to observe their workflow, having the tool developers try their hand at using the tools to complete the same work as the user, and brainstorming meetings.
We have found that the most useful method by far is observing users at work. Past sessions revealed how productivity ground to a halt whenever users had to switch applications to check files out of revision control. This bottleneck yielded the task: "Integrate automatic revision control checkout functionality into the tools." Trying your hand at a content task while using your own tools is like taking a dose of your own medicine; you will quickly find out what annoying features you may have unwittingly implemented. Brainstorming is my least favorite method, unless we've already sat with the users and seen the process. Why? Our Vice President of Engineering, Chris Dyl, summarizes the topic with this pearl of wisdom: "Never tell an engineer what you want, always show them what you are trying to do. Only then will you get a feature that meets your needs." Too often, brainstorming meetings yield lists of unnecessary features, and not lists of problems and end results.
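The automatic checkout task above can be sketched in a few lines. This is a minimal illustration, not Turbine's actual code, and it assumes a Perforce-style command line client ("p4 edit <file>"); substitute whatever checkout command your revision control system uses.

```cpp
#include <cstdlib>
#include <string>

// Build the checkout command for the user's revision control system.
// Assumes a Perforce-style client; adjust for your own system.
std::string buildCheckoutCommand(const std::string& file)
{
    return "p4 edit \"" + file + "\"";
}

// Called by the tool's save path so the user never has to leave the
// application to check a file out. std::system returns the command's
// exit status; zero means the checkout succeeded.
bool checkoutBeforeSave(const std::string& file)
{
    return std::system(buildCheckoutCommand(file).c_str()) == 0;
}
```

Hooking a helper like this into every save path removes the application switch entirely, which is exactly the kind of workflow fix that only becomes obvious by watching users work.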
After gathering all of the pertinent requirements, the next phase is to organize them. We do this by determining which features will save the users the most time. Using this "bang for the buck" approach generally associates the most difficult programming tasks with the highest time savings. The secondary focus is removing manual processes and automating lengthy or multi-step ones: error checking, file copying, auto-generation of code and data, and so on. I recently re-read Don Moar's Gamasutra article "Growing a Dedicated Tools Programming Team: From Baldur's Gate to Knights of the Old Republic," which discusses many of the issues involved in increasing user productivity. Keeping these guidelines in mind while generating requirements for our software tools maintains a focus on productivity savings. Once the requirements are organized, circulate the completed requirements document for signoff. This important step ensures nothing is forgotten.
Admittedly, most engineers would rather not write documentation. Some say it's boring, involves a lot of typing, and doesn't do anything to help get the "real work" done. In fact, documentation provides two significant functions: organization and communication. Indoctrinating your newly hired engineers into the document writing process not only familiarizes them with the task, it also builds good software practices. When document generation is built into the process, existing engineers are required to complete it as well. A second "signoff" step further reinforces the behavior. The longer you wait to build documentation into your software process, the harder it will be.
With a solid set of requirements in documented form, scheduling can take place. This bleeds over into the design phase of software development, since we often do some investigation into methods and ideas used by existing code. This "scheduling investigation" gives us a rough idea of how much time any particular requirement or task will take. With tools development, however, it is important to balance feature implementation with bugfixing. Regardless of how well code is written, there will be tweaks and bugs that require immediate fixing. Taking the time to schedule those interruptions will keep the schedule sane later on. Additionally, build some padding into your schedule: depending on the perceived difficulty of the tasks as estimated during investigation, anywhere from twenty to fifty percent extra will cover most crises. Plan for this padding. If you find yourself with extra time in the middle of your development schedule, filling the gaps with smaller improvements or "tier 2" features will garner further appreciation from your users.
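The padding rule above is simple arithmetic, and encoding it keeps estimates honest. The difficulty tiers and exact percentages below are illustrative assumptions, not Turbine's actual figures; the point is that the padding scales with perceived risk.

```cpp
// Pad a task estimate by 20-50% depending on perceived difficulty,
// as judged during the investigation phase. Percentages here are
// example values within the range discussed in the text.
enum Difficulty { Easy, Medium, Hard };

int paddedDays(int estimateDays, Difficulty d)
{
    switch (d)
    {
        case Easy:   return estimateDays + estimateDays * 20 / 100;
        case Medium: return estimateDays + estimateDays * 35 / 100;
        case Hard:   return estimateDays + estimateDays * 50 / 100;
    }
    return estimateDays;
}
```

For example, a ten-day task judged Hard is scheduled as fifteen days; the extra five absorb the inevitable tweak-and-bugfix interruptions.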
The Design Phase
The design phase is where your users will find out if you really understood what they were asking for. It is also the last chance to change your mind about requirements and functionality in order to avoid a costly rewrite. We like to create two documents at this stage: a high level design signed off by the content users, and a detailed design for the engineers. The high level design document contains "use cases" and dialog box mock-ups. A "use case" is a description of how the user will exercise one facet of the functionality the software tool provides. Our use cases contain a simple description of a single user action that begins the process, along with the expected results. Note that the use case does not attempt to tell the engineer how the internals of the process should be designed, but it does give the user some idea of what is happening as they perform each action.
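A hypothetical use case in this style (the feature and details are invented for illustration) might read:

```text
Use Case: Place a static object in the world
Actor:    Content designer
Action:   The designer selects an object from the palette and
          clicks a location in the 3D viewport.
Result:   The object appears at the clicked location, is added to
          the current region file, and is flagged as needing save.
```

Note that nothing here dictates how placement is implemented internally; it only pins down the action the user takes and the results they should observe.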
Tools development often involves mock-ups of dialog boxes and other user screens. These are quickly prototyped with Microsoft DevStudio's dialog box editor and included in the design document as screen captures. We often solicit early feedback on these dialog boxes and prototype the look and feel of data entry before adding them to the document. Descriptions of the dialog box fields, error validation, and 'OK/Cancel' button processing are typically included to flesh out the graphics. As with the requirements specification, signoff is received via the high level design document.
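The error validation described for a dialog's 'OK' button boils down to a routine like the following sketch. The dialog, field names, and limits are hypothetical; the shape — validate every field, surface a specific message, and only then accept the dialog — is the part worth standardizing.

```cpp
#include <string>

// Hypothetical 'OK' button validation for a spawn-point dialog.
// Returns true when the dialog may close and commit its data;
// otherwise fills 'error' with a message to show the user.
bool validateSpawnDialog(const std::string& name, int respawnSeconds,
                         std::string& error)
{
    if (name.empty())
    {
        error = "Name must not be empty.";
        return false;
    }
    if (respawnSeconds < 1 || respawnSeconds > 86400)
    {
        error = "Respawn time must be between 1 and 86400 seconds.";
        return false;
    }
    error.clear();
    return true;  // safe to close the dialog and commit the data
}
```

Writing the checks out like this in the design document makes the validation rules reviewable by the content users before any UI code exists.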
The detailed design document is more for the developer than the content users, and as such can be more technically oriented. Many software development efforts skip the detailed design phase, but it is an important one. During detailed design, a single use case is taken from the high level design document and broken down to the file and module level. This is the time when APIs for new systems are specified, and multiple engineers can brainstorm the best way to implement an optimal solution. The need to examine existing code for something similar, or to factor out a common function, becomes apparent here. By providing early visibility into problems, the detailed design phase allows the engineer to avoid obstacles that would otherwise become roadblocks during implementation. Spending a little time during the design phase to investigate the complexities of those obstacles saves the engineer time and frustration later on.
The debate rages about whether a detailed design document is necessary, or whether you can jump right in and begin coding. Some engineers prefer to iterate over a "prototype-design-prototype-implement" cycle, while others prefer to prototype while creating the design document. Some amount of detailed design is necessary; the detailed design document helps in organizing code flow, indicating where common code is being written, documenting the task at a source file and module level, and providing a roadmap of distinct steps needed to complete the process. When our engineers get interrupted to fix a problem or re-tasked to a high priority feature, they know that there is a document available that will quickly reset their frame of mind.
Another hot topic is the "living document" debate. Are your design documents updated as time goes by? Or are the documents set in stone? Our high level design document is used as the "set in stone" version; the requirements and design ought to be good enough that the document doesn't need revision during the implementation phase. Additionally, the users and the engineers have agreed on everything in that document, so we all have a common understanding of what is going to be provided. The detailed design is used as the "change over time" version; since it's only for engineering use, the engineer may update it as necessary modifications are encountered during implementation. A good detailed design has easily saved a third of the estimated development time; never mind the headaches that it helps avoid.
Prototype and Implementation
Now that there are solid requirements and a detailed design document, implementing and prototyping the tool should be a piece of cake, right? Well, yes and no. It is true that by following your detailed design, you could implement the functionality desired. Coding is generally straightforward - there is a problem to solve and a variety of ways to do so. Therein lies the problem: given a hundred ways to code a function, a hundred different developers will code it two hundred different ways. How do you provide some amount of consistency in your tool software? Turbine does so with its coding standards and peer reviews.
"Coding standards" are a well defined set of rules that govern how code is to be written at your company. Naming conventions for variables and members of classes, tab/space indentation schemes, and bracket and brace placement are typical items found in a coding standards document. These coding standard items do increase the readability of the code; an important factor with a large team writing an MMO. However, a coding standard can and should provide as much information about the software writing phase as possible. Some questions you should ask: Does your coding standard have a common copyright blurb set into every code file? If so, what is it? Are there library, file, and function naming conventions? Should your header files conform to a document generation tool such as Doxygen? It is hard to believe that some companies have no standard method for writing software. Coding standards are not only for ISO-9000 or Department of Defense contractors; they play a large part in the development of any software project. If your company plans to license your engine at some point, having and adhering to a strict set of coding standards will provide your customers with a clean, consistent, and uniform API and codebase.
Peer code reviews ensure that multiple sets of eyes (and presumably brains) are looking at every piece of code. The formality of the review process is up to you. At Turbine, we generally have another engineer review code changes at the author's desk prior to checking the code into revision control. Occasionally, a change affects multiple systems and several reviewers are solicited. Often, an engineer will spot his or her own flaws while explaining the code changes to someone else. Because the code is being reviewed at the author's desk, criticism can be given constructively; code reviews held in a conference room with a sheaf of printouts can feel combative to the author. The list of changes spotted by the reviewer is noted either inline or in a separate file, then addressed prior to check-in. The peer review process has an added benefit: it encourages knowledge sharing, since more than one person has visibility into any system we write.