In this chapter, we discuss two important questions in development and
provide a single answer for both. They turn out to be fundamental not
only to the logical structure of the code development process but also
to the production methodology.
Here are the questions:
- What order should tasks be performed in?
- When is the game finished?
The worst thing in the world as far as development is concerned is to be
writing system-critical code towards the end of a project. Yet this is
such a common occurrence you would think someone would have spotted it
and put a stop to it. Not only will (and does) it induce huge amounts
of stress in the team, it is absolutely guaranteed to introduce all
sorts of problems in other systems that were considered stable.
We would like to pick an order to perform tasks in that does not lead
to this horror. Ideally, we would like to be able to know in advance
which are the tasks and systems that we need to work on first and which
are those that can wait a while. If we cannot attain this ‘ideal' state
– and I would be foolhardy to suggest we can – we can certainly do
better than writing critical-path code during Alpha or Beta phases in a panic.
How long is a piece of virtual string?
Although a game is a finite piece of software, it is rather tricky to describe
criteria for “completion.” It is almost universally true that the
functionality and features of games that we see on store shelves are
only some percentage of the development team's ambitions. More will
have been designed than implemented, and not all that was implemented
will have been used. Given then that games rarely reach the “all done”
mark, how are we to decide if a game is releasable? What metrics are
available to inform us how much is actually ‘done and dusted'?
There is also a problem of scheduling sub-tasks: say a programmer (call her Jo)
has said it'll take 10 days to write the ‘exploding trap' object, and
that she's 4 days into this time. Is her task 40% complete? It's very
hard to tell, especially since we cannot see the trap exploding till
maybe day 9 or 10. But let's be optimistic, and suggest that Jo works
hard and gets the job ‘done' in 8. A profit of +2 days is marked up, the
task is marked as complete, and everything looks hunky-dory for the
project.
Later on, it turns out that the trap needs to be modified since (say) it
needs to be able to trap larger objects. It's another 4 days of work
for our Jo, and now we have a deficit of –2 days, and suddenly the
project starts to look like it's slipping.
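The naive bookkeeping in Jo's example can be sketched in a few lines. This is a minimal illustration of the arithmetic only; the function names are my own, not a real scheduling API.

```cpp
// Illustrative arithmetic for Jo's task: the naive percent-complete
// metric, and the profit/deficit as the actual time diverges from
// the estimate. Function names are hypothetical.
int PercentComplete(int daysSpent, int daysEstimated)
{
    return 100 * daysSpent / daysEstimated;  // 4 days into 10 -> 40
}

int ProfitDays(int daysEstimated, int daysActual)
{
    return daysEstimated - daysActual;  // positive = ahead of schedule
}
```

With these, Jo's task reads as 40% complete at day 4, shows a +2 profit when ‘done' in 8 days, and swings to a –2 deficit once the extra 4 days of rework land.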
The point is this: most objects in a game rarely get written just once.
We'll revisit them over the course of a project to fix bugs, add and
remove features, optimise and maybe even rewrite them entirely. This
isn't a pathological behaviour: almost all significant software systems
grow and evolve over the course of time. How naïve then does the ‘4
days in, 40% complete' metric look? Pretty damn naïve, to put it
politely. What we really need is a system that allows time and space
for evolution without driving projects into schedule loss and the
resulting state of semi-panic that characterises most development projects.
Milestones round my neck
Almost all software development (outside of research, which by its nature is
open-ended) is driven by some kind of milestone system. Let me state
unequivocally now that this is a good thing: the days of anarchic
commercial software development should be buried and remain so.
Nevertheless, the fact that it is a ‘good thing' does not mean
that it doesn't come with its own particular set of pros and cons. In
particular, if we accept (for all the ‘pro' reasons) that
milestone-driven development is the way to go then we must also pay
attention to the ‘con' side that will inevitably frustrate our attempts
to make the process work with the efficiency we require for delivery on
time, within budget.
One of the most difficult cons games developers have to deal with is the
different way that milestones and the associated schedules are
interpreted by production teams and management. As most of those who
have worked with non-trivial software products, or in fact any large
project that requires multiple bespoke interacting component parts
spanning a variety of disciplines, have come to realise, schedules
represent a team's best guess at how the product will evolve over time.
On the other hand, management – perhaps unused to the way that schedules
are produced, perhaps because they require correlation of studio
funding with progress – often read the document completely differently.
They see the document almost as a contract between themselves and
developers, promising certain things at certain times.
This disparity between seeing a schedule as a framework for project
evolution to facilitate tracking, and as a binding agreement to deliver
particular features at particular times, causes much angst for both
developers and managers. The former often have to work ridiculous hours
under pressure to get “promised” features out. The latter have
responsibility for financial balances that depend on the features being delivered.
Internal and external milestones
We can see that there are some basic premises about milestones that need to be addressed:
- Developers who do not work to milestones that mark important features
becoming available in the game will not be able to deliver on time.
- Developers who are held to unrealistic milestones will not be able to
deliver on time, irrespective of how financially important or lucrative
those milestones may be.
- Managers need to know how long the team thinks development will be and
what the important markers are along the way. Without this there can be
no business plan and therefore no project.
Generally, the sort of milestones that managers need to be aware of are ‘cruder' or
‘at a lower granularity' than the milestones that developers need to
pace the evolution of the product. We can therefore distinguish between
‘external' milestones, which are broad-brush descriptions of high-level
features with granularity of weeks (maybe even months), and ‘internal'
milestones that are medium- and fine-level features scheduled in days and
weeks. Management therefore never need to know the internal mechanisms that generate the
software. To adopt a programming metaphor, the team can be viewed as a
‘black box' type of object with the producer as its ‘interface'. There
are two types of question (‘public methods', to extend the analogy)
that a manager can ask of a producer:
- “Give me the latest version of the game”
- “Give me the latest (high-level) schedule”
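The ‘black box' metaphor can be sketched directly in code. Everything here is illustrative: the class and method names are my own invention, not an API from the text.

```cpp
#include <string>

// Hypothetical sketch of the team as a 'black box' object, with the
// producer as its public 'interface'. Only the two permitted questions
// are exposed; internal milestones and task ordering stay hidden.
class Producer
{
public:
    // "Give me the latest version of the game"
    std::string LatestBuild() const { return "build-previous-week"; }

    // "Give me the latest (high-level) schedule"
    std::string LatestSchedule() const { return "external-milestones"; }

    // No public access to internal milestones, task lists or team
    // dynamics: those are private implementation details of the team.
};
```

The point of the sketch is the access control, not the return values: management interacts only through the two ‘public methods'.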
This is an unrealistically simple example of interaction between production
and management. The latter will want to know issues of team dynamics,
why things are running late (as they inevitably seem to), and a whole
host of other project-related information. However, it draws a fuzzy –
but distinguishable – line in the sand between the scheduling of
features and accountability for their development.
The breaking-wheel of progress
There is one other important sense in which management and developers perceive
milestones differently. It is based on the concept of ‘visibility' and
is without doubt the biggest millstone (half-pun intended) around
developers' necks this side of Alpha Centauri.
Almost ubiquitously in the industry, management refuses to regard features
that they cannot see (or perhaps hear) within a short time of picking
up the game as seriously as the obviously visible (or audible)
ones. For those of us who work on AI, physics, memory managers,
scripting systems, maths, optimisation, bug-fixing and all those other
vital areas of a game's innards that are not open to visual inspection,
this is particularly galling. To spend weeks and months working on
hidden functionality only to have the team's work dismissed as
‘inadequate' because there was no new eye-candy is an all too common
experience.
The education of managers in the realities of development is a slow,
ongoing and painful process. Meanwhile, we developers have to work with
what we are given, therefore it remains important to – somehow! – build
ongoing visible / audible progress into the development of the project.
There is an intimate relationship between the concept of visibility and of
completeness. Many tasks may not become tangibly present until they are
‘complete'. Saying that something is ‘40% complete', even if that were
a rigorously obtained metric, might still amount to ‘0% visible'. So
we'll only be able to fully address the issue of progress when we deal
later with determining ‘completeness' for a task.
Always stay a step ahead
Despite our best – though sometimes a little less – efforts, we will slip. We
shall deliver a feature late or perhaps not even at all, and if the
management is in a particularly fussy mood then there may be much
pounding of fists and red faces. Worse than showing no visible progress
would be to show retrograde progress – fewer features apparent
than a previous milestone. Nevertheless it is a common and required
ability for projects to arbitrarily disable and re-enable particular
functionality within the code base. With the advent of version control
systems, we are now able to store a complete history of source code and
data, so in theory it is always possible to “roll back” to a previous
version of the game that had the feature enabled.
Just because it's possible, does that make it desirable? In this case, yes.
Indeed, I would argue that working versions of the game should be built
frequently – if not daily, at least weekly – and archived in
some sensible fashion. When the management asks production for the
latest version of the game (one of their two allowed questions from the
previous section), then the producer will return not the current (working) build but the one previous to that.
Why not the current working build? Because it is important to show
progress, and development must ensure that to the best of their ability
the game has visibly improved from one iteration to the next. If it
becomes necessary – and it usually does – to spend time maintaining,
upgrading, optimising or rewriting parts of the code base, then
releasing the next-but-one working version gives another release with
visible improvements before we hit the ‘calm' spot with no apparent progress.
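The next-but-one rule is simple enough to state as code. This is a minimal sketch under my own naming; it assumes builds are archived oldest to newest.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the next-but-one rule: given working builds archived oldest
// to newest, management is shown the build before the current working
// one, keeping one set of visible improvements in hand.
std::string BuildToShow(const std::vector<std::string>& archive)
{
    assert(archive.size() >= 2 && "need at least two archived builds");
    return archive[archive.size() - 2];
}
```

For an archive of weekly builds {week-40, week-41, week-42}, the build handed to management is week-41, not the current week-42.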
From one point of view, this is a ‘sneaky manoeuvre'. It's no more sneaky than (say) insuring your house against flood.
Publishers and managers always want to see the ‘latest' version and a
development team itching to impress may well be tempted to show them
it. Resist this urge! Remember, development should be opaque to
management inspection other than through the supplied ‘interface'.
Anything else is just development suicide.
So we've decided that rather than work specifically to release code at
external milestones, we'll supply ‘work in progress' builds at these
times. Internally we'll be working to our own schedule. How should we
organise this schedule?
We start by assuming that there is a reasonably comprehensive design
document for the game (believe me, you'd be surprised the number of
times there isn't). This document should describe, in brief, what the
game is about – characters (if any), storyline (if any), situations and
rules. Step one to producing an internal schedule is to produce the
Object Oriented design diagram for the game. We are not interested here
in the diagram specifying interrelationships between the objects; the
end goal is simply to produce a big list of all classes that map
directly to concepts in the game. Auxiliary classes such as containers
and mathematical objects need not apply – we are only looking for
classes that map to game-level concepts.
Once we have produced this list, it needs to be given back to the design
team, as step two is really their call. They need to classify all the
objects in the list (I'll use the terms ‘objects' and ‘features'
interchangeably in this section) into the following three groups:
Core features
These are features that form the basis for the game. Without them there
is only a basic executable shell consisting of (some of): startup code,
rendering, memory management, sound, controller support, scripting
support, resource management, etc. Should any of these ‘non-game'
systems require engineering then they should be added to the core
group, which will otherwise contain the most fundamental objects. For
definiteness, consider a soccer game. The most fundamental objects are:
- player (and subclasses)
- stats (determining player abilities)
- pitch (and zones on the pitch)
An executable that consists of working versions of these objects (coupled
to the non-game classes) is generally not of playable, let alone
releasable, quality.
Required features
This group of features expands the core functionality into what makes
this game playable and unique. Often these features are more abstract
than core features. They will embody concepts such as NPC behaviour,
scoring systems and rules. Also they will pay some homage to the
particular genre the game will fit into, because rival products will
dictate that we implement features in order to compete effectively. To
continue the soccer example, we might place the following features in this group:
- AI for Player subclasses.
- Referee (either a visible or invisible one that enforces rules)
- Crowd (with context-dependent sounds and graphics)
- Knockout, league and cup competitions.
A game consisting of core and required features will be playable and releasable. Nevertheless, it should be considered the minimal amount of content that will be releasable, and still requires work if the game is to be near the top of the genre.
Desired features
These are features that provide the ‘polish' for the game. This will
include such things as visual and audio effects, hidden features and
levels, and cheats. Features in this group will not alter gameplay in
significant ways, though they will enhance the breadth and depth of the
playing experience and (as with required features) the competition may
dictate their inclusion.
Depending on the type of game, these may be game-related objects. For
example, in the soccer game, having assistant referees would be a desired
feature, as the game will function just fine without them.
The end result is a list of features that is effectively sorted in terms of
importance to the product. It is tempting to say that the optimal order
of tasks is then to start at the top – the most important ‘core' tasks
– and work our way down. We carry on completing tasks until we run out of time.
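The classified, importance-sorted list can be sketched as follows. The `Priority` enum and `Feature` struct are my own illustrative types, not from the design document.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sketch of the classified feature list, sorted core-first. The
// three groups map to an ordered enum so that sorting by priority
// places the most important tasks at the top of the list.
enum class Priority { Core = 0, Required = 1, Desired = 2 };

struct Feature
{
    std::string name;
    Priority    priority;
};

std::vector<Feature> SortByImportance(std::vector<Feature> features)
{
    // stable_sort preserves the designers' ordering within each group
    std::stable_sort(features.begin(), features.end(),
                     [](const Feature& a, const Feature& b)
                     { return a.priority < b.priority; });
    return features;
}
```

With the soccer examples, Player (core) sorts ahead of Referee (required), which sorts ahead of AssistantReferee (desired).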
It's close, but there's no cigar for that method. There are fundamental
problems in organising work this way. There is little evidence of
anything that resembles ‘continual progress'. In the pathological case,
the game is in bits for the entire development cycle until just before
the end when the bits are pulled together and – hopefully! – fit. This
is guaranteed to have producers and management biting their knuckles
with stress. Furthermore, the most likely outcome is that components do
not work together or have unforeseen side-effects that may involve
radical re-design very late on in the project.
Certainly, it is correct to do the most important tasks first and the superficial
tasks last (time allowing). But if we wish to show continual
improvement of the product, we shall need to be a little smarter. So we
shall progress to the third phase of the Iterated Delivery method (the
actual ‘iterated' part). We'll start again with the list of features,
which, because an Object Oriented design process has generated them, map
directly to classes.
Consider just one of these classes. How does it start off its life? Usually something like this:
// File Player.hpp
class Player {};
// File Player.cpp
#include "Player.hpp"
Over the course of the product development, much will be added, much will also be removed, but generally the object evolves.
This evolution can occur in one of two ways: firstly it can start with
zero functionality and end up fully implemented. This is possible, but
not very common. More realistically, the object is either fully or
partially re-written to have more complex or more robust or more
efficient behaviour over the duration.
So far, so obvious. But consider the formalisation of the principle that
objects evolve: instead of evolving the feature from zero functionality
at the start to full functionality at the end, consider writing successive versions of the object. We define the following four versions of the feature:
- The null version:
This is the initial version of the object interface with no
implementation (empty functions). Note that this is a complete project
that can be compiled, linked and run, albeit not doing anything.
- The base version:
This has a working interface and shows ‘placeholder' functionality.
Some of the required properties may be empty, or have minimal
implementation. For example, a shadow may be represented by a single
grey sprite; a human character may be represented by a stick-man or a
set of flat-shaded boxes. The intent is that the object shows the most
basic behaviour required by the design without proceeding to full
implementation, and therefore integration problems at the object level
will show up sooner rather than later.
- The nominal version:
This iteration of the feature represents a commercially viable object
that has fully implemented and tested behaviour, and is visually
acceptable. For example: the shadow may now be implemented as a series
of textured alpha-blended polygons.
- The optimal version:
This is the ultimate singing-and-dancing version, visually
state-of-the-art and then some. To continue the shadow example, we may
be computing shadow volumes or using projective texture methods.
We'll refer to the particular phase an object is in at any point in the project as the level of the class. A level 1 object has a null implementation; a level 4 object is optimal.
Some points to note: first of all, some objects will not naturally fit into
this scheme. Some may be so simple that they go straight from null to
optimal. Conversely, some may be so complex that they require more than
four iterations. Neither of these scenarios presents a problem for us,
since we aren't really counting iterations per se. We're
effectively tracking implementation quality. In the case of an
apparently simple object, we can only test it effectively in the
context of any associated object at whatever level it's at. In other
words, systems and subsystems have a level, which we can define
slightly informally as:
L(subsystem) = min_j L(object_j)
L(system) = min_i L(subsystem_i)
with L() denoting the level of an object, system or subsystem. Applying this idea to the application as a whole,
L(application) = min_k L(system_k)
or in simple terms, the application's level is the smallest of its constituent object levels.
Now we need to put the ideas of level and of priority together to get some
useful definitions, which form the basis of Iterated Delivery.
An application is defined as of release quality if and only if its required features are at the nominal level.
An application is referred to as complete if and only if its desired features are at the optimal level.
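The two definitions combine priority and level into simple predicates. This sketch assumes that core features must also reach the nominal level for release quality (the text names only required features, but a release with core features below nominal would not function); the types are illustrative.

```cpp
#include <vector>

// Sketch of the two Iterated Delivery definitions, combining the
// priority groups with the implementation levels.
enum class Priority { Core, Required, Desired };
enum class Level { Null = 1, Base = 2, Nominal = 3, Optimal = 4 };

struct Feature { Priority priority; Level level; };

// Release quality: every core and required feature at nominal or better.
// (Treating core as implicitly included is an assumption of this sketch.)
bool IsReleaseQuality(const std::vector<Feature>& features)
{
    for (const Feature& f : features)
        if (f.priority != Priority::Desired && f.level < Level::Nominal)
            return false;
    return true;
}

// Complete: every desired feature (all the polish) at the optimal level.
bool IsComplete(const std::vector<Feature>& features)
{
    for (const Feature& f : features)
        if (f.priority == Priority::Desired && f.level < Level::Optimal)
            return false;
    return true;
}
```

A game whose required features are nominal but whose desired features are still at the base level is releasable yet far from complete, which is exactly the sliding scale described next.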
With these definitions, we see that there is a sliding scale that starts
from a barely releasable product all the way up to implementing and
polishing every feature the design specifies. The product just gets
better and better, and – provided that the tasks have been undertaken
in a sensible order – can be released at any time after it becomes of release quality.