[In this Gamasutra article, Nintendo and Microsoft Game Studios veteran Wilson talks about the value of diverse video game testing, suggesting a formula to make sure that your game debuts with the fewest bugs possible.]
The need for wide-spectrum testing
In the process of software
development, there are constant pitfalls and perils to avoid, for
application developers and game developers alike. Software testing, one of the
most resource-consuming stages of the development cycle, possesses more than
its fair share of these problems, as well as tending to have an unfortunate
stigma attached to it.
Some developers consider it
a short process to find glaring errors, a necessary evil, or a secondary
concern. Consumers consider it to be the stage in which any problem they have
with the product should have been found (often rightly so). Testers themselves
often consider the testing stage to be rushed or insufficient once it ends.
The biggest misconception about software testing is that any one method of
testing is better than another. There is both an art and a science to software
testing, and neither of them should be ignored.
Testing a strict set of
conditions or performing seemingly random tests just aren't enough by
themselves, no matter how extensive the process becomes; both the art and the
science are needed to find as many of the bugs as possible, leaving the
software as functional and polished as possible.
The differences between ad-hoc testing and test cases
Ad-hoc testing, also known as free testing, is a style of testing that
definitely falls into the artistic side of the spectrum. This form of testing
is most often used in game testing, but it can also be found in consumer-use
panels and focus groups, where individuals are brought in to try new software
with very broad goals that they are directed to complete.
As a style, it is very fluid
and often seems random; a game tester may progress along half of a level as the
developer intended, only to attempt to jump through a crack in the environment,
causing their character to leave the bounds of the environment and become
stuck, unable to return to the normal flow of play.
When testing a piece of
presentation software, a tester may attempt to loop through the entire
presentation rapidly multiple times, not allowing time for the images or videos
to load, which may stress the available memory to the point that the software
stops responding and locks up.
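That rapid-looping scenario can even be automated as a stress test. A minimal sketch, assuming a hypothetical SlideDeck class as a stand-in for the presentation software; the cache, eviction policy, and memory budget are all illustrative, not any real product's API:

```python
import tracemalloc

# Toy stand-in for presentation software: each advance "loads" a large
# image into a cache. Evicting old entries keeps rapid looping from
# exhausting memory; without eviction this is exactly the lock-up bug.
class SlideDeck:
    def __init__(self, slide_count, cache_limit=5):
        self.slide_count = slide_count
        self.cache = {}
        self.cache_limit = cache_limit
        self.index = 0

    def advance(self):
        self.index = (self.index + 1) % self.slide_count
        self.cache[self.index] = bytearray(1_000_000)  # ~1 MB "image"
        while len(self.cache) > self.cache_limit:
            self.cache.pop(next(iter(self.cache)))  # evict oldest entry

def stress_loop(deck, cycles=100):
    """Rapidly loop the whole deck, as an impatient user might."""
    tracemalloc.start()
    for _ in range(cycles * deck.slide_count):
        deck.advance()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak  # peak bytes allocated during the loop
```

A thousand rapid advances through a ten-slide deck should stay within a small, bounded memory footprint; if the eviction step is removed, the peak climbs with every pass, which is the failure the tester provoked by hand.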
While these seem like random things to try during the testing process, they are
things that may occur in the real world. This is where ad-hoc testing becomes
an art: finding things that the end-user may attempt that the developers haven't
anticipated. This may seem simple enough, but the amount of creativity
necessary for this form of testing can sometimes seem staggering. For example, entering a
dungeon in a game, letting it populate halfway, then deciding to go back to
town and save the game quickly may lock up the game -- but if a player forgets
to save their game before entering a dungeon, it could definitely happen.
Plans can certainly be made
to test these sorts of situations, but there's no effective way to plan for
them all. This is where ad-hoc testing is the most useful: testing situations
that may otherwise occur after release that weren't planned for during
development. The number of unusual actions good testers will try when left to
their own devices can be surprising, and they will often find a fair number of
problems that can then be corrected before release.
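Some of this poking around can be partly automated as randomized "monkey" testing. A minimal sketch, using a toy level model with a deliberately planted out-of-bounds bug; the class, actions, and invariant are hypothetical illustrations, not a real engine's interface:

```python
import random

# Toy model of a level: the player moves on a 10x10 grid and should never
# leave it. A deliberately planted bug lets "jump" on the east edge clip
# out of bounds, the kind of crack an ad-hoc tester stumbles into.
class Level:
    SIZE = 10

    def __init__(self):
        self.x, self.y = 0, 0

    def act(self, action):
        if action == "left":
            self.x = max(0, self.x - 1)
        elif action == "right" and self.x < self.SIZE - 1:
            self.x += 1
        elif action == "jump" and self.x == self.SIZE - 1:
            self.x += 1  # bug: jumping at the edge escapes the level

    def in_bounds(self):
        return 0 <= self.x < self.SIZE and 0 <= self.y < self.SIZE

def monkey_test(seed, steps=1000):
    """Fire random inputs at the level, like a tester poking at everything."""
    rng = random.Random(seed)
    level = Level()
    for step in range(steps):
        level.act(rng.choice(["left", "right", "jump", "wait"]))
        if not level.in_bounds():
            return step  # found an out-of-bounds state
    return None
```

Random input alone doesn't replace a human tester's intuition, but checking a simple invariant (the player stays in bounds) over thousands of random action sequences catches exactly this class of escape bug.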
While this sounds great, and
it often finds some major issues that would otherwise have been unnoticed,
there are a few problems with this method. It's almost impossible to cover all
of a piece of software's functionality in this way; there's often too much
space to cover to allow a testing team to perform free testing without any
guidance. Also, most testers are likely to
focus on areas that they have a preference for over others, which
will leave some areas of the software with less coverage. To diminish this
problem, many advocates for this style of testing temper their test plans by
assigning ad-hoc testers to specific portions of the software, but this still
isn't enough to compensate for the lack of disciplined testing.
Test cases are the counterpoint to ad-hoc testing;
where ad-hoc testing seems random, test cases are strict and disciplined. They
are used to go over a function, which can be as simple as moving between cells
in a spreadsheet or as complex as casting an intricate spell at a group of
enemies, from every point of view and with each command style that the writer
of the test cases can think of.
This is the scientific side of testing. Developers and test leads will produce
a list of tests to be performed based on the functions in the product, which
functions interact with other functions, what different parameters there are
for each function, and so on.
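Such a list often takes a table-driven form: each case pairs a function input with its expected outcome. A minimal sketch, with a hypothetical move_cell function standing in for a spreadsheet's cursor logic (the grid size and case table are illustrative assumptions):

```python
# Hypothetical spreadsheet cursor logic: movement clamps at the edges of a
# 100-row by 26-column grid rather than wrapping or crashing.
def move_cell(pos, direction, rows=100, cols=26):
    deltas = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    dr, dc = deltas[direction]
    r, c = pos[0] + dr, pos[1] + dc
    return (min(max(r, 0), rows - 1), min(max(c, 0), cols - 1))

# Each written test case: (input position, command, expected result),
# including the edge conditions a case writer would enumerate.
TEST_CASES = [
    ((0, 0),  "up",    (0, 0)),    # top edge: movement clamps, no crash
    ((0, 0),  "left",  (0, 0)),    # left edge
    ((5, 5),  "down",  (6, 5)),    # ordinary move
    ((99, 3), "down",  (99, 3)),   # bottom edge
    ((0, 25), "right", (0, 25)),   # rightmost column
]

def run_cases(cases):
    """Return every case whose actual result differs from the expectation."""
    return [(pos, d, expected, move_cell(pos, d))
            for pos, d, expected in cases
            if move_cell(pos, d) != expected]
```

The discipline comes from the table, not the runner: every edge and ordinary case the writer thought of is executed the same way on every pass, whether the tester is fresh or bored.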
Test cases perform where
ad-hoc testing often falls short: they ensure that the most common actions that
will be performed are tested in a large variety of ways in every area of the
software. This alone is a boon to the testing process, as the color palette that
an ad-hoc tester may take for granted in the software they're testing may have
an incorrect variable call in just one format style, resulting in the desired
blue becoming green. A test case to check the color implementation for each
color in each format would easily catch such problems.
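A sketch of that color test case, with the blue-becomes-green bug from above planted in one format; the palette, renderer, and format names are hypothetical:

```python
# Reference palette: the value every format is expected to produce.
PALETTE = {"blue": (0, 0, 255), "green": (0, 255, 0), "red": (255, 0, 0)}

def render_color(name, format_style):
    """Stand-in renderer with a deliberate bug in a single format style."""
    if format_style == "legacy" and name == "blue":
        return PALETTE["green"]  # the incorrect variable call described above
    return PALETTE[name]

def color_case_failures(formats=("default", "legacy", "print")):
    """Run the full format x color matrix and report every mismatch."""
    return [(fmt, name)
            for fmt in formats
            for name in PALETTE
            if render_color(name, fmt) != PALETTE[name]]
```

Because the case covers every (format, color) pair rather than whichever combination a free tester happens to see, the single bad lookup is caught on the first run.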
The amount of coverage a title receives through test cases is dependent upon
the people writing the cases. This coverage can be very extensive, especially
when the test cases are written by people with years of experience and an
in-depth knowledge of the functions that need to be tested, but nobody can account
for everything that an end-user may attempt.
There are just too many random
variables to be considered for test cases to cover every possible occurrence.
It's also important to note that some testers may be easily bored by such
strict testing protocols, which in rare cases could result in the test cases
not being completed properly.