[In this opinion piece, former Westwood Studios/EA executive Steve Wetherill, currently running mobile game firm Uztek, weighs in on "poor QA processes" that he's seen, offering eight major steps to improving the speed and efficiency of game testing.]
Over the years, I've dealt with many different video game development QA departments. At Westwood Studios, the QA director reported to me for several years, and we shipped many significant titles, including the Command & Conquer series.
I've also been a customer of various QA departments: as an independent developer working for many different publishers, dealing with platform owner (Sony, Nintendo, Sega, Microsoft) test departments, and dealing with "uber QA departments" within large publishers who would QA the work of the QA department (sometimes called QC, for Quality Control). So, I've been around the block a few times.
Most recently, my work has been focused on mobile game development, and as an independent developer I get to deal with the internal test groups of mobile publishers. Testing mobile games (i.e., games for mobile phones) is particularly tricky due to the variation between handsets, which often causes bugs specific to each handset.
Testing games can be a thankless task at the best of times - after all, your job as a tester is to find fault with the work of others. With mobile phones the job is even tougher because, over and above any actual bugs, there are many "certification" issues to deal with: different phone carriers demand different things, from UI conventions to specific API variants.
As a developer targeting all of these different carriers, you might have five different "reference builds" corresponding to five different devices, but those devices might run across three or four different carriers, each with different requirements. So both the test department and the developer need to have an encyclopedic knowledge of the certification requirements.
This makes it even more important that the QA department follows best industry practices; otherwise, projects can easily get mired in testing hell. That's bad news for everybody, because margins are tight in mobile development, and the last thing a publisher or developer needs is to spin wheels in test.
I originally started writing this article a while back out of sheer frustration at the way some publishers seem to literally abuse their external development teams through the QA process. I hoped that things would get better, and that the bad experiences I was having were just anomalies. In fact, that's far from the truth; in reality, poor QA processes seem to be endemic to mobile development.
With this in mind, here are some thoughts based on recent experiences with mobile development. Actually, these comments probably apply to video game testing in general. And please excuse me if I seem to be stating the blindingly obvious! If you think that I am, that's probably a good thing, because it means that your experience with QA is better than mine - but remember, these are based on recent real-world examples. Throughout this article, I'll refer to an "issue" as a single item tracked in issue tracking software, and to a "bug" as a single product defect. The two do not always coincide, as you'll see.
1. Make issues specific.
I've often seen issues written like this:
Summary: AUT (application under test) exhibits various choppiness and lagginess.
In principle, this is a valid issue. However, issues like this are not "fixable". Or, if they are, they can only be fixed once the game reaches "gold master" (once the whole game has reached a certain approved level of performance, through optimization or other means). Even though it's a valid issue, it needs to be restated in a more specific way: it needs to name the specific areas where "choppiness and lagginess" occurs, and it needs to provide specific steps to reproduce.
In fact, this issue may not even warrant a "defect" report as such. It may be more appropriate to include notes on performance as a general application summary report from the QA department to production and development staff rather than try to pin it down as a defect with a too-vague definition.
Solution: Make issue definitions specific, with specific steps to reproduce.
2. Report one bug per issue.
I've too-often seen issues reported as follows:
Summary: Menu system exhibits various problems (listed).
Again, this could well be a valid concern. However, a developer addressing bugs must rely on the summary to know what the problem is. Having one issue contain 5 or 10 bugs, or defects, might seem like a more efficient use of test resources, but in practice it leads to issues that can take forever to close. A developer might address 9 out of 10 sub-issues, but then come across one item that cannot be addressed immediately, so the issue remains open and the summary is now inaccurate, forcing the developer to maintain his own notes for each such issue. That is not an efficient use of resources. Even though it will cause some bloat in the number of issues, it's best to report one bug per issue.
Solution: Report one "bug" per "issue".
3. Do not morph bugs.
This is one of the most insidious QA anti-techniques. It goes like this:
Summary: AUT crashes on loading screen when pressing the OK key.
So far so good: one issue reported, and it seems pretty specific. The developer fixes the bug and marks it as "fixed". Then, to the developer's dismay, the bug comes back as "open (verify failed)", or whatever parlance is used for such things. So the developer runs the application again, and cannot duplicate the bug. What the unwitting developer did not realize is that this bug has morphed! If he were to look in the comment field, no doubt tucked away somewhere within the detail of the bug report, he would see a comment like this:
Tester: this issue seems to be resolved; however, the AUT now crashes when pressing the OK button on the main menu.
In other words, the bug summary now summarizes some other bug than the one this issue is reporting, and that is not sane!
Solution: Close the bug. Open a new one.
4. Don't report "placeholder" or "catch-all" bugs.
This issue occurs especially with Code Review (where a publisher reviews source code submitted by an independent developer to make sure it meets certain coding standards) or Certification (carrier or publisher) related issues. It goes like this:
Summary: Code Review section 4.3 use of API Methods.
The summary does not tell the developer much about the specific issue. A better summary might be:
Summary: Code Review section 4.3: Use of prohibited Timer class.
So, on to the detail of the issue, which would go something like this:
Application is using the Timer class inside the CGame.doStuff() function, and this is specifically prohibited by CR 4.3.
Fine, so that will need to be addressed. The developer duly replaces Timer with something else and marks the issue as fixed. But it comes back as "open (verify failed)". The developer checks his code to see what he might have missed; it seems OK. What happened is similar to #3, the morphing bug. Digging into the details, we find:
Comment: The Timer class has been replaced, but now we have noticed that all methods do not have Javadoc comments, and this is also specifically prohibited by CR 4.3. Additionally, we've noticed that the morphGame() function is using internal exception handling at a low level, and also the tab spacing is set to 4 rather than 3.
This is a variant on the Morphing Bug anti-technique. It's too easy for these things to become adversarial dialogs where the code reviewer just does not want to let go of the issue, so will make efforts to find some other reason to keep the issue open. I've even been told, "Oh yeah - we just keep those open until the end, just in case".
Solution: Just report one bug per issue, with a clear summary, and don't morph the issue as new bugs come along. Just close issues that are fixed and open a new one for new bugs.
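As an aside, the article never says what the developer replaced Timer with. A common substitute in J2ME-era code was to drive timing from the main game loop with System.currentTimeMillis() instead of a background Timer thread; the following is a minimal sketch of that approach (the class and names here are mine, not from the code review in question):

```java
// Sketch: a "timer" polled from the game loop, with no java.util.Timer
// and no background thread involved.
public class GameTick {
    private long lastTick;
    private final long intervalMs;
    private int ticks;

    public GameTick(long intervalMs) {
        this.intervalMs = intervalMs;
        this.lastTick = System.currentTimeMillis();
    }

    // Called once per pass through the main game loop; returns true at
    // most once per interval.
    public boolean poll() {
        long now = System.currentTimeMillis();
        if (now - lastTick >= intervalMs) {
            lastTick = now;
            ticks++;
            return true;
        }
        return false;
    }

    public int getTicks() { return ticks; }

    public static void main(String[] args) throws InterruptedException {
        GameTick tick = new GameTick(50); // fire roughly every 50 ms
        long end = System.currentTimeMillis() + 300;
        while (System.currentTimeMillis() < end) {
            if (tick.poll()) {
                System.out.println("tick " + tick.getTicks());
            }
            Thread.sleep(5); // stand-in for the rest of the game loop
        }
    }
}
```

The point of a fix like this, in the certification context, is that it removes the prohibited class entirely rather than wrapping it; whatever the replacement, though, it should close the original issue, and any newly noticed CR 4.3 violations belong in new issues of their own.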
5. Do not subvert bug classification.
A conversation happens, it goes something like this:
Developer: So, once all Critical A issues are resolved you guys will approve the (insert milestone) candidate, correct?
Publisher: Well, we'd like you to address any (insert category) issues too.
Developer: Of course, but that is not required for this milestone, correct?
Publisher: Technically, no. But it will really help me (insert reason) if you can address those (insert category) issues.
Developer: We will address them, but contractually we're only required to fix all Critical A issues for (insert milestone).
Publisher: OK ... well, we'll review the bug database and make sure any issues we need fixed for (insert milestone) are marked as Critical A.
Developer: OK ... pause ... but you are not just going to upgrade everything to Critical A for this are you ... ?
Publisher: Laughs ... haha no, don't worry.
Later that day, the developer inspects the bug database and finds that indeed all "wish list" issues have been upgraded to Critical A, taking the list of critical issues from a handful to 30 or 40. Another conversation happens:
Developer: You said you would not do that.
Publisher: These are the ones we really need fixed.
Regardless of why this happens, it just makes any notion of bug classification meaningless.
Solution: QA Departments - stand your ground! Do not allow production staff to subvert the bug categorization process!
6. Do not economize on bug tracker software licenses.
This one is particularly intrusive, and yet it has happened to a greater or lesser degree with more than one well-known mobile publisher over the past 18 months. In one case, the publisher was "in the process" of switching out its bug tracker solution. That's fair enough, but over the three-month period in which this particular project went through the critical alpha-beta-gold master phase, here's what happened on a daily basis:
8:00 AM: Access to the bug tracker is working fine, start to get the day's first progress on the bug database, reviewing issues found by the overnight test crew.
8:06 AM: Log into bug tracker again because the session timeout is set to 5 minutes. The reason for this is explained below.
8:12 AM: Repeat previous step.
- etc -
9:00 AM: Attempt to log into bug tracker. Error message, "Number of users exceeds license limit. Please contact your administrator."
9:15 AM: "Number of users exceeds license limit. Please contact ..."
- etc -
12:00 PM: Attempt to log in, log in works, frenetically scramble to review open issues.
- etc -
1:00 PM: Attempt to log in. Log in fails, "Number of users exceeds ... ".
- etc -
5:00 PM: Attempt to log in, which works. The bug tracker continues to work into the night ...
What had happened, clearly, was that the publisher had more users (internal producers plus external developers) than licenses. Cunningly, the publisher made session timeouts really short - so short, in fact, that it was hard even to read the detail of a new issue before being logged off the system.
Then, once the production staff rolled into the office at 9:00 AM, they would hog the bug tracker until lunchtime, whereupon they'd leave for lunch and the bug tracker would become available again - until 1:00 PM, when internal production staff returned to their desks and hogged the bug database until clocking-off time. Our producer had no power to fix this; his advice was, "Just check the bug database at lunchtime and in the evenings." When you're in a phase of development where the majority of time is spent fixing bugs, this is clearly only going to hurt the schedule.
Solution: This is not rocket science, and cheaping out on licenses is just being penny-wise and pound-foolish - just buy the licenses you need to make things work. Alternatively, if you are a publisher and your bug tracker is buckling under the weight of use - just get one that works!
7. Implement sane versioning.
I don't know why, but it seems that publishers often do not have a valid way for developers to mark bugs as "fixed" in a particular version. The cycle then goes like this:
a) Developer fixes issue and submits code to CVS, etc.
b) Developer goes to bug database to mark issue as fixed.
c) When entering the fix info, developer pauses at the "version fixed" input box. He knows what the last version being tested was, but does not know when the next build will be made (because builds are generated at the publisher), so he guesses at the current QA version plus one. He enters this number in the version field, and submits his bug fix to the bug tracker.
d) Meanwhile ... unknown to the developer, the production department has just made another build. This could be for any number of reasons: perhaps there are other developers working on ports and the application is using a global versioning system; perhaps there is an automated build process using something like CruiseControl that automatically revs the version number; perhaps the version number was bumped by hand by a producer making versions for his boss.
e) Now, the QA department sees the bug fix and grabs the appropriate test version of the code - except the developer's code fixes are not in the build, because his CVS check-in came after the build was made. QA sees the error still exists, and marks the issue as "open (verify failed)".
f) Let's assume that the developer is pretty much in bug-fixing mode and has fixed 10 issues, all of which will now bounce back to him as "open (verify failed)".
g) Developer checks the issues, marks as fixed again, rinse, repeat.
This results in all sorts of wasted time, and all manner of attempts to work around the system. Personally, I would just mark issues as fixed three build revisions into the future, which generally worked, but what a waste of time! In one case, the publisher told me that I needn't worry about what version the issue was fixed in, and that QA would figure it out. Well, they didn't.
Solution: QA departments - just give developers a way to tell you what version they have fixed issues in. It's really easy, and there are any number of ways of doing this, such as tagged CVS submits, etc, etc.
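The mechanism is deliberately left open above ("tagged CVS submits, etc"), but here is one minimal sketch of the idea, with names and numbers that are my own assumptions: stamp every build with a monotonically increasing build number (taken from a CVS tag such as `cvs tag build-142`) and show it on a debug screen, so "fixed in version" entries in the tracker can be checked mechanically against the exact binary under test:

```java
// Hypothetical sketch (names and numbers are assumptions, not from the
// article): a build stamp bumped with every build the publisher makes,
// generated from the CVS tag used for that build.
public final class BuildInfo {
    public static final int BUILD_NUMBER = 142;
    public static final String CVS_TAG = "build-142";

    private BuildInfo() {}

    // A fix recorded as "fixed in build N" can only be verified against
    // a binary whose build number is at least N.
    public static boolean containsFix(int fixedInBuild) {
        return BUILD_NUMBER >= fixedInBuild;
    }

    public static void main(String[] args) {
        System.out.println("Build " + BUILD_NUMBER + " (" + CVS_TAG + ")");
        // A bug fixed in build 140 can be verified against this binary...
        System.out.println("fix from build 140 present: " + containsFix(140));
        // ...but a bug fixed in build 143 should wait for the next build,
        // not be reopened as "verify failed".
        System.out.println("fix from build 143 present: " + containsFix(143));
    }
}
```

With a scheme like this, the "open (verify failed)" bounce in the cycle above becomes impossible by construction: QA simply does not attempt to verify a fix in a build older than the one it was recorded against.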
8. Don't test before "alpha".
Alpha is technically described as "feature complete". While there are mitigating circumstances for doing *some* testing pre-alpha, it just kills me when I see issues like this reported:
Summary: application has no sound, or
Summary: application fails to build for [insert device], or
Summary: application has no splash screen, or
Summary: application does not have correct legal text on [insert screen], or
Summary: application has not implemented [insert feature] feature, etc, etc
Really, what is the point? Reporting that features are not implemented in a version that is not supposed to be feature-complete is just an exercise in futility, and it wastes everyone's time.
Solution: Please, don't test before alpha. If you really need to do pre-alpha testing, make it focused on specific areas that are useful to the developer or production team, and please don't put these issues into the bug database where they will sit ignored by everyone, just taking up space and bloating bug counts.
To me, all of this stuff seems pretty obvious. Which is why I just can't understand the mentality of publishers who persevere with such a backward approach to what is, after all, not something that is tremendously complicated. Needless to say - I'm not too impressed with what I've seen lately.