For years we developers had it drummed into our heads to never allow public scrutiny of a game until it was ready for prime time. The (warranted) fear is that negative previews can haunt a game long after it launches, no matter how good it becomes.
Livestreaming has a similar dictum: if you want to build an audience, play the best games, not buggy piles of cow flop. Most game livestreamers are fans of the products we develop. They want to livestream because they enjoy playing these games and want to share the fun with their friends. Why would anyone want to livestream an unfinished game, much less one that might be horribly broken?
We should want this thankless role. By we, I mean the ones making the game – the developers and testers who have to hunt down all those wonderful bugs about which people like to make snarky YouTube videos. After fans began livestreaming their games, developers followed suit, finding this an excellent way to build a community and spread the word. We also began finding other advantages to livestreaming.
One of the newest, and still least recognized, pluses of livestreaming is the role it can play in quality assurance. For those of you who don’t know, quality assurance is the fancy term for hunting down and squashing software bugs. Most developers and playtesters work in quiet environments, focusing on gameplay, forming hypotheses about what should and shouldn’t work, testing those hypotheses, and screaming in frustration when the game refuses to cooperate. Okay, we only rarely scream. More often, we scramble for pad and pen, jot down some notes, and begin detailing the bugs in our spiffy bug database.
Only half of the job is done when a tester finds a bug. The other half is communicating that bug to the people who have to fix it: artists, designers and the often-overworked programmer. Testers need not only to describe the bug but also to provide the fastest way to replicate it, so the person assigned the fix can recreate it without wasting her time or the company’s money.
The trouble is, by the time the tester has given the bug a name, assigned it a priority, noted who is responsible for the fix and begun describing it, she may well have forgotten what she was doing in the game to make it go kasplooey. Larger studios often record their playtests, tracking what the tester does on screen (and sometimes offscreen). Unfortunately, indie devs rarely have those resources.
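The fields a tester fills in for each bug can be sketched as a minimal record. This is an illustrative structure only; the field names are my own invention and not tied to any particular bug tracker:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """A minimal bug record covering the fields described above."""
    title: str                      # short name for the bug
    priority: int                   # 1 = fix now, 5 = someday
    assignee: str                   # who is responsible for the fix
    description: str                # what went wrong
    repro_steps: list = field(default_factory=list)  # fastest route to recreate it
    stream_timestamp: str = ""      # where in the livestream recording it happened

# Hypothetical example entry
bug = BugReport(
    title="Avatar walks through east wall",
    priority=1,
    assignee="programmer",
    description="Collision detection fails on the east wall of level 2.",
    repro_steps=["Load level 2", "Walk east from spawn", "Hold right against the wall"],
    stream_timestamp="0:14:32",
)
```

The stream timestamp is the livestream-specific addition: instead of reconstructing repro steps from memory, the tester can point the programmer at the exact moment in the recording.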
When livestreaming a game, a key component for any streamer is talking through one’s actions and interacting with chat. When I livestream a playtest, I am constantly describing what I am doing. Thus when a bug jumps out of nowhere and scares me, I have a good sense of what brought it about. In addition, people watching in chat weigh in with their own thoughts on the bug. If neither of those works, the livestream has a recording that I can review to see if my description is accurate. If worst comes to worst, I can always pass on the recording to the person assigned the bug so she can recreate it after watching me flail.
I don’t think I’ll ever put away my pad of paper while testing, but livestreaming has made me a lot less reliant on hastily scrawled notes. It provides a series of tools that most indie devs cannot afford and would not have the time to implement if they could.
Even AAA games get released with typos, bad grammar, and poor writing that should have been easy to fix. Even though most text goes through editors before being incorporated into the game, playtesters are the last line of defense. The problem is, most playtesters are not editors. No developer can rely on them to catch the array of bad writing that can sneak in.
One of the fun things about livestreaming is that streamers commonly read all the game text aloud to keep the audience involved. Reading text aloud is a great way to catch typos, bad grammar and awkward sentence constructions, and it sharpens a tester’s ear for the language.
And how many indie devs really have the money to pay an editor anyway?
We may not be able to pay editors, but we can get our friends and fans to help for free. Most indie devs already do this, bugging anyone they can to play their game. However, unless you are testing a tutorial, your game is exceptionally user friendly, or you only test with fans of that genre, much playtest time gets lost just explaining how to play and what you want tested. By livestreaming, I get the viewers focused immediately on what I need tested. I raise questions and get free answers. Sometimes I just mess around with the game and get viewer input along the way, never knowing what topics might pop up.
Livestreaming will never replace getting other people to playtest your game on their machines. However, it is a useful tool whenever your ability to do so is limited.
One of the worst parts of playtesting my own games is how often I play them the way I designed them to be played. I had a specific result in mind during design and production, and that is invariably the way the game works best. Unfortunately, that is NOT the way our players will play. They invariably take the game in directions I would never have considered. They try out actions that are obvious to them but mindbogglingly bizarre to me.
Holed up in my development lair, I can never think of all the things they do. Playtesting on a livestream, however, means I have an entire chat feed of interested parties looking over my shoulder, soundlessly giving me input. In the feed I see their comments, questions and suggestions. All of them should be noted (that is just good stream etiquette), answered, and sometimes even acted out.
Any insight I can get into the players’ mindset is valuable. They help break me free of my designer preconceptions. I have always believed developers should test everything they can, and audience feedback has opened whole new opportunities for just that.
There are lots of guides available on how to test games. Unfortunately, we cannot rely on our testers to have read any of them. In streaming, we control the parameters for our testers. This allows us to turn the stream into a master class on how we want our games tested.
I like the Scientific Method, encouraging testers to make hypotheses (for example, walls should stop avatar movement), test them (bump into walls or find those areas where collision detection has been poorly defined), and repeat (after rebooting following particularly nasty crashes where the avatar walks through a wall and falls through the Earth) … and repeat … and repeat …
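The hypothesize-test-repeat loop above can even be automated for simple cases. Here is a sketch using a toy grid world invented for illustration (the wall layout and `move` function are mine, not from any real engine):

```python
# Toy world: a 5x5 room whose border tiles are walls.
WALLS = {(x, y) for x in range(5) for y in range(5)
         if x in (0, 4) or y in (0, 4)}

def move(pos, dx, dy):
    """Return the new position, refusing to enter a wall tile."""
    target = (pos[0] + dx, pos[1] + dy)
    return pos if target in WALLS else target

# Hypothesis: walls should stop avatar movement.
# Test: keep walking east; repeat until we are sure the wall holds.
avatar = (2, 2)
for _ in range(10):
    avatar = move(avatar, 1, 0)

assert avatar == (3, 2)  # stopped on the last open tile before the east wall
```

If collision detection were poorly defined on some tile, the assertion would fail there, which is exactly the kind of hypothesis-driven check I demo for viewers on stream.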
On the stream, potential testers watch me discuss and demo this. Then, if I send them an alpha copy of the game to test, they know my preferred test style. No, I am not building an army of Mini Mes (not yet), but I do like to think I am improving my audience’s QA skill set.
To see one of our livestreamed playtests, go here: https://youtu.be/uIlEW2Wwe4A
To vote for our Greenlight, go here: http://steamcommunity.com/sharedfiles/filedetails/?id=220478303
This was originally posted at www.andrewgreenberg.com