Best Practices: Five Tips for Better Playtesting

January 23, 2013
 

One of the most important elements in the game development lifecycle is playtesting. When you see someone from outside the studio actually sit down and play your game, you're better able to understand whether your game is accessible and usable, and whether its mechanics are actually appealing. These are not questions you want to answer after your game is released to the public. Data collected from playtesting can therefore be an invaluable tool for mitigating risk and giving your game its best chance at success.

However, there are many ways that playtesting can go wrong, so it's important to establish a process that you're confident in, and to improve it over time. At Arkadium, our playtesting process is a collaboration between game design and marketing, and typically involves inviting players to our office. We believe that the best practices we've identified over time may be helpful to your team as well, so we've boiled them down to these five tips for better playtesting.

1. Recruit Your Target Player

When designing your playtest, you should be keenly familiar with the type of player your game is for. While your game may appeal to a larger audience, it's a mistake to "design for all." We find it helpful to identify personas for our target audience -- for example, "Jane, 25-34 years old, owns an iPhone" -- and try to recruit players who closely match that description.

Some questions we always consider when recruiting are:

  • How old is our intended audience?
  • Is the intended audience predominantly male, female, or split?
  • What device or platform is our game targeting? (Don't recruit players who have never played a game on that platform before.)
  • Does your game assume any prior knowledge from the player?

The last point is often the most important. If you're developing a sequel to a popular first-person shooter (FPS), you might assume that most of your players have played the first game, or perhaps a similar FPS. In this case, it would make sense to recruit playtesters who are already familiar with the basics of FPS games and solicit feedback on the elements that are unique to your game. If you're developing for a casual audience, however, you may actually want the opposite: we often find it imperative to test with players who have never played a game's predecessors, especially when testing the game's tutorial.
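
To make this concrete, here is a minimal sketch of how a recruiting screener might encode criteria like these. It is only an illustration: the Persona fields, the questionnaire keys, and the matches_persona helper are hypothetical, not part of Arkadium's actual process.

```python
# Hypothetical screener filter: field names and criteria are illustrative only.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    min_age: int
    max_age: int
    platform: str                     # platform the game targets, e.g. "iPhone"
    requires_genre_experience: bool   # does the game assume prior genre knowledge?

def matches_persona(respondent: dict, persona: Persona) -> bool:
    """Return True if a screener respondent fits the target persona."""
    if not (persona.min_age <= respondent["age"] <= persona.max_age):
        return False
    # Don't recruit players who have never played on the target platform.
    if persona.platform not in respondent["platforms_played"]:
        return False
    # If the game assumes prior knowledge (e.g. an FPS sequel),
    # screen out respondents without that experience.
    if persona.requires_genre_experience and not respondent["has_genre_experience"]:
        return False
    return True

# Example: the "Jane, 25-34 years old, owns an iPhone" persona from above.
jane = Persona("Jane", 25, 34, platform="iPhone", requires_genre_experience=False)
respondent = {"age": 28, "platforms_played": ["iPhone", "PC"], "has_genre_experience": False}
print(matches_persona(respondent, jane))  # True
```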

Every time we have a playtest, we always strive to have some new playtesters who have never tried out our game before. This guarantees we have a fresh set of eyes providing feedback every time, even if we also invite returning players to playtest something new.

We also have a policy against recruiting friends of our employees for playtesting. While their feedback can be very valuable in a pinch, we find that when people personally know someone who has worked on the game, they have a tendency to want to love it more than they otherwise would, which can drastically skew the data.

Of course, it can be difficult to find and recruit playtesters who closely match your target persona. There are likely online communities -- some related to your game, for instance -- that are full of players who would love a chance to visit your office and provide you with feedback. However, while your biggest fans will be eager to help, they tend to be more informed and more engaged with your games than the average player.

If you only collect feedback from your biggest fans, you're much less likely to hear about the problems that new players face on Level 1, and much more likely to hear about how great Level 10 is. We supplement our pool of playtesters by recruiting on sites like Craigslist, FindFocusGroups.com, and Meetup.com.




Comments


Andre Gagne
I agree with your best practices, as they mostly follow the ISO standard on usability testing. I do have several other questions and comments, though:

Observation: It appears that you are combining survey/attitude methods and observational studies (having a survey but then using a think-aloud protocol?). Doing both at the same time adds confounds that threaten the validity of your results.

Question: Also, do you ask questions about the UI in your surveys? I suspect that you are getting most of that data from the observers.

Question: How many participants are you running at a time? It would seem that you are either taking a long time to run a study or have too few participants for any statistics run on the surveys to have reasonable margins of error.

For anyone interested in this, I would suggest another Gamasutra article that was written a while back by a very good researcher in collaboration with the Games User Researcher SIG.

Part 1:
http://www.gamasutra.com/view/feature/169069/finding_out_what_they_think_a_.php
Part 2:
http://www.gamasutra.com/view/feature/170332/finding_out_what_they_think_a_.php

Good Luck!

Vin St John
Great points/questions, Andre. I'll try to address each one:
1) On combining survey and observational methods for collecting information - we use both, or only one of them, depending on the situation. Oftentimes a survey is not needed at all, since most of the questions we have can more easily be answered through observation of player behavior. There are also many cases where a survey question will not produce helpful results. When we do rely on a survey, it is generally only presented to the user after they have played the game and all of our observational notes have been recorded. There is still some room for improvement here, but we find that this works for our purposes.

2) Most UI questions are answered through observation. Sometimes we will confirm our observations by presenting the user with a screenshot of the UI (in a survey), pointing to different elements, and asking for a description of what each element is for. This helps us understand whether the player knew what they were doing because of the great UI, or whether they figured it out without the UI's help. It also helps identify ways that our UI is misleading, e.g. when a player sees a running clock and thinks it represents "time left" instead of "time elapsed".

3) In a single session it is rare for us to exceed 10 playtesters. Because our games are intended to be played in short sessions, each player usually only plays for about 30 minutes. The data we collect is useful for identifying patterns, but not for statistics. (For statistically significant data, we rely on post-release analytics in a live environment with many thousands of players.) We consider playtesting to be part of the design process - it requires the intuition of game designers to determine whether the problems identified are significant and how best to solve them.
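
As a back-of-the-envelope illustration of that last point (not from the original discussion), here is a short sketch showing how wide the margin of error is for a proportion observed with roughly 10 playtesters; the margin_of_error helper and the 6-of-10 figures are hypothetical.

```python
# Rough illustration: margin of error for a proportion observed in a small playtest.
# Normal-approximation 95% confidence interval; numbers are indicative only.
import math

def margin_of_error(successes: int, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a proportion."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# E.g. 6 of 10 playtesters got stuck on Level 1:
n, stuck = 10, 6
moe = margin_of_error(stuck, n)
print(f"Observed: {stuck/n:.0%}, 95% CI roughly {stuck/n - moe:.0%} to {stuck/n + moe:.0%}")
# -> Observed: 60%, 95% CI roughly 30% to 90% -- far too wide to "prove" anything,
#    though a pattern that strong is still worth a designer's attention.
```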

Thanks for sharing the additional resources, it's a great read! We're constantly trying to improve our process, so if you have any criticisms or suggestions I would welcome them. Thanks!

Trevor Cuthbertson
"We also have a policy against recruiting friends of our employees for playtesting."

This is the greatest advice of all -- the gold of playtesting! Don't hire your friends and family.

Thiago Appella
Great article, Vin. Congrats.
I'd just like to reinforce two of your tips, as they are really important in my opinion but sometimes not followed correctly or given the proper attention.

1) Recruit the right target.
Every product has its own target. Sometimes the audience is really broad, but there are still shared elements and requirements that you need to match when recruiting for playtesting.

2) Group your data.
Don't change your game based on a single playtest or piece of feedback. A more in-depth analysis based on aggregated data may show that it was not a pattern across that session - sometimes you just got a person who was not actually in the target you were looking for.

Vijay Srinivasan
Very good read, thanks a bunch!

Ian Hamilton
There's something critical missing from the 'always consider when recruiting' bit -

Disabilities.

Regardless of what segment you're looking at, people with disabilities (visual, motor, cognitive and hearing impairments) will account for a huge chunk, so you need to represent them in your recruitment profile.

Numbers depend for the most part on age. At the usual target audience age range for games, it's around 15% for what's commonly regarded as disability, with another 8% of males who are colourblind, and 14% who have a reading age below 11 years old.

And that's just amongst the general population; disability is actually more common amongst gamers (20% vs. 15% in PopCap's research) than in the general population. PWDs have all the same reasons to be gamers as anyone else, plus extra reasons such as limited recreation and social opportunities, as an alternative to pain relief medication, and so on.

If you're testing with kids the numbers are smaller, with visual and hearing impairments in particular pretty rare in children (they're more commonly caused by deterioration, accident, or disease, none of which have had much chance to happen by the time you're 5 years old), so it's more motor and cognitive impairment that you want to test with for them.

If you're aiming at people who are older, it increases pretty rapidly, with the 15% becoming 50% by the time you hit 65.

Really helpful conditions to recruit for are colourblindness, dyspraxia, and dyslexia, but if you can manage to recruit even one person from each of those four top-level groups (motor/cognitive/hearing/visual), you'll get some incredibly useful feedback.

Vin St John
Ian, these are some really great points. Thanks for sharing (and for the statistics to back it up).

Ian Hamilton
I'd be happy to chat more about it in person if you're interested. Will you be at the GUR summit?

Vin St John
Not sure yet, will let you know! My Gmail is "vinstjohn" if you would like to get in touch about it before then.

Ian Hamilton
Also I completely agree with the answers to the questions.

You have to test interfaces primarily through observation, as what people say and remember can be very different to what they actually do. There are some things it can be helpful to ask questions about, if you're specifically looking for feedback on lasting impressions or emotional engagement, but you can't really ask questions after the event about how usable specific elements or areas of the interface were.

Again in agreement - the only way to get statistical significance is to run analytics post-launch, but that doesn't help you when you're in the early stages; small observational studies have lots of value for that.

The statistical significance thing is really critical and often not understood, with people mistakenly believing that what they've seen in a small-sample testing session is 'proof'. You need to be aware that not only are the sample sizes small, but there is also an incredible number of uncontrolled variables; it's about as inexact a science as you can find. So it needs to be treated for what it is: a way to gather anecdotal suggestions of areas that could be worth looking into.

The way to mitigate the inaccuracy is simple enough - test early and often.

Virgile Delporte
Very interesting article, with great tips. I would add something complementary: "Playtest early." Indeed, while playtesting prior to release is essential, providing playable prototypes as early as possible to a carefully targeted group should validate, invalidate, or orient future milestones within the game development process. The same methodologies as listed in your article, just applied one step earlier.

