Best Practices: Five Tips for Better Playtesting

January 23, 2013
 

4. Rock the Survey

You'll almost always want to ask your players some questions after they're done playing. Try to predict these questions ahead of time and put them in survey form so that you collect the same set of data for every playtester. If you have any spontaneous follow-up questions, you can ask them when they're done taking the survey.

A carefully worded, highly focused survey can make sharing your playtest results with the team much easier. Focus on questions that directly address the goals of your playtest. Leading questions like "Was the tutorial confusing?" are much less helpful than questions that test a player's knowledge, like "Please describe what the green button does in this game."

If your playtesters are newcomers to your game or genre, they are probably unfamiliar with many of the terms and conventions that your team may take for granted -- plan accordingly by using plain, descriptive terms in your questions whenever possible.

Our surveys always include some basic player profile questions like "What games have you been playing in the last month?" to give us context. In addition, every multiple-choice question includes an optional space for written explanation.
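
One lightweight way to guarantee that every playtester answers exactly the same questions is to define the survey as data. Below is a minimal sketch in Python; the question text echoes the examples above, but the structure, field names, and multiple-choice options are a hypothetical illustration, not the authors' actual tooling.

    # Fixed survey definition: every playtester gets the same set of questions.
    # Keys, structure, and choices here are illustrative assumptions.
    SURVEY = [
        {"key": "recent_games",
         "prompt": "What games have you been playing in the last month?",
         "type": "free_text"},
        {"key": "green_button",
         "prompt": "Please describe what the green button does in this game.",
         "type": "free_text"},
        {"key": "difficulty",
         "prompt": "How difficult did the game feel?",
         "type": "multiple_choice",
         "choices": ["Too easy", "About right", "Too hard"],
         "explanation": True},  # optional space for a written explanation
    ]

    def run_survey(ask=input):
        """Collect one playtester's answers; ask(prompt) returns a response."""
        answers = {}
        for q in SURVEY:
            prompt = q["prompt"]
            if q["type"] == "multiple_choice":
                prompt += " [" + " / ".join(q["choices"]) + "]"
            answers[q["key"]] = ask(prompt + " ")
            if q.get("explanation"):
                answers[q["key"] + "_note"] = ask("Optional: explain your answer. ")
        return answers

Defining the survey once also keeps spontaneous follow-up questions clearly separate from the core data set when you compile results.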

5. Analyze in Aggregate

After a playtest, you're going to have survey results and notes from many different playtesters and observers. Compile that data quickly and share it with your team in aggregate, without offering analysis or drawing conclusions.

Analyze the results as a group, and start with the aggregate data -- your game designer may have seen one player who thought the game was too easy, but is that what everyone else saw? If all the other players said the game was difficult, then you know that "too easy" is not a trend. If you like, you can still return to that playtester's feedback afterward and address it as a special case. Analyzing your data in aggregate first ensures that your entire team benefits from the full playtest.
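
As a sketch of what "aggregate first" can look like in practice, here is a hypothetical tally of that difficulty question across one session (the responses are invented for illustration):

    from collections import Counter

    # Hypothetical answers from eight playtesters in one session.
    difficulty = ["Too hard", "About right", "Too hard", "Too hard",
                  "Too easy",  # the one player your designer remembers
                  "Too hard", "About right", "Too hard"]

    for choice, count in Counter(difficulty).most_common():
        print(f"{choice}: {count}/{len(difficulty)}")
    # Too hard: 5/8, About right: 2/8, Too easy: 1/8 -- "too easy" is not the trend.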

Once you've got your data compiled, don't just forget about it! Identify the issues that your playtest has brought to light and prioritize your next steps. The data you collect during your playtest -- which can include anything from the player's emotional state to the number of failed attempts to click a button -- should always help you draw conclusions and come away with action items for your team. If your data is inconclusive, consider revisiting the structure of your playtest and survey for next time.
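
Instrumenting a test build to capture events like failed clicks doesn't have to be elaborate. Here is a minimal sketch of a per-session event log; the class and event names are hypothetical, not a description of any particular studio's tools.

    import json
    import time
    from collections import Counter

    class PlaytestLog:
        """Append-only event log for one playtest session."""

        def __init__(self, player_id):
            self.player_id = player_id
            self.events = []

        def record(self, name, **details):
            self.events.append({"t": time.time(), "event": name, **details})

        def summary(self):
            """Event counts -- the kind of aggregate that feeds your action items."""
            return dict(Counter(e["event"] for e in self.events))

    # Example: a player repeatedly clicking a button that isn't active yet.
    log = PlaytestLog("P07")
    log.record("click_missed", target="green_button")
    log.record("click_missed", target="green_button")
    log.record("level_failed", level=2)
    print(json.dumps(log.summary(), indent=2))  # {"click_missed": 2, "level_failed": 1}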

These are the best practices we've developed at Arkadium's New York headquarters, and they have helped us implement recurring playtesting in a consistent and reliable way. Of course, we're always improving our process, and what works well for us might not suit the needs of your studio. If you find any of these tips helpful, or have a different way of doing things, let us know in the comments section.








Comments


Andre Gagne
I agree with your best practices as they are mostly from the ISO standard on usability testing. I do have several other questions and comments though:

Observation: It appears that you are combining survey/attitude methods and observational studies (having a survey but then using a think-aloud protocol?). Doing both at the same time adds confounds that threaten the validity of your results.

Question: Also, do you ask questions about the UI in your surveys? I suspect that you are getting most of that data from the observers.

Question: How many participants are you running at a time? It would seem that you are either taking a long time to run a study or have too few participants for any statistics run on the surveys to have reasonable margins of error.

For anyone interested in this, I would suggest another Gamasutra article that was written a while back by a very good researcher in collaboration with the Games User Researcher SIG.

Part 1: http://www.gamasutra.com/view/feature/169069/finding_out_what_they_think_a_.php
Part 2: http://www.gamasutra.com/view/feature/170332/finding_out_what_they_think_a_.php

Good Luck!

Vin St John
Great points/questions, Andre. I'll try to address each one:
1) On combining survey and observational methods for collecting information: we use both, or only one or the other, depending on the situation. Oftentimes a survey is not needed at all, since most of the questions we have can be answered more easily through observation of player behavior. There are also many cases where a survey question will not produce helpful results. When we do rely on a survey, it is generally presented to the user only after they have played the game and all of our observational notes are recorded. There is still some room for improvement here, but we find that this works for our purposes.

2) Most UI questions are answered through observation. Sometimes we will confirm our observations by presenting the user with a screenshot of the UI (in a survey), pointing to different elements and asking for a description of what each element is for. This helps us understand if the player knew what they were doing because of the great UI, or if they figured it out without the UI's help. It also helps identify ways that our UI is misleading, e.g. when a player sees a running clock and thinks it represents "time left" instead of "time elapsed".

3) In a single session it is rare for us to exceed 10 playtesters. Because our games are intended to be played in short sessions, each player usually only plays for about 30 minutes. The data we collect is useful for identifying patterns, but not for statistics. (For statistically significant data, we rely on post-release analytics in a live environment with many thousands of players.) We consider playtesting to be part of the design process: it requires the intuition of game designers to determine whether the problems identified are significant and how best to solve them.

Thanks for sharing the additional resources -- they're a great read! We're constantly trying to improve our process, so if you have any criticisms or suggestions, I would welcome them. Thanks!

Trevor Cuthbertson
"We also have a policy against recruiting friends of our employees for playtesting."

This is the greatest advice of all -- the gold of playtesting! Don't hire your friends and family.

Thiago Appella
Great article, Vin. Congratz.
I'd just like to reinforce two of your tips, as they are really important in my opinion but sometimes not followed correctly or given the proper attention.

1) Recruit the right target.
Every product has its own target. Sometimes the audience is really broad, but there are still shared elements and requirements that you need to fit when recruiting for playtesting.

2) Group your data.
Don't change your game based on only one playtest or piece of feedback. A more in-depth analysis based on aggregated data could show that it was not a pattern across the session -- sometimes you just got a person who was not actually in the target you were looking for.

Vijay Srinivasan
Very good read, thanks a bunch!

Ian Hamilton
There's something critical missing from the 'always consider when recruiting' bit -

Disabilities.

Regardless of what segment you're looking at, people with disabilities (visual, motor, cognitive and hearing impairments) will account for a huge chunk, so you need to represent them in your recruitment profile.

Numbers depend for the most part on age. At the usual target audience age range for games it's around 15% for what's commonly regarded as disability, with another 8% of males who are colourblind, and 14% who have an adult reading age below that of an 11-year-old.

And that's just amongst the general population; disability is actually more common amongst gamers than in the general population (20% vs. 15% in PopCap's research). PWDs have all the same reasons to be gamers as anyone else, plus extra reasons such as limited recreation and social opportunities, gaming as an alternative to pain relief medication, and so on.

If you're testing with kids the numbers are smaller, with visual and hearing impairments in particular pretty rare in children (they're more commonly caused by deterioration, accident, or disease, none of which have had much chance to happen by the time you're five years old), so it's more motor and cognitive impairments that you want to test for with them.

If you're aiming at people who are older it increases pretty rapidly, the 15% becoming 50% by the time you hit 65.

Really helpful conditions to recruit for are colourblindness, dyspraxia, and dyslexia, but really if you can just manage to recruit even one person from each of the four top-level groups (motor/cognitive/hearing/visual) then you'll get some incredibly useful feedback.

Vin St John
Ian, these are some really great points. Thanks for sharing (and for the statistics to back it up).

Ian Hamilton
I'd be happy to chat more about it in person if you're interested -- will you be at the GUR summit?

Vin St John
Not sure yet, will let you know! My Gmail is "vinstjohn" if you would like to get in touch about it before then.

Ian Hamilton
Also I completely agree with the answers to the questions.

You have to test interfaces primarily through observation, as what people say and remember can be very different to what they actually do. There are some things it can be helpful to ask questions about, if you're specifically looking for feedback on lasting impressions or emotional engagement, but you can't really ask after the event how usable specific elements or areas of the interface were.

Again in agreement: the only way to get statistical significance is to run analytics post-launch, but that doesn't help you when you're in the early stages; small observational studies have lots of value there.

The statistical significance thing is really critical and often not understood, with people mistakenly believing that what they've seen in a small-sample testing session is 'proof'. You need to be aware that not only are the sample sizes small, there are also an incredible number of uncontrolled variables; it's about as inexact a science as you can find. So instead it needs to be treated for what it is: a way to gather anecdotal suggestions of areas that could be worth looking into.

The way to mitigate against the inaccuracy is simple enough - test early and often.
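
To put a rough number on the sample-size point above, here is a back-of-the-envelope 95% confidence interval for a proportion from a typical ten-person session, using the normal approximation (the figures are invented, and the approximation itself is shaky at this sample size -- which is rather the point):

    import math

    # Suppose 7 of 10 playtesters call the game "too hard".
    n, agreed = 10, 7
    p = agreed / n

    # 95% normal-approximation margin of error for a proportion.
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"{p:.0%} +/- {margin:.0%}")  # 70% +/- 28%: the true rate could be ~42-98%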

Virgile Delporte
Very interesting article, with great tips. I would add something complementary: "Playtest early." While playtesting prior to release is essential, providing playable prototypes as early as possible to a carefully targeted group can validate, invalidate, or reorient future milestones within the game development process. Same methodologies as listed in your article -- just applied one step earlier.

