Radical Entertainment’s Marcin Chady packed a whole hour’s worth of information into his 25-minute talk on “Tapping the collective creativity of your team” at GDC 2012.
The talk was primarily an elaboration of the comprehensive in-house feedback and rating tool developed by Radical Entertainment for use on Prototype 2, and Chady opened by arguing that all studios “could be doing a better job at using [their] team’s passion and creativity.”
There is a paradox at the heart of game development, Chady said: team members are expected to be passionate about all aspects of games and game design, yet once they land in their roles, individuals tend to become highly specialized, with little impact on other aspects of the game.
Chady wanted to figure out how to enable his whole team to influence the game beyond their own areas of expertise.
Traditionally, a team member with an idea, a suggestion, or a critique has to weigh its potential value against the personal cost of pitching it and the risk of being dismissed; most often, the conclusion is that it isn’t worth “making a fuss.”
But should we care about these lost ideas and potential? After all, Chady said, design is not a democratic process, and the last thing anyone wants is a game designed by committee, appealing to the lowest common denominator. However, at the other end of the spectrum, when there is no creative input from others, Chady argued, games run the risk of “ivory tower syndrome,” with developers only belatedly realizing they had been “polishing a turd.”
Chady suggested that, like most things, “the answer is somewhere in the middle,” with the trick being to “find the right balance.” The goals were to retain independent creative decision making while helping team members make informed assessments of new ideas, giving suggestions a home, and engaging QA teams more in the game-making process. To that end, Chady and the team at Radical created a studio-wide tool to facilitate feedback on specific areas of the game.
The process began by breaking down the game into a hierarchical tree of features and mechanics, and allowing team members to leave direct feedback on mechanics like the workings of the “blade,” “claws,” “Hunter,” or “Hydra” enemies. The tool also required individual commenters to give mechanics a rating out of 100, giving the person or team in charge of that mechanic clear and unambiguous feedback.
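The structure described here, a tree of features and mechanics where each node accepts comments carrying a mandatory score out of 100, can be sketched roughly as below. All class, field, and owner names are illustrative assumptions, not details of Radical’s actual tool:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    rating: int  # mandatory score out of 100, as the tool required

    def __post_init__(self) -> None:
        if not 0 <= self.rating <= 100:
            raise ValueError("rating must be between 0 and 100")

@dataclass
class Feature:
    """A node in the hierarchical tree of features and mechanics."""
    name: str
    owner: str  # the person or team responsible for this mechanic
    children: list["Feature"] = field(default_factory=list)
    comments: list[Comment] = field(default_factory=list)

# A fragment of the kind of hierarchy the talk describes
powers = Feature("Powers", owner="combat team", children=[
    Feature("Blade", owner="combat team"),
    Feature("Claws", owner="combat team"),
])
powers.children[0].comments.append(
    Comment(author="qa_tester", text="Blade combos feel sluggish", rating=55)
)
```

Keying feedback to a specific node in the tree is what makes it “clear and unambiguous”: a comment lands on “Blade,” not on the game in general.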
Any member of the team could leave feedback, as well as sign up to receive email notifications whenever feedback was left on any mechanic through the game. If another user had already left feedback on a mechanic or system, team members could also indicate they “agreed” with certain comments, giving that issue more visibility and weight.
The system was embedded into the core development tools, and feedback could be filtered by version or milestone, with old comments or ratings easily skipped, so current problem areas could be seen clearly and addressed.
Some team members used the tool extensively; to rein in these over-eager commenters, each person was limited to one comment and one rating per mechanic.
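The one-comment, one-rating limit amounts to keying each mechanic’s feedback by author. A minimal sketch of how such a rule could be enforced (the class and method names are invented for illustration):

```python
class MechanicFeedback:
    """Stores feedback for one mechanic, limited to one entry per person."""

    def __init__(self, name: str):
        self.name = name
        self._entries: dict[str, tuple[str, int]] = {}  # author -> (comment, rating)

    def leave_feedback(self, author: str, comment: str, rating: int) -> None:
        # One comment and one rating per person per mechanic
        if author in self._entries:
            raise ValueError(f"{author} has already left feedback on {self.name}")
        self._entries[author] = (comment, rating)

blade = MechanicFeedback("Blade")
blade.leave_feedback("alice", "Needs a faster wind-up", 60)
# A second attempt by the same person would raise ValueError:
# blade.leave_feedback("alice", "Still too slow", 40)
```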
Chady felt it was important that employees knew their feedback was being read and that they were being listened to. Feature owners could respond to feedback, deciding case by case whether a reply was warranted.
Initially the tool met with some resistance, with some team members suggesting that it presented an “unnecessary distraction,” or that decisions should be “left to professionals” in each area. Others felt the tool was “too technocratic” and suggested a change in “attitudes” instead, and there was even some concern about the name, initially called “BetterCritic.” It was changed, appropriately enough in response to feedback, to simply the “Feedback Tool.”
Chady stressed the importance of getting a commitment to the tool and the new collaborative process it facilitated from the very top.
Chady presented some statistics: between October 2010 and October 2011 the tool saw 4,000 operations, averaging around 10 per day, or roughly 30 comments and ratings per person. The tool saw a huge amount of interest and activity, and although half of all feedback came from 16 enthusiastic team members, a typical “long tail” graph showed that nearly everyone on the team used the tool at least once during development.
Predictably, the opening missions and cutscenes received a high percentage of all feedback, but two core mechanics, targeting and flying, both received extensive comments, suggestions and ratings.
The tool was also useful for team leads, as it was found quickly that lots of “polarized opinions often indicated an interesting gameplay mechanic” and became a catalyst for brainstorming and further ideas.
According to Chady, the tool took on a life of its own, with ideas for tool and tech improvements, as well as ideas for the next game, arriving toward the end of the production cycle. It also spawned comments on aspects as diverse as studio overtime, food, and air conditioning, leading to a “water cooler effect” where ideas were free to interact.
In a powerful example of the tool’s multiple uses, Chady also mentioned occasional “catalyst” topics that would sometimes trigger “an avalanche of pent up frustration” revealing issues that had gone unspoken or unaddressed and allowing them to be dealt with.
One issue raised concerned how the game’s credits would be handled: early on it became clear that team members were unhappy with the plan for the credits, leading to a feedback-tool-initiated review and, ultimately, a completely different credits system.
The tool wasn’t a unanimous success, however: there were initial issues with the UI, feature owners often relied entirely on email notifications to receive their feedback, and many team members disliked the scoring, deeming it “ominous” or “meaningless.”
In defense of mandatory ratings, Chady argued that they were not an approximation of Metacritic but a useful component of the feedback, allowing the team to answer general questions like “Is the game score going up? What’s dragging it down? Is attention to this particular area paying off?” and adjust focus accordingly. Ratings also made feedback comments more readable and organized, making it possible to sort comments by score and surface the “main issues.”
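The questions Chady lists reduce to simple aggregations once every comment carries a score. A hypothetical illustration, using invented sample numbers rather than real Prototype 2 data:

```python
from statistics import mean

# mechanic -> ratings left by team members (invented sample data)
ratings = {
    "targeting": [40, 55, 35, 60],
    "flying": [70, 45, 80, 50],
    "blade": [85, 90, 78],
}

# "What's dragging it down?" -> the mechanic with the lowest average score
averages = {name: mean(rs) for name, rs in ratings.items()}
worst = min(averages, key=averages.get)
print(f"{worst}: {averages[worst]:.1f}")  # targeting: 47.5
```

Tracking the same averages across milestones would answer the “is the game score going up?” question in the same way.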
Going forward, Chady mentioned a few improvements they wished to make, including more feed-forward elements, allowing feature owners to pre-empt suggestions that had already been discussed and feedback that had already been addressed. The UI was a clear area in need of improvement, and Chady suggested that data mining tools could also be a boon for the team. For team members who disliked the mandatory ratings, he mentioned the possibility of simply marking feedback as positive or negative.