Today I want to share with you a design framework that I’ve been working on for a couple of years now with a team at Google’s Advanced Technology and Projects (ATAP) group, led by Aaron Cammarata. We call it “The Trust Spectrum,” and it’s a practical lens for designing multiplayer games, particularly ones involving co-operative play.
Aaron led the charge on this project; he formed a group devoted to games that could enhance social connection, and asked me to help out on the game design mechanics side of things. He spent several months reading deeply into psychology and sociology to learn what the latest science said about human connections and social behavior.
In Aaron’s research on social structures, a few things popped out rather quickly.
Play is fundamentally social. Science used to believe that play existed across the entire mammal kingdom because it served as a form of practice for skills. But it turns out that if you isolate tiger cubs, say, and prevent them from playing, they grow up quite able to hunt, fight, stalk, and so on (this is from a study by Tim Caro cited in Play: How It Shapes the Brain). They pick up these skills in other ways. What they don’t learn to do — and can never learn — is how to get along with others of their kind. (This doesn’t mean that play doesn’t help skill-building; there’s plenty of science on that too). Play is fundamental to learning social skills, and social skills are a key survival trait because they are an evolutionary benefit; teams working together can accomplish things that individuals working alone cannot.
The variable here is trust. Human relationships progress through a set of stages which are pretty well understood. We start out tentative, trying to see what we have in common, then we gradually start relying on one another, and eventually come to trust one another. This is called “Social Penetration Theory,” by the way, and comes from actual experimental data. One thing that the data showed is that high trust doesn’t necessarily mean high trust in every facet of a life. Which leads to the fourth point:
I’ve written about trust at great length on my blog before. Most of it, however, was focused on the dynamics of large groups, the sorts of structures that emerge when trust is absent. The Trust Spectrum is about the opposite: it’s about trusted groups, and how you design for them.
Why? Because there’s a critical fifth finding, one that sits uncomfortably with the way we live our modern lives:
Virtual social bonds evolve from the fictional towards real social bonds. If you have good community ties, they will be out-of-character ties, not in-character ties. In other words, friendships will migrate right out of your world into email, real-life gatherings, etc.
-Koster’s Theorem
Now, I’ve worked for my whole adult life with online communities, and I know that deep trusting relationships definitely do form online. But they also, as is even enshrined in the Laws of Online World Design, migrate out from the virtual setting to the personal (it’s even called “Koster’s Theorem,” and no, it wasn’t me who named it). There are plenty of studies on this going back quite a ways. We’ve seen countless guilds start to have real life weddings, in person gatherings, and much more. You can climb the ladder of trust remotely, but to get to the most trusted relationships, and keep them alive, you need to meet in person.
OK, so how do we turn this into useful design rules?
As we were nearing the end of my time working with Google, Dan Cook published an article on Lost Garden (and later again on Gamasutra) that was the output of a workgroup at Project Horseshoe. It has enormous overlap with what you’re reading, uses many of the same scientific sources, and is very complementary — I highly recommend reading it. Even though Dan and I talk game design when we can and often say very similar things, we didn’t communicate at all on this topic! Dan’s checklist for “Game design patterns for building friendships” looks like this:
- Proximity: Put players in serendipitous situations where they regularly encounter other players. Allow them to recognize one another across multiple play sessions.
- Similarity: Create shared identities, values, contexts, and goals that ease alignment and connection.
- Reciprocity: Enable exchanges (not necessarily material) that are bi-directional with benefits to both parties. With repetition, this builds relationships.
- Disclosure: Further grow trust in the relationship through disclosing vulnerability, testing boundaries, etc.
All of these pieces of advice are dead on. Most of them are about moving from strangers to friends; in other words, moving from the “orientation” stages to “exploratory affective,” from outsider to within the 150. Dan has many examples of specific mechanics that accomplish these things in his article. Our objective, however, was slightly different from the workgroup’s. We wanted not a catalog of mechanics, but something we could measure, ways to deeply break down a game and target it precisely at an audience. Ideally, you’d be able to assess a game, and give it almost a “trust rating,” or perhaps even a spread saying “this game works from this level of trust to this level.”
We did in fact arrive at tools to do that, but I am going to walk you through the process we used to get there, because it’ll probably help you understand it more deeply.
The goal here is to design games that fit with how humans actually interact socially. This means two things:
Designing multiplayer games that make sense for particular levels of trust. In other words, a game might fail if it calls for very high trust, but people tend to play it with strangers or in large groups. Think of the issues that so many players have with pick-up groups in MMOs; think of the endless articles out there on whether to play Diplomacy with your best friends.

So, one would think we could just start putting game mechanics into tiers, and indeed that was how we started out, with a list of over 200 game mechanics that we tried to sort into buckets. We used “vulnerability” as our basic sorting function, since trust is driven so strongly by the level of implied risk if trust is broken. We also just started designing games that were meant to go at different levels.
It didn’t work.
The design problem with breaking game mechanics into trust levels is that virtually all games are actually played at all levels of this spectrum; meaning, you can play competitive games with friends or strangers, a bidding system or supply chain system may exist at any point on the spectrum, etc.
Games designed specifically to leverage elements from different levels of the trust spectrum do have different sorts of characteristics, though. So a more fruitful approach is to ask, what games cannot be played unless at a given level of intimacy or trust?
For example, games exist wherein there is an implicit level of trust that permits good play. All team coordination games are of this sort. In many, there are implicit trust maneuvers which can only be carried out on the assumption that a team member will be positioned correctly in order to assist. Examples might include doubles tennis, or volleyball, particularly two-person beach volleyball. “Blind” moves, where trust alone completes the pass, are among the most common features of high end team play in sports: the initiator of the move passes the ball to empty air, and simply trusts that the receiver will move into position at the exact right time.
At the highest levels of trust exist games of complex coordination involving physical risk, such as that trapeze example, or certain forms of dance, where a failure in trust will result in a significant loss. Think of ice dancing and pairs figure skating, where you have bodies in motion on hard unforgiving surfaces, literally throwing your entire body weight onto someone else who is precariously balanced on moving sharp blades which sometimes swing close to your eye. Heck, sometimes you have to grab said moving sharp blade. Talk about trust falls!
Similarly, there are games where deep knowledge of other players’ thinking patterns and knowledge base permits better play. Examples include games of limited communication, such as charades or bridge. Limiting communication allows people who know each other well to communicate using shorthand of various sorts.
A lot of these games rely on rote “plays” or extensive repetitive training, so that players are used to being in their appropriate positions and roles. This is something that turned out to be a powerful idea – subgames where people have to practice certain maneuvers, but when connected with other players’ maneuvers, they result in a beautiful dance.
A different level of trust is that manufactured purely by fixed roles, where the game effectively implies a sort of economic exchange between players. Many team games have this sort of thing going on – one thinks of soccer, for example, or basketball. These do feature implicit trust moves and blind passes and the like so they stretch to high trust. But they are functional (meaning, you can play at all) at much lower trust levels, because the role is almost parallel play, and it’s a lower trust exchange. Examples here include stuff like throwing in the ball from the sidelines in soccer.
An additional interesting observation is that in practice, most advanced social mechanics are actually about working around a lack of trust among participants. For example, LARPing systems must involve elaborate consent mechanisms and conventions; large-scale economic play in MMOs involves contracts and supply chains and other such things that are meant to ameliorate the fact that trust does not, and cannot, exist at large scales.
Using this as a rough rubric, we then started to classify some broad groups of mechanics as demanding higher or lower levels of trust. As source data, we surveyed gamers of all sorts with the assistance of Quantic Foundry, including several types of digital gamers as well as tabletop players. Quantic Foundry is a market research company focused on gamer motivation; they have developed an empirical model of 12 gaming motivations based on research from over 350,000 gamers. We teased out similarities and differences in play patterns across casual games, lightweight party games, pen and paper RPGs, online multiplayer shooters, clan-style mobile games, MMORPGs, and more.
Some of the findings that are worth mentioning:
There were also a few clusters that appeared that gave a new lens on “casual” and “hardcore” via social trust.
In digital we saw:
In analog we saw something with striking similarities and differences, which likely speaks to the culture of online games:
Everyone likes immersion, everyone likes winning and being competitive. As Emily Greer has pointed out, the definitions of casual and hardcore, or genre preference, are often presented as gendered when playstyle preference isn't. But it was pretty clear that trust is a very important component in whether someone plays games casually or more as a hobbyist, and that it’s an important element in thinking about games with friends versus strangers, particularly for women playing online.
Using the types of features people preferred, the play patterns and audiences for specific game types, and the demographic information on the players, we ended up with a rich data set that let us start to draw empirical conclusions at a high level that cut across every sort of game regardless of platform or physicality. As examples:
Low trust games tend to feature solo activity and parallel play, the mechanics tend to offer verification (think contracts, exchange, bidding), and they focus a lot on identity systems and status. Obviously, competitive games tend to fall in this range. The game of Diplomacy actually relies almost entirely on the idea that contracts aren’t enforceable.
Acquaintance games start to offer low end co-operation. Classic party games like Charades, Pictionary, Apples to Apples, Cards Against Humanity, and the like are all about getting to know other people and how they think; they offer specific roles that players take on that make boundaries very clear. Players help each other, though competition is still a feature. Teams may be a requirement, but teammates don’t need to practice much. Instead, simple reciprocity is all that’s called for. And in many games we see contracts, supply chains, light touch economic interdependence, elections, voting systems, and other forms of mediated trust. This is where all that social architecture in Star Wars Galaxies lived, for example. Rock Band is a great example of a game that sets up roles with tight boundaries, limits how much damage you can do to your partners, and basically sets up contracts between players.
Games that call for friends start demanding that you know your partners well indeed. Non-verbal communication, incomplete information, blind moves, synchronous coordination, and so on. Bridge is an excellent example among card games. Actual musicianship calls for this, when playing in a group; unlike Rock Band, you don’t have a handy screen telling you what everyone else is about to try to do. You start getting games where an individual’s failure means the team fails. Here is where something like Overwatch thrives.
True high trust games start featuring permadeath mechanics (the closest we can get to physical risk in a non-physical game), prisoner’s dilemma, self-sacrifice for the good of the group, and so on.
Diving deeper into these makes us realize that what we are seeing is actually several complementary axes. For example, awareness of game state is something that seems to pretty cleanly map on an axis correlated with trust. The less visibility you have into what your team is doing, the more trust that is required. This is a better way of thinking of the issue than trying to fit “fog of war” and “1st person camera” into a rubric. So a higher level of abstraction was needed.
From the PARC PlayOn paper “Alone Together?”
We took the step back and realized that we needed to define some terms that would help us think more clearly about all this.
Let’s start with the notion of parallel play. There are two common usages of this, and they both basically mean the same thing. In childhood education, it refers to toddlers who play side by side, absorbed in similar activities but not actually interacting with one another. In games, the equivalent is something like a footrace: everyone performs the same activity at the same time, next to each other, with no interdependence at all.
But in games of trust, we’re talking about a range of games where competition is only at the lowest end of the trust scale. At higher levels of trust, we are discussing players on the same team playing with varying degrees of parallelism. In short, parallel play in the childhood education sense is pretty low trust; the footrace is lower still, because it’s actively competitive. The higher you go in the trust spectrum, the less parallel play you see — though as designers, we should be careful never to eradicate it altogether, because if you do, you lose that casual accessibility.
In order to get less parallel play, we have to take the standard terms for symmetric and asymmetric games and apply them to the players on one team. Essentially, symmetry is what permits maximal parallel play. We can think of it as whether players have identical or different capabilities.
In regular head to head games, symmetry and asymmetry are easy to see. Go, Chess, Checkers, Monopoly, Scrabble: all symmetric. And indeed, this is by far the commonest mode in tabletop games. Fox and Geese is asymmetric; there’s one fox on the board and a bunch of geese, and the two sides have different win conditions. Hnefatafl is asymmetric. In videogames, we see the reverse thanks to one of the players being an AI; Pong is symmetric, but in Space Invaders you play against a computer that marshals very different resources from your own.
In team-based games, we have to start thinking within the team.
Think of a basket of verbs. In the case of our hypothetical footrace, all racers have the same basic verbs they can perform: running, breathing. We have symmetry at the verb level (though not at the statistical level — players all have different speed, aerobic capacity, etc — and not at the tactical or strategic level, and of such things is the contest made).
Contrast this to a game of soccer or hockey, where we immediately see that players are not symmetric; we have positions on the team. The goalie in soccer is allowed to touch the ball with their hands; nobody else on the team is, so there’s an example of a unique verb (“catch”) that only one position has.
Asymmetry is complicated. Because there may be quite a suite of verbs available to players as a whole, and some verbs may be available to only one, to several, or to all of the players on a team, we have to think in an abstracted way about symmetry as well. Further, many games — think of soccer again — might actually have the same verbs available to people who are playing different positions, and therefore serve a different purpose on a team.
We landed on two terms. A role was defined as a strategic approach to play. This is the difference between a defense player and a striker in soccer; they have the same set of verbs available to them if you think in terms of verbs as only being “things you can tie to outputs” in the low level sense that I usually use in game grammar. A defense player and a striker can both run, they can both kick the ball, neither are allowed to use their hands.
But games are fractal, made of games. If you “pop the stack” a little, and think about the game one step out from the verbs you input, at the level of dynamics, if you like, you can see that “attack” and “defend” are also verbs, they just exist at the tactical level. (Side note: this is actually how I encourage people to use game grammar diagrams; mapping down at the verb level is an extremely useful tool for detailed analysis, but most structural problems with games are at the tactical level, not the verb level).
A role is distinct from the other term in extremely common use, which is class. A class is best thought of as a fixed basket of player abilities. Goalie is a class as well as a role. A cleric and a paladin are classes; but either one may take on the role of healer. Classes are way less common in sports than in other types of games such as tabletop or digital, because of course human bodies tend to all have the same affordances. Any classes in sports therefore have to be created by the game’s rules. In baseball, for example, pitcher is a class, catcher is a class. They have special “abilities” no other player on their team has.
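To keep the distinction straight, it helps to model each position as a basket of verbs, in the game-grammar sense used above. Here's a minimal sketch (the verb lists, position names, and definitions are my own illustrative assumptions, not drawn from any real ruleset): two positions with identical baskets can still be distinct roles, while a position whose basket nobody else shares is a class.

```python
# Sketch: positions modeled as baskets of verbs.
# The verb lists below are illustrative assumptions, not a real ruleset.

SOCCER_VERBS = {
    "striker":  {"run", "kick", "header", "tackle"},
    "defender": {"run", "kick", "header", "tackle"},
    "goalie":   {"run", "kick", "header", "tackle", "catch"},
}

def is_symmetric(verbs_by_position):
    """A team is verb-symmetric when every position has the same basket."""
    baskets = list(verbs_by_position.values())
    return all(b == baskets[0] for b in baskets)

def is_class(position, verbs_by_position):
    """A position is a class when no other position shares its verb basket."""
    mine = verbs_by_position[position]
    return all(mine != theirs
               for name, theirs in verbs_by_position.items()
               if name != position)
```

Under these definitions, striker and defender come out as distinct roles with the same basket, while goalie, with its unique verb ("catch"), is a class as well as a role.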
Classes, roles, and parallelism give us ways to look at the spectrum of interaction in a cleaner way; but there’s one last thing we have to account for, which is that many types of coordination problems involve resource allocation. And again, it’s best to think of this in abstract ways. Two defenders in soccer are allocated different areas of the field to cover; in soccer, physical space is literally a resource pool to consider. In a relay race, each racer is actually allocated a segment of the race. In other games, it might be how many potions are in the guild bank, or some other sort of limited resource.
Now, as you might guess, looking at each of these gives us differences in trust based on how much of that feature is present. Lots of parallel play, few roles, no classes, independent resources, and you’re down in low trust land. Start adding many roles but keep the other factors the same, and you don’t hugely affect the trust level, but you broaden the game’s audience (more roles equals more playstyles which equals broader appeal). Push roles all the way to classes, and you have reduced accessibility because it means the game isn’t playable without a full team — which is hard to get coordinated but also means the team has to practice with one another to feel competent.

On the other hand, if you did add those classes, but made it so they actually served a smaller number of roles, and the things the roles could do had lots of overlap, then you’re basically enabling trust to exist at either level. Medium trust players and maybe even low trust players could form ad hoc teams and have lots of duplicate role coverage on the team. This reduces the amount of trust they need to have in one another. Advanced teams might actually choose non-overlapping specialists, which means tons of dependency on your teammates. It also means that the game likely has a higher skill ceiling, meaning there is depth there for players to master. So you can ameliorate some choices by pushing one of the other variables in the other direction. It’s the interaction of all of them that ends up shaping the trust level for the game.
In short, you end up with some sliding scales.
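As a toy illustration of how those scales might combine (the choice of axes, the 0 to 3 scale, and the equal weighting are all assumptions for the sketch, not a validated model), imagine scoring a design on each scale and averaging:

```python
# Toy sketch of the "sliding scales": rate a design on a few axes and
# combine them into a rough trust estimate. The axes, the 0-3 scale,
# and the equal weighting are illustrative assumptions, not a model
# anyone has validated.

def trust_estimate(parallel_play, role_dependency, class_rigidity, resource_sharing):
    """Each input runs 0 to 3. Parallel play *reduces* the trust a game
    demands, so it is inverted before averaging."""
    axes = [3 - parallel_play, role_dependency, class_rigidity, resource_sharing]
    return sum(axes) / len(axes)  # 0.0 = no trust needed, 3.0 = high trust

# A footrace: maximal parallel play, no roles, no classes, separate resources.
footrace = trust_estimate(parallel_play=3, role_dependency=0,
                          class_rigidity=0, resource_sharing=0)

# Trapeze: no parallel play, rigid classes, shared physical risk.
trapeze = trust_estimate(parallel_play=0, role_dependency=3,
                         class_rigidity=3, resource_sharing=3)
```

The point of the sketch is only that the axes interact: pushing one slider up can be offset by pulling another down, which is exactly the design space the rest of this section explores.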
That’s all very abstract. Let’s get concrete with some examples of well-known games.
Lots of shoot-em-ups let two players fly their ships at the same time. Raiden is one classic example.
All in all, Raiden evaluates out as pretty low trust. The presence or absence of another player doesn’t make that big a difference to you as a player, and what synergies arise from teamwork are entirely optional.
Gauntlet adds some much more rigid stuff into the mix, such as classes. But when we look at the verbs, we find that the classes aren’t that significantly different from one another. All four classes in the original game have melee, ranged, and magic attacks, so there are no verb differences. Instead, the real differences are statistical: movement rates, damage taken, damage dealt, magic strength, and so on. Again, there are pickups on the field, the most important of which is health (since all players are on a constant timer ticking down). However, a number of subtle design choices make a massive difference to the trust spectrum.
I’ve mentioned trapeze multiple times here, and as you might guess, it’s very high trust.
A key thing about trapeze, then, is that it doesn’t really have much of an on-ramp. It’s hard to find low-trust trapeze! (Well, more than once anyway…) This may be why traditionally it’s been done by families, where high trust relationships already exist.
This is what makes games like soccer so interesting. Soccer is one of the most democratic games in the world. Its variants range from street soccer played with a ball of twine or tied up t-shirts, with a couple of rocks serving as goalposts, all the way up to the sort played in stadiums. At the low end, street soccer is often played one on one, with no team at all! Garage doors make for a great goal that provides a “crossbar,” if you decide to play two on one; one player acts as goalie in front of the garage door, and the other two try to score against them in a two-team asymmetric game. And of course, there’s the common sight of a host of smaller kids who haven’t learned team coordination “playing alone together” and all chasing the ball around on a larger pitch.
Soccer therefore has a very low trust on-ramp for players who want to start learning basic skills. But at the high end of play, soccer exhibits a whole bunch of high trust traits.
There are many sports that exhibit some of these characteristics. Basketball, volleyball, and American football all rely on the same sort of blind passes. It’s worth thinking about why soccer is the most popular of all of these, worldwide, and some of the answer may rest in the game’s breadth across the trust spectrum (as well as the simplicity of the equipment required). Football has far more rigid roles in the mix — one reason why it’s fundamentally a less accessible game. Football is harder to play without a larger group, because of the dependency on specific roles — for example, at the moment of hiking the ball. Volleyball actually can’t function without high trust because it’s so dependent on blind passes (which call for a good amount of skill to boot), and the roles are way more rigid. In volleyball and basketball, there are rules that basically affect how long an individual can maintain control of the ball, which forces coordination and therefore trust (one-on-one basketball may or may not bend the rules here a bit, and of course, the NBA itself has progressively bent the traveling rule over the decades, reducing the amount of teamwork required in the game).
We might therefore plot all these games on a chart and see that they have a trust range to them.

One exercise we found helpful was actually to map individual features onto this range. It is capable of showing you a potential design flaw which none of the above games have — a broken line. See how soccer stretches seamlessly across a range? How other games have narrower ranges, but still cover a gamut? Well, it’s entirely possible (we did it, by accident) to create a game that has a set of low trust features and then one or more high trust features, and nothing in the middle. And if you don’t have the stuff in the middle, then the game is actually effectively broken. There’s no way for players to move through the levels of trust formation with teammates if there aren’t game features that let them do that.
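The "broken line" check itself is mechanical once the hard, judgment-driven work of tagging each feature with a trust level has been done. A minimal sketch, assuming levels 0 (no trust) through 3 (high trust):

```python
# Sketch of the "broken line" check: given the trust levels
# (0 = no trust ... 3 = high trust) that a game's features have been
# tagged with, flag any gap between the lowest and highest level
# covered. The tagging is the hard part; this only checks continuity.

def coverage_gaps(feature_levels):
    covered = set(feature_levels)
    lo, hi = min(covered), max(covered)
    return [level for level in range(lo, hi + 1) if level not in covered]

# Soccer-like: features at every level along its range, so no gap.
assert coverage_gaps([0, 1, 1, 2, 3]) == []

# Broken design: low trust features plus one high trust feature,
# with nothing in the middle for players to climb through.
assert coverage_gaps([0, 0, 3]) == [1, 2]
```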
Some basic things this suggests as rules of thumb, and which we were able to compare back against our research data and validate:
This, however, while useful, is still not detailed enough for us to really start designing with. Just as an example, one of the classic challenges in co-operative game design is what is called the Quarterback Problem. This description from BoardGameGeek is a good overview of it:
> Many coop games are superficial, in reality being nothing more than a single-player game under the management of a group. As a result, there is plenty of space for the dominating player who knows the game best to become that single-player, orchestrating their plan – like a puppet-master – through the rest of the group.
Pandemic is an example of a game that is often criticized for being vulnerable to this design problem. As one might guess, football itself is of course a poster child for it; the roles played by most of the team don’t have many interesting decisions to make. The trust spectrum above doesn’t (yet) give us the tools to address that issue. And yet games ranging from Killer Queen to Spaceteam don’t seem to have it.
In looking at the ways those two large axes of parallelism vs asymmetry and shared vs individual resources manifest even in those examples above, we see that we need to break the framework down into even more detail.
These all start arising organically from one another, as you can see, but they are deeply intertwined and mutually dependent. You quickly end up with a table sort of like this:
| | | No trust | Low trust | Medium trust | High trust |
| --- | --- | --- | --- | --- | --- |
| Forms of parallelism and asymmetry | Teams | Probably don’t exist | May exist, ad hoc | Exist | Exist and tend to persist |
| | Roles | Everyone can do the same things | Overlapping roles let players supplement one another | Increased dependence | Classes or other forms of high dependency |
| | Agency | Control over self | Can cooperate | Connected moves, combos with other players | Moves made on behalf of other players |
| | Success metrics | Individual metrics | Mix of individual and shared metrics | Shared metrics only | Self-sacrificing success, where teams win but individuals may suffer |
| Types of shared resources and liabilities | Risk | Light or no loss from a trust fail | Lose incremental status only; relatively painless | Tough penalties for trust fails | “Permadeath” style penalties, calling for perfect team coordination |
| | Information | Perfect information on stats and resources | Imperfect information | Imperfect info and/or time pressure causing cognitive load | No information on shared pools and resources, nor on teammate status |
| | Exchange | Secure trade, code-verified systems | Gifting or other altruistic actions, to drive reciprocity | Synchronous exchanges | Blind giving of one’s own critical resources without knowing recipient state |
| | Time | Low time pressure for coordination, asynch gameplay | Escalating time pressure | Synchronous real-time play | Synchronous, in-person play, often with high time pressure |
| | Ownership | Separate | Separate and shared | Shared pools | Shared pools but only some teammates can access |
Further, we can take some of our example games and look at them using this rubric, and see exactly how they play out:
As expected, the detailed breakdown shows why Raiden can spread across low trust and no trust at all. The two player version is a lightweight introductory game suitable for people who don’t know each other well, and there’s only so much you can do to coordinate.
| | | No trust | Low trust | Medium trust | High trust |
| --- | --- | --- | --- | --- | --- |
| Forms of parallelism and asymmetry | Teams | | Ad hoc | | |
| | Roles | | Overlapping roles let players supplement one another | | |
| | Agency | Control over self | | | |
| | Success metrics | Individual metrics | Mix of individual and shared metrics | | |
| Types of shared resources and liabilities | Risk | Light or no loss from a trust fail | | | |
| | Information | Perfect information on stats and resources | | | |
| | Exchange | Secure trade, code-verified systems | | | |
| | Time | | Escalating time pressure | | |
| | Ownership | Separate | | | |
Here we can see that soccer supports more than one trust level to play at within the same axis, which is what gives it such accessibility and a high trust range. You can play soccer with teams or without, and the teams might be high trust and high investment things like playing for Barcelona, or might be a pick up game in the street. You can play in ways that ignore other players, or play at high levels of coordination. You can pay more attention to your own stats, or to how the team does. And so on.
| | | No trust | Low trust | Medium trust | High trust |
| --- | --- | --- | --- | --- | --- |
| Forms of parallelism and asymmetry | Teams | Probably don’t exist | May exist, ad hoc | Exist | Exist and tend to persist |
| | Roles | | Overlapping roles let players supplement one another | Increased dependence | Classes or other forms of high dependency |
| | Agency | Control over self | Can cooperate | Connected moves, combos with other players | |
| | Success metrics | | Mix of individual and shared metrics | | Self-sacrificing success, where teams win but individuals may suffer |
| Types of shared resources and liabilities | Risk | | | Tough penalties for trust fails | |
| | Information | | | Imperfect info and/or time pressure causing cognitive load | |
| | Exchange | | | | Blind giving of one’s own critical resources without knowing recipient state |
| | Time | | | | Synchronous, in-person play, often with high time pressure |
| | Ownership | | | Shared pools | |
And trapeze? Well, it’s basically terrifying.
| | | No trust | Low trust | Medium trust | High trust |
| --- | --- | --- | --- | --- | --- |
| Forms of parallelism and asymmetry | Teams | | | | Exist and tend to persist |
| | Roles | | | | Classes or other forms of high dependency |
| | Agency | | | Connected moves, combos with other players | Moves made on behalf of other players |
| | Success metrics | | | | Self-sacrificing success, where teams win but individuals may suffer |
| Types of shared resources and liabilities | Risk | | | | “Permadeath” style penalties, calling for perfect team coordination |
| | Information | | | | No information on shared pools and resources, nor on teammate status |
| | Exchange | | | | Blind giving of one’s own critical resources without knowing recipient state |
| | Time | | | | Synchronous, in-person play, often with high time pressure |
| | Ownership | | | | Shared pools but only some teammates can access |
As you can see, these detailed breakdowns line up very well to the high-level trust ranges that were in the earlier trust range image.
If you’re just aiming for a game at a particular trust level, the advice is easy:
Making a high trust game isn’t hard. Here’s an example. Let’s say you have a fast-paced game where your health always goes down. You can’t raise it yourself. What you can do is give some of your own health to someone else. Let’s also say there is stuff to pick up that adds health back. But there’s also something hunting you — a doppelganger that instantly slays you if it touches you. And you can’t fight back. A teammate can easily dispose of your doppelganger by touching it, but the doppelganger respawns fairly quickly. But every player has a doppelganger too… and dying puts you out of the game entirely. If all the stuff to pick up is grabbed, you all go to the next level, and all come back into the game. If even one item is left to collect when the last player is touched by their doppelganger, you all lose.
What we have is a system where
I could go on… this is a high trust game, and can be amazing to watch and terrifying to play. You can already picture the moment when a player with almost no health left throws themselves in front of a doppelganger to sacrifice themselves, allowing the last player to just barely touch the last required item to beat the level. Self-sacrifice for the good of the team, altruism, tight coordination under time pressure… it’s all there. I know, because we built a couple of games very much like this, and they were fun almost immediately, generating all the sorts of social responses we wanted: cheers, groans, emotional support when someone was caught in a tough situation, valiant self-sacrifices, recriminations when a teammate let you down, and all the rest.
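The rule set above is simple enough to sketch as a toy simulation. Here is a minimal sketch in Python, with all numbers, names, and the exact win/loss checks invented for illustration; this is not the actual prototype code:

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    health: int = 10
    alive: bool = True

def tick(players):
    """Every living player's health drains each tick; no self-healing exists."""
    for p in players:
        if p.alive:
            p.health -= 1
            if p.health <= 0:
                p.alive = False

def give_health(giver, receiver, amount):
    """The only way to raise health: another player donates their own."""
    if giver.alive and receiver.alive and giver.health > amount:
        giver.health -= amount
        receiver.health += amount

def doppelganger_touch(victim):
    """A doppelganger's touch slays instantly; the victim cannot fight back."""
    victim.alive = False

def level_state(players, items_remaining):
    """All pickups collected: everyone (including the fallen) advances.
    The last player slain with items still left: the whole team loses."""
    if items_remaining == 0:
        return "win"
    if all(not p.alive for p in players):
        return "lose"
    return "ongoing"
```

Even this crude sketch shows the interdependency at work: the health-transfer rule makes every player’s survival someone else’s responsibility.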
But a pure high trust game is also incredibly inaccessible, because a novice team will screw up quickly and perish in ways they don’t even understand. In fact, one of our prototypes was so fragile that adjusting the size of the map broke it completely, because it increased time pressure to the degree that we couldn’t even play effectively.
So yeah, it’s easy to design a high trust game, once you are in the mindset. What’s hard is designing for high and low trust at the same time. Something like soccer, a game that has an incredibly wide spread, starts introducing real complications precisely because you are working with multiple trust levels at once. You have to supply mechanics that work in roles or solo, classes and parallelism, shared and individual success.
The solution to this is to be very very careful and analytical about your interdependencies. Here are some rules of thumb we uncovered through prototyping:
We built prototypes that proved these rules of thumb out too. But we also found that it was quite a tough design challenge, and as we worked on it, it was easy to break the intricate relationships. Among the mistakes we made:
All this stuff you’ve been reading may seem highly… theoretical. Is there a way to actually measure these behaviors? These days, of course, we have metrics systems, and it’s easy to measure things like conversion, retention, number of friends, or even time spent playing between a given pair of players. But that doesn’t give you anything actionable you can apply to the design; as we have seen, any number of things in the game may end up greatly affecting where it lands in terms of trust. What we need are objective metrics we can apply so that we aren’t looking for the why of things, but instead the how.
The challenge is that different games can build radically different sorts of trust mechanics; ultimately, each game’s developers must make a judgement call as to which metrics fall into which buckets for the purposes of data gathering.
Here’s one way to measure all the foregoing, with simple tags that should slot easily into existing game metrics systems. I’ll give a list first, then explain them all:
| Action | Description | Data |
| --- | --- | --- |
| Assistance | When a player helps another | Both player ids, type of assistance (heal, gift, possibly including gift id), timestamp |
| Exchange | When a player trades help with another | Both player ids, object exchanged, timestamp |
| Coordination attempt | When a player tries to coordinate | Both player ids, coordination type (pass, combo name, etc.), timestamp, and an ad hoc event id |
| Coordination completion | When coordination succeeds | Event id it completes, time elapsed to completion |
| Non-exclusive role action | When a team player performs an action only someone in their position could do | Id, action tag so you can identify different actions |
| Redundant role action | When a team player performs an action only some team players can do | Id, action tag |
| Exclusive role action | When a team player performs an action only they could do | Id, action tag |
| Betrayal | When a team player performs an action against the team | Id of betrayer, id of betrayal target if it exists (e.g. friendly fire), ids of everyone on the team who was betrayed, action tag |
| Played with | Logged when you engage in multiplayer play with another player | Id pairs, session length |
| Team fail | Logged when a definite action such as scoring a point is logged against a team | Ids of all players on the team, action tag |
These metrics are chosen because they are most easily detectable in code without being bypassed by verbal communication (a form of coordination we cannot capture). Note that the boundaries of these events must be policed fairly tightly, as there is the possibility of overlap.
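As a sketch of how these tags might slot into an existing metrics system, here is a hypothetical event logger. The event names mirror the table above, but every function name and payload field is an assumption of mine, not an API from the project:

```python
import time

EVENT_LOG = []  # stand-in for a real analytics pipeline

def log_event(action, **data):
    """Emit one trust-spectrum event as a flat, queryable record."""
    EVENT_LOG.append({"action": action, "ts": time.time(), **data})

# Assistance: both player ids plus the type of help given.
def log_assistance(helper_id, helped_id, kind, gift_id=None):
    log_event("assistance", helper=helper_id, helped=helped_id,
              kind=kind, gift_id=gift_id)

# Coordination is split into an attempt and a completion that
# references the attempt's ad hoc event id.
def log_coordination_attempt(a_id, b_id, kind, event_id):
    log_event("coordination_attempt", a=a_id, b=b_id,
              kind=kind, event_id=event_id)

def log_coordination_completion(event_id, elapsed):
    log_event("coordination_completion", event_id=event_id, elapsed=elapsed)

# Betrayal records the whole wronged team, not just the direct target.
def log_betrayal(betrayer_id, team_ids, action_tag, target_id=None):
    log_event("betrayal", betrayer=betrayer_id, target=target_id,
              team=list(team_ids), tag=action_tag)
```

The point of the flat record shape is that each event carries everything needed to reconstruct who trusted whom, without joining against game state after the fact.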
Once you have these basic metrics, it’s easy to start building a layer of more sophisticated ones on top by tracking history. For example:
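As one hedged illustration of such a history layer (the event shape, field names, and scoring formula here are all my own inventions), raw pairwise events can be rolled up into running per-pair counters:

```python
from collections import defaultdict

def pair_key(a, b):
    """Order-independent key so (a, b) and (b, a) share one history."""
    return tuple(sorted((a, b)))

def build_pair_history(events):
    """Roll a raw event stream up into per-pair counters over time.
    Events are assumed to be dicts like
    {"action": "assistance", "a": <player id>, "b": <player id>}."""
    history = defaultdict(lambda: {"assists": 0, "betrayals": 0, "sessions": 0})
    for e in events:
        pair = pair_key(e["a"], e["b"])
        if e["action"] == "assistance":
            history[pair]["assists"] += 1
        elif e["action"] == "betrayal":
            history[pair]["betrayals"] += 1
        elif e["action"] == "played_with":
            history[pair]["sessions"] += 1
    return dict(history)

def trust_signal(stats):
    """A crude illustrative score: help per session, discounted by betrayals."""
    sessions = max(stats["sessions"], 1)
    return (stats["assists"] - 2 * stats["betrayals"]) / sessions
```

A real system would weight these by recency and context, but even this toy version distinguishes a pair that reliably helps each other from one that merely shares a lobby.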
The core concepts used in these are the notion of a player’s statistics versus actions.
Statistics here are broadly defined to include actual values on a player object, plus all objects or equipment in the player’s possession, etc. A player’s health, ammunition, or gear all count as statistics.
Actions are defined as commands a player gives to the system. This may include state changes on the player, such as receiving a ball in a pass, and it of course includes actions such as initiating the pass.
Some actions are deeply contextual, and may be hard to identify. For example, a defender blocking a shot on goal is performing a “defend” action. It is up to the game developers to attempt to identify when this action has occurred.
Other actions are explicit commands, such as “shoot the ball.”
Roles are defined as collections of actions that can be performed by a player. Roles really come in two flavors: some arise from a player’s intrinsic capabilities (the set of explicit commands they are equipped with, which might vary by player), and some arise from the position a player happens to occupy in the game.
If the action is something that any other given player on the team would be able to do if they were in that position, then it is a non-exclusive role action. If the action can only be performed by this player because of the player’s intrinsic capabilities, then it is an exclusive action.
In the example above, any player can interpose themselves between the goal and the ball. This is a non-exclusive action; any player, thanks to their location, may be in the role of defender, and “defend” is an available action in that position.
Conversely, only the goalie can catch the ball in their hands. This is an exclusive action; no other players are allowed to touch the ball with their hands.
In an RPG, a mage is capable of an exclusive role action: they can cast a spell. You might have multiple mages, however. In this case, you have one other distinction, which is redundancy. A very tightly defined team may have no role overlap whatsoever. A less rigidly defined team might have role overlaps.
There is one more type of action, which is an action that any player can perform at any time whatsoever, regardless of circumstance. These sorts of actions do not need to be tracked for trust spectrum purposes, but you might want to log them as universal actions, just so you have a baseline to compare the other sorts of actions against.
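The taxonomy above (universal, non-exclusive, redundant, exclusive) can be sketched as a classifier over a capability table. The soccer-flavored data below is purely illustrative, and the rule for "positional" actions is my own simplification:

```python
# Hypothetical capability table: the intrinsic actions each player is
# equipped with (these would vary per game; the data here is invented).
CAPABILITIES = {
    "goalie":  {"catch_ball", "kick", "defend", "long_throw"},
    "striker": {"kick", "defend", "long_throw"},
    "mid":     {"kick", "defend"},
}

# Actions whose availability comes from position on the field rather
# than from intrinsic capability.
POSITIONAL = {"defend"}

def classify_role_action(action, team=CAPABILITIES):
    """Classify an action into the trust-spectrum taxonomy."""
    capable = [p for p, caps in team.items() if action in caps]
    if len(capable) == len(team):
        # Everyone could do it: positional context decides whether it is
        # a non-exclusive role action or merely a universal action.
        return "non_exclusive" if action in POSITIONAL else "universal"
    if len(capable) == 1:
        return "exclusive"   # e.g. only the goalie may handle the ball
    return "redundant"       # e.g. several mages overlap on one spell
```

In practice the hard part is building the capability table itself, since, as noted above, some actions are deeply contextual and hard to identify in code.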
Teams are defined as formal, code-recognized groupings of players. They may not feature in free-for-all games at all, or may form briefly and on the fly (for example, in a game of tag, there are two teams at all times, but membership changes during the game).
If you take a game and just note the expected frequency or necessity of these actions, you should end up with a decent sense for where the game will fall on the Trust Spectrum. A preponderance of exclusive role actions and high expectation of coordination completion will mean that you are demanding a high trust level. You should even be able to see well-functioning teams versus ones that haven’t gelled, based on completion successes, low betrayal, and low incidence of team fails.
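As a hedged illustration of that placement logic, one might weight the logged frequencies like this; the event names follow the metrics table, but the weights and cutoffs are invented for the example, not tuned values:

```python
def estimate_trust_demand(counts):
    """Guess where a game sits on the Trust Spectrum from logged
    action frequencies. `counts` maps event names to totals."""
    total = sum(counts.values()) or 1
    # Exclusive role actions and completed coordination are the
    # strongest signals that play demands high trust.
    high_signal = (counts.get("exclusive_role_action", 0)
                   + counts.get("coordination_completion", 0)) / total
    if high_signal > 0.5:
        return "high trust"
    if high_signal > 0.2:
        return "medium trust"
    return "low trust"
```

The same score computed per team, rather than per game, would give the "gelled versus not gelled" read described above.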
Using this, you can then actually tune your game or its marketing, in order to best match up the play and the audience. You can also provide a better matchmaking system, grouping players that track to similar levels on the Trust Spectrum, or teams that function at the same level.
Designing games for trust is also designing games for human fulfillment. It is designing for happiness. It’s designing in ways that are #good4players and their relationships. Play is Love. We hope that these tools help you go out there and bring more of it into the world.
Aaron Cammarata will be giving a short, ten-minute talk, scheduled for 11:37 on Monday morning (3/19/2018) at GDC 2018. It’s part of the Innovation and New Platforms section of the Google Developer Day. It will be streamed live: https://events.withgoogle.com/google-gdc-2018/live-stream/#content
Building a better multiplayer
Join us for a fresh look at multiplayer game design. We introduce the Trust Spectrum – a design lens developed at Google that unlocks the potential to create meaningful human experiences. Based on decades of social science, it helps you build great games that support healthy connection, friendship, and trust.
The topic of cooperative play has been quite an active subject in the last few years in game design circles. In particular I’d like to point you at
There’s far more than I can possibly point to on the psychology and sociology of all this. Dan’s article has an excellent reading list. Several papers and sources were already linked throughout the article. We also have this list of fairly accessible entry points into the enormous literature on play and social behavior:
Special thanks are due: this (massive) article would not have been possible without Aaron, who shared his vision for games that brought people together, identified the core problems, built a team to tackle them, and provided the space for us to all work on this problem for an extended period of time; and of course, to Google who made that possible.
Three separate development teams (HookBang, 20after1, and Schell Games) created prototype games demonstrating aspects of trust spectrum design with the able assistance of Aaron’s team at Google and Massive Black, and in particular I’d like to call out the folks who were in on the project early and helped define the core attributes of trust spectrum play: Brian Clark, Jason Leung, Noah Falstein, Justin “Coro” Kaufman, Melissa Lee, Frank Roan, and Christina Davis. The early wrestling with nitty-gritty trust spectrum design in prototypes fell heavily on (in rough chronological order) Tony Morone, Sean Vesce, and Justin Leingang. We had invaluable research help from several sources, and I want to specifically mention Nick Yee and the rest of the folks at Quantic Foundry, who helped us in our analysis of extant games out in the world that we used as source data to refine our model.