The Psychology of Choice
February 6, 2002 Page 1 of 2
The play of any computer game can be described as a series of choices. A player might choose the left- or right-hand tunnel, decide to skip this target and save ammunition, or play a fighter rather than a mage. The total path of a player through the game is the result of a thousand little choices, leading to success or failure in the game and to enjoyment or dislike of the game itself. The principles underlying the choices players make, and the ways in which a designer can shape those choices, are a key component of game design.
As in my previous article, the kind of psychology discussed here is often called behavioral psychology. This sub-field of psychology focuses on experiments and observable actions, and is a descriptive rather than normative field of study. Instead of looking at what people should do, it studies and tries to explain what they actually do. By understanding how people react to different kinds of choices, we can design games that help them make the kind of choices that they'll enjoy, and understand how some game designs can unintentionally elicit bad choices.
The most obvious thing to do when confronted with multiple options is to pick the choice or pattern of choices that maximizes reward. This is the sort of solution sought by game theory, one that mathematically guarantees the greatest level of success. While most players don't try to work out the exact algorithms behind weapon damage, they will notice which strategies work better than others and tend to approach maximal reward.
Usually, participants maximize when the choices are simple and deterministic. The more complex the problem, the more likely they are to engage in exploratory actions and the less likely they are to be sure that they are doing the optimal thing. This is particularly true in situations where the contingency is probabilistic. If the pit monster attacks every time the player gets to a certain point, they'll quickly pick this up and learn the optimal point to jump over it. If it attacks probabilistically, the player will take longer to guess what rules govern the pit monster's attack.
While maximizing is the best thing for the player, it's probably not a good thing for the designer. If the player is doing as well as it's possible to do, it implies that they've mastered the game. It also means that the game has become perfectly predictable and most likely boring. A contingency with an element of randomness will maintain the player's interest longer and be more attractive. For example, subjects will generally prefer a 30 second variable interval schedule (rewards being delivered randomly between zero and sixty seconds apart) to a 30 second fixed interval schedule (rewards being delivered exactly 30 seconds apart), even though both provide the same overall rate of reward.
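The equivalence of the two schedules can be sketched in a few lines of code. This is a simplified model, not taken from any particular game: each schedule is reduced to a list of waiting times between rewards, with the variable intervals drawn uniformly between zero and twice the mean, as in the example above.

```python
import random

def fixed_interval_schedule(mean_seconds, n):
    """Rewards arrive exactly mean_seconds apart."""
    return [mean_seconds] * n

def variable_interval_schedule(mean_seconds, n, rng=random.Random(0)):
    """Rewards arrive at random intervals, uniform between 0 and
    2 * mean_seconds, so the average interval is still mean_seconds."""
    return [rng.uniform(0, 2 * mean_seconds) for _ in range(n)]

fixed = fixed_interval_schedule(30, 1000)
variable = variable_interval_schedule(30, 1000)

# Both schedules deliver the same overall rate of reward.
print(sum(fixed) / len(fixed))        # exactly 30.0
print(sum(variable) / len(variable))  # close to 30
```

The point of the sketch is that nothing about the overall payout distinguishes the two schedules; only the unpredictability of the variable one does, and that unpredictability is what subjects prefer.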
There is another, subtler problem with maximizing. As discussed in the previous article, sharp declines in the rate of reward are very punishing for players and can result in quitting. If the player has learned to maximize their reward in one portion of the game, creating a high and consistent level of reward, moving to another part or level of the game will most likely result in a drop in reward. This contrasting low level of reward is extremely aversive and can cause the player to quit. It may even be an effective punishment for exploring new aspects of the game, as the transition from the well understood portion to the unknown marks an inevitable drop in rewards.
To avoid maximizing, there are two basic approaches. First, one can make sure that the contingencies are never so simple that a player could find an optimal solution. The easiest way of doing this is to make the contingencies probabilistic. Massive randomness isn't necessary, just enough to keep players guessing and engaged. Second, the more options the game offers for the player to compare, the less likely it is that there will be a single clear ideal strategy. If all the guns in the game work the same but do different levels of damage, it's easy to know you have the best one. If one gun is weaker but does area damage and another has a higher rate of fire, players can explore a wider variety of strategies. Once there is a clear best way to play the game, it ceases to be interesting in its own right.
Once there are multiple options producing rewards at different rates, the most common pattern of activity observed in humans and animals is matching. Essentially, matching means that the player is allocating their time to the various options in proportion to their overall rate of reward. More formally, this is referred to as the Matching Law, and can be expressed mathematically as the following equation: B1 / (B1 + B2) = r1 / (r1 + r2), where B1 and B2 are the amounts of behavior (or time) allocated to the two options and r1 and r2 are their rates of reward.
Let's say our player Lothar has two different areas in which he can hunt for monsters to kill for points. In the forest area, he finds a monster approximately every two minutes. In the swamp area, he finds a monster every four minutes. Overall, the forest is a richer hunting ground, but the longer Lothar spends in the forest the more likely it is that a new monster has popped up in the swamp. Therefore Lothar has a motive to switch back and forth, allocating his time between the two alternatives. According to the Matching Law, our player will spend two-thirds of his time in the forest and one-third in the swamp.
The key factor in matching is rate of reward. It's the average amount of reward received in a certain period of time that matters, not the size of an individual reinforcer or the interval between reinforcers. If the swamp has dragons that give Lothar 100 points, while the forest has wyverns that give him only 50 points but appear twice as often as the dragons, the overall rates of reward are the same and both areas are equally desirable.
Now that I've set up a dichotomy between matching and maximizing, let me confuse things a bit. Under many circumstances, matching is maximizing. By allocating activity according to rate, the player can receive the maximal amount of reward. In particular, when faced with multiple variable interval schedules, matching really is the best strategy. What makes matching important to our understanding of players is that matching appears to be the default strategy when faced with an ongoing choice between multiple alternatives. In many cases, experiments show subjects matching even when other strategies would produce higher rates of reward.
Matching (and switching between multiple options in general) also has the helpful property of smoothing out the overall rate of reward. If there are several concurrent sources of reinforcement, a dip in one of them becomes less punishing. As one source of points falls off, a player can smoothly transition to others. A player regularly switching back and forth between options also has a greater chance of noticing changes in one of them.
Overmatching, Undermatching, and Change-Over Delays
At its discovery, matching was hailed as a great leap forward, an example of a relatively complex human behavior described by a mathematical equation, akin to physics equations describing the behavior of elementary particles. However, it was quickly discovered that humans and animals often deviated from the nice straight line described by the Matching Law. In some situations, participants overmatched, giving more weight to the richer option and less to the leaner option than the equation would predict. In others, the participants undermatched, treating the various contingencies as more equal than they actually were.
Neither of these tendencies is especially bad for game design, in small quantities. As long as the players are exploring different options and aren't bored, we don't usually care how much time they spend on each. Extreme undermatching implies the player isn't really paying attention to the merits of each option. Overmatching can mean that the player has chosen an option for reasons other than merit, such as enjoyment of the graphics.
Fortunately for behavioral psychology, these deviations could be predicted and controlled. One important factor in determining how closely participants match is the amount of time and/or effort required to change between options. The farther apart the options are or the more work is required to switch between them, the more players will tend towards overmatching. For example, imagine a typical first person shooter game, in the vein of Quake or Unreal. If switching from their current gun to a different one has a delay of 20 seconds during which they can't fire, they'll switch from one to another less often than they would otherwise. Even if the current gun isn't perfect for the current situation, the changeover cost might keep the player from switching. If the delay is long enough, switching can become non-existent as the costs outweigh any possible benefits.
At the other end of the spectrum is the case where changeover is instantaneous. Consider a massively multiplayer game where monsters spawn periodically in various locations. Switching between multiple spawning sites normally takes time, but suppose a player could teleport instantly from one to another with no cost. The best strategy would be to jump continuously back and forth, minimizing the time between the appearance of a monster and the kill. That makes sure the player gets as many points as possible in a given period of time.
Obviously, neither of these extremes is really desirable for game designers. Ideally we want to be able to adjust the time, difficulty, or expense of changing strategies to strike just the right balance between exploration and exploitation. What that balance should be is an individual design choice; the change-over delay is simply one tool for achieving it.