A few special cases in the study of contingencies deserve particular mention. First, there are "chain schedules," situations where the contingency has multiple stages. For example, players may have to kill 10 orcs before they can enter the dragon's cave, but the dragon may appear there only at random points in time. These schedules are most commonly found in multi-stage puzzles and RPG quests, and people usually respond to them in a very specific way: they treat access to the next stage of the schedule as a reward in itself. In the example just mentioned, most players would treat the first part as a fixed ratio schedule, the reward being access to the subsequent variable interval schedule.
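As a rough illustration, the two stages of the orc-and-dragon example could be modeled as a fixed ratio stage gating a variable interval stage. This is a minimal sketch; the names and numbers are invented for illustration, not taken from any real game:

```python
import random

# Chain schedule sketch: a fixed ratio stage (kill ORCS_REQUIRED orcs)
# unlocks a variable interval stage (the dragon appears after a random
# delay). ORCS_REQUIRED and MEAN_DRAGON_DELAY are illustrative values.

ORCS_REQUIRED = 10          # fixed ratio: access granted after exactly 10 responses
MEAN_DRAGON_DELAY = 60.0    # variable interval: average seconds until the dragon appears


def orcs_until_cave_opens(orcs_killed: int) -> int:
    """Fixed ratio stage: how many more kills until the next stage unlocks."""
    return max(0, ORCS_REQUIRED - orcs_killed)


def schedule_dragon_spawn() -> float:
    """Variable interval stage: an exponential delay gives a roughly
    constant probability of the dragon appearing at any given moment."""
    return random.expovariate(1.0 / MEAN_DRAGON_DELAY)
```

Note how the "reward" of the first stage is simply a state change: the cave opens, and the player moves on to the second schedule.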
Secondly, there is the question of what happens when you stop providing a reward, which is referred to as "extinction." Say the player is happily slaying the dragon every time it appears, but after a certain number of kills it no longer appears. What will the player do? The answer is that behavior after the end of a contingency is shaped by what the contingency was. In a ratio schedule, the player will continue to work at a high rate for a long period of time before gradually trailing off. In a fixed interval schedule, their activity will continue to peak at about the time they expect to be rewarded for a few intervals before ceasing.
As a general rule, extinction involves a lot of frustration and anger on the part of the subject. We expect the universe to make sense, to be consistent, and when the contingencies change we get testy. Interestingly, this is not unique to humans. In one experiment, two pigeons were placed in a cage. One was tethered to the back of the cage while the other was free to move about as it wished. Every 30 seconds, a hopper provided a small amount of food (a fixed interval schedule, as described earlier). The free pigeon could reach the food but the tethered one could not, and the free pigeon happily ate all the food every time. After an hour or so of this, the hopper stopped providing food. The free pigeon continued to check the hopper every 30 seconds for a while, but once it was clear that the food wasn't coming, it went to the back of the cage and beat up the other pigeon. Now, the interesting thing is that the tethered pigeon had never eaten the food, and the free pigeon had no reason to think the other was responsible for the food stopping. The frustration is irrational, but real nonetheless.
This simple experiment illustrates the principle of extinction-induced aggression: when an expected reward stops coming, the resulting frustration finds a target, deserved or not.
A related phenomenon, called "behavioral contrast," occurs in chimpanzees, among other species. A chimpanzee performs a simple task, such as pulling a lever, and is rewarded with pieces of lettuce, which they like to eat. After doing this for a while, one pull is rewarded with a grape, which they really love. On the next pull, the chimp is given lettuce again and gets very upset, throwing the lettuce at the experimenter. They were perfectly happy with lettuce before, but the presentation of the grape creates new expectations, and when those expectations aren't met, frustration and anger invariably result.
The moral here is that reducing the level of reinforcement is a very punishing thing for your players and can act as an impetus for them to quit the game. It needs to be done carefully and gradually, or there may be an undesirable backlash. This applies even to temporary reductions, such as when killing orcs stops producing points but the player has not yet discovered that trolls can be killed instead. Sudden loss of reward is very aversive and should be avoided when possible.
A final special case that bears mentioning is what is called "avoidance": contingencies where the participants work to keep things from happening. A simple laboratory example involves a rat in a cage with a small lever. Every so often, a small shock (on the order of a static discharge from walking across a carpet) is given through the metal floor of the cage. However, if the rat presses the lever, the shock won't happen for at least 30 seconds. The rat quickly learns to press the lever at a slow, steady rate and thus prevent the shock from occurring.
The best game example of this I know of is in Ultima Online, where players who own castles or houses are required to visit them regularly or they'll start to decay. As in the laboratory example above, you have participants who are working to keep things from happening, to maintain the status quo. This is a relatively cheap strategy from the point of view of game developers, since they don't have to keep providing the player with new toys or rewards.
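Both the lever-pressing rat and the house-maintaining player boil down to the same mechanic: each response resets a timer before a negative consequence. A minimal sketch under assumed rules; the class name, grace period, and methods are hypothetical, not Ultima Online's actual decay system:

```python
# Avoidance contingency sketch: each "response" (a visit) postpones a
# negative consequence (decay) by a fixed grace period. The 7-day
# grace period is an illustrative assumption.

GRACE_PERIOD_DAYS = 7.0


class House:
    def __init__(self, built_at: float):
        # If the owner never visits, decay begins after one grace period.
        self.decays_at = built_at + GRACE_PERIOD_DAYS

    def visit(self, now: float) -> None:
        # Visiting is the avoidance response: it resets the decay timer.
        self.decays_at = now + GRACE_PERIOD_DAYS

    def is_decaying(self, now: float) -> bool:
        return now >= self.decays_at


house = House(built_at=0.0)
house.visit(now=5.0)                # postpones decay until day 12
assert not house.is_decaying(10.0)  # regular visits preserve the status quo
assert house.is_decaying(12.0)      # miss the window and decay begins
```

The same timer-reset structure works for any "maintain the status quo" mechanic, which is part of why it is so cheap for developers: the content is the player's own steady responding, not new rewards.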
Contingencies have been a major tool in psychology for more than 50 years, so there are a wide variety of special cases and unusual schedules. These three are just a sample of some special cases that are particularly applicable to game developers.
To help drive home the ideas I've discussed, here are some simple formulas of what contingencies to use to achieve specific results. These are not the only ways to solve these problems, but they are simple, reliable, and very effective.
How to make players play hard. Translated into the language we've been using, how do we make players maintain a high, consistent rate of activity? Looking at our four basic schedules, the answer is a variable ratio schedule, one where each response has a chance of producing a reward. Activity level is a function of how soon the participant expects a reward to occur. The more certain they are that something good or interesting will happen soon, the harder they'll play. When the player knows the reward is a long way off, such as when the player has just leveled and needs thousands of points before they can do it again, motivation is low and so is player activity.
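A variable ratio schedule is simple to implement: give every response the same independent chance of paying off, so the expected value of "one more try" never drops. A minimal Python sketch; the 10% reward chance is an arbitrary illustrative value:

```python
import random

# Variable ratio schedule sketch: each response (e.g. a monster kill)
# has the same independent chance of producing a reward, averaging one
# reward per 1/REWARD_CHANCE responses.

REWARD_CHANCE = 0.1  # illustrative: on average one reward per 10 responses


def respond(rng: random.Random) -> bool:
    """One response; returns True if it produced a reward."""
    return rng.random() < REWARD_CHANCE


rng = random.Random(42)  # fixed seed so the simulation is repeatable
rewards = sum(respond(rng) for _ in range(10_000))
# Over 10,000 responses, the reward count will land near 1,000, but no
# individual response is ever predictable -- which is exactly the point.
```

Because the probability is constant, the player can never conclude "nothing will happen for a while," which is what sustains the high response rate.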
How to make players play forever. The short answer is to make sure that there is always, always a reason for the player to be playing. The variable schedules I discussed produce a constant probability of reward, and thus the player always has a reason to do the next thing. What a game designer also wants from players is a lot of "behavioral momentum," a tendency to keep doing what they're doing even during the parts where there isn't an immediate reward. One schedule that produces a lot of momentum is the avoidance schedule, where the players work to prevent bad things from happening. Even when there's nothing going on, the player can achieve something positive by postponing a negative consequence.
How to make players quit. In other words, under what circumstances do players stop playing, and how can you avoid them? I've discussed two main conditions under which players will stop playing. The first is pausing, where their motivation to do the next thing is low. Motivation is relative: the desire to play your game is always being measured against other activities. While they may have a high overall motivation to play your game, during play they're comparing their motivation to do the very next thing in the game to all the other next things they could be doing. If they've just gone up a level and know that they have an hour of play before anything interesting happens, their motivation will be low relative to all the other activities they could be doing.
One way around this problem is to have multiple activities possible at any given time. This means that even if killing monsters becomes unrewarding, there are other activities within the game that can take up the slack. If monsters are unprofitable, exploration may be better. The player could take some time to improve their equipment or to practice a new tactic. Note that this is the same phenomenon that led to quitting above: a drop in motivation for the main activity raises the relative appeal of lesser activities. In this case, the lesser activities are also part of the game, redirecting the player's attention within the game and maintaining a high level of play.
The other situation that can lead to quitting is the sharp drop in rate of reward which I discussed in the chimpanzee example. Just like motivation, reward is relative. The value of the current reward is compared to the value of the previous rewards. If the current reward is 10 times the last one, it will have a big impact on the participant. If the current reward is weaker than experience has led them to believe, the player will experience frustration and anger. Violation of expectations is perceived as an aggressive act, an unfair decision by the game's creators. While the game can get more difficult over time, it's best to avoid sharp changes in the rate of reward. This is particularly applicable to puzzle games, where the player may have to spend hours on the same problem before moving on to the next. If the current problem is sharply more difficult than previous puzzles, the player may simply walk away.
The application of general rules to a specific case is always tricky, especially in situations where there is more than one type of contingency operating. Most experiments in behavioral psychology are designed to illuminate a single phenomenon, like an X-ray revealing the bones of an arm. The skin, muscles, and so on aren't shown, so the resulting picture is incomplete. But even with just the bones, we can make a good guess about how the arm works, its limitations and flexibilities. The behavioral principles discussed here should be understood to have similar benefits and limitations. There are numerous other things that influence players, but the basic patterns of consequences and rewards form the framework which enable all the rest. By understanding the fundamental patterns that underlie how players respond to what we ask of them, we can design games to bring out the kind of player we want.