Deliberation
In a response to Schell's talk, game designer David Sirlin urges: "Brush your teeth because it fights tooth decay, not because you get points for it."
In this entreaty, Sirlin makes a distinction between outcome and motivation. Sure, brushing one's teeth might prevent tooth decay, but brushing one's teeth for points subordinates the matter of tooth decay so completely as to render it invisible.
I'll put it more strongly: when people act because incentives compel them toward particular choices, they cannot be said to be making choices at all.
In my book Persuasive Games, I make this same objection to the concept of "persuasive technology," a general approach to using computing to change people's actions, advanced by Stanford researcher BJ Fogg and others.
Typical examples of such an approach deploy constraints or disincentives instead of incentives. The checkout system at Amazon.com and other web retailers tunnels a buyer from product to purchase by removing all links from the page. A hidden camera system captures images of drivers who exceed the speed limit, and a computer automatically issues a fine.
In such cases, the buyer has not been convinced that a product or seller is desirable, nor has the driver been persuaded that speeding on a particular route is dangerous and should be avoided for reasons of public safety.
To be persuaded, agents must have had the opportunity to deliberate about an action or belief that they have chosen to perform or adopt. In the absence of such deliberation, outcome alone is not sufficient to account for people's beliefs or motivations.
But who cares about deliberation if we get the results we want? If achievement-like structures can get kids to brush their teeth or adults to exercise more, why does one's original motivation matter?
Because to thrive, culture requires deliberation and rationale in addition to convention. When we think about what to do in a given situation, we may fall back on actions which come easily or have incentives attached to them. But when we consider which situations themselves are more or less important, we must make appeals to a higher order.
Otherwise, we have no basis upon which to judge virtue in the first place. Otherwise, one code of conduct is as good as another, and the best codes become the ones with the most appealing incentives. After all, the very question of what results we ought to strive for is open to debate.
Moral Luck
There's a concept in ethics known as moral luck, most clearly described by Bernard Williams and Thomas Nagel in the late 1970s.
Here's the classic example: two drivers make their way down two identical roads at identical times. In both cases, the drivers look down to change the radio station or answer a cell phone, and in that moment of distraction each runs a red light.
In the first driver's case, an old woman had just stepped off the curb to cross the street, and despite the fact that he tries to avoid her, the driver can't stop in time. His car strikes her down and kills her. In the second driver's case, there is no old woman, and therefore no consequence other than, perhaps, a traffic ticket.
Williams points out that in moral judgments we tend to correlate the outcome of an action with responsibility. Thus, we would likely judge the first driver to be more morally guilty than the second driver.
But there's a problem: the difference between the drivers' moral states actually has nothing to do with choices under their control. It is entirely a matter of luck. In one case, an old woman happened to be crossing the street; in the other, she didn't.
Nagel calls the above kind of situation resultant moral luck. In such cases, luck affects the consequences of actions, making it difficult to judge them as worthy of praise or reproach.
Comments
If the simple novelty of some of the games proposed in Schell's talk wears off, and we begin to look for "the next thing," as we often do as a culture, the original point being debated becomes moot. I was certainly imagining the backlash from cool teenagers while I listened to the talk. ("My parents are slaves to the game. I'm going to ditch my rewards account.") For me, the only reasonably reliable prediction we can make about cultural behavior is this: if we like it at the time, we'll keep it; if we don't, we won't.
I have enjoyed following this high-minded discussion about the power and future of games; I just wish it were more grounded, even if that means it becomes less provocative.
For a fairly definitive counterposition to the idea that rewards enhance behavior, take a look at the 1973 experiment Jesper Juul recently discussed (http://www.jesperjuul.net/ludologist/?p=925), which shows that external rewards can be demotivating.
I agree this is mushy stuff.
While I have not read the text of the study Juul discusses, I have a feeling the experiment may actually demonstrate the diminishing returns of static (or consistent) rewards, something I hinted at in my previous comment. I am not a psychologist, but I am a parent. If my daughter receives the same reward, praise, or attention in the same way, it almost immediately begins to have a different effect. One thing I do know about psychology: it is not math; it is not absolute.
In the games industry, the new flavor of the month enjoys a novelty boost, in sales and in general interest. It's pretty easy to see how games can have the reward influence Schell suggests, but I think this is primarily due to the current climate of turnover among the latest hot games. When we talk about a longer-term, pervasive, comprehensive, and consistent social rewards game, I can't help but wonder about the cultural lifespan of such a mechanism.
At any rate, just ignore me if the subject is sparking discussion. I will listen regardless -- who am I kidding?
As for the new flavor of the month, you're right that there's always some new trend. But perhaps the one point I really agree with in Schell's talk is that so-called social games took people by surprise. I think that's generally true, so it would do us well to think and look further out than the next quarter.
@Carlo, Even something like that has its flaws because kids are good at finding the exceptions and loopholes in systems -- maybe better than adults in some ways.
It's this quote: "When people act because incentives compel them toward particular choices, they cannot be said to be making choices at all." Perhaps I misunderstand, but what of choosing between competing incentives? What if you have 2 minutes, and you only move fast enough to Brush or Floss? Do you measure the rewards? Do you analyze the health benefits? Do you find a way to make more time in your morning?
This is not at all an empty, barren realm that Mr. Schell is describing. Rather, it could be a way to help visualize, prioritize, and track progress on things humans simply don't manage well: long-term stuff like health, finances, habits, etc.
I don't know why anything would change when you start calling them achievements and points. I'll remind everyone that most hardcore gamers don't FINISH all of the games they start, let alone come anywhere near getting all of the achievements.
What I think would happen in Schell's future is that people would become so accustomed to getting points that they would devalue them, except for the small subset of points that actually matter to them. Identity would be in part defined by which point systems you choose to pursue. And I will put money down that the tooth-brushing progress meter will not be high on the value chain of most 8-year-olds.
Bogost and commenters have basically suggested two questions: 1) Is simply showing stats morally superior to stats + good/bad judgment? 2) Which version is more effective at changing behavior? To this, I would suggest that the government has no problem saying to hell with "morality." It provides tax breaks and incentives for companies to act "morally" to reduce greenhouse gases. It imprisons and punishes those who transgress. Morality is good, but incentives work better.
I would suggest some other ideas: What if these raw statistics were simply made public, without any external judge? Would people say, "Yikes! I'm using twice as much energy as my neighbor! Maybe I should change my behavior so I don't look bad"? Is that morality or external judgment, and can the two really be separated? (Or imagine a forced gamertag, visible to all, which says "Little Sister Killer." Would that change people's gaming behavior?)
Finally, Schell fails to acknowledge a simple psychological truth: providing too abundant a reward schedule leads to fast extinction of the desired behavior. To be realistic, Schell's proposal needs a more variable reinforcement schedule. You can't get points every time you brush your teeth -- you can only get them some of the time.
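To make that concrete, here is a minimal sketch of what a variable-ratio payout might look like. This is just an illustration of the psychology, not anything from Schell's talk; the 5-point value and the 30% reward chance are made-up placeholders.

```python
import random

def brush_reward(points=5, chance=0.3):
    """Variable-ratio reinforcement: award points only some of the time.
    The point value and probability are placeholder assumptions."""
    return points if random.random() < chance else 0

# Simulate a month of daily tooth-brushing. The payout is unpredictable,
# which, per the reinforcement literature, resists extinction better
# than a guaranteed reward on every brush.
monthly_total = sum(brush_reward() for _ in range(30))
print(f"Points earned this month: {monthly_total}")
```

A fixed payout on every brush would be the "too abundant" schedule warned about above; the random check is what makes the schedule variable.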
I can totally see I, Robot-style situations where a person who has become accustomed to earning points abuses the system because they just want the points. At what point does a parent tie down their children and brush their teeth for them to earn some more points? And is this parental abuse?
No one said that education is easy. As a parent (and this same argument applies more generally), you have to concede that the best you can do is educate and encourage. If the child (or member of society, for that matter) does not want to adopt the behavior you wish, then you have to let it be, or else apply the punishment if applicable and let the consequences occur (e.g., your child is the stinky child). To do otherwise is to remove the humanity from that person.
You can probably tell I'm more on the radical end of the political spectrum than most, but these are just my 2 cents on the topic.
That's a good one, isn't it? :) I'm making a distinction between the outcome and the choice.
@Nick
Yeah, that's part of the argument in the research Jesper posted (linked above). But it's a different argument than the one I'm making here, which is not about efficacy.
About statistics: making data visible doesn't necessarily imply any sort of incentive structure whatsoever. Information can provide evidence that motivates decisions. But philosophically, we must still distinguish between a society that behaves a particular way because it believes such behavior to be virtuous, and one that calls itself virtuous because it appears to behave in a particular way.
Imagine that it's election season. The DNC and the GOP are both incentivizing volunteer work (working at call centers, door-to-door canvassing, etc.). Let's ignore that this is probably against some rule I'm unfamiliar with; if it hangs you up that much, tell me and I'll try to come up with an alternative example, but I think this one is the most persuasive. It strikes me that what you're saying is that people will volunteer for whichever side pays more points. They won't be volunteering due to their political beliefs, but because they want a higher score.
I can't speak for everyone, but the promise of a bigger payout wouldn't make me volunteer for the other side. In a world where everything is scored, I suspect personal preference will still come into play. In fact, that a person has chosen to gain points by volunteering -- instead of watching TV, going to the gym, or playing Flash games -- already says something about their proclivities.
Ian, I understand your trepidation here, and I do think that Juul's argument is compelling. But I'm curious whether it holds up inside of a system change. I've always played video games, and I don't /think/ that I play them worse now that I'm awarded trophies or achievement points. But maybe that's because I'm not a child. And even as a child, I think that I responded very well to some incentive programs (the Book It! reading program specifically sticks out in my mind).
I think that's my driving force here: why are we saying that the future will be all incentivized activity, or must be the opposite of that? The factors at play already suggest that we'll see some of real life "Lockerz"-ized. Perhaps the lesson here isn't that we have to throw away the Schell game altogether, only that we do our best to limit its pervasiveness.
(I also have some bizarre determinist notions that make me think that even without explicit point schemes, every action we do is incentivized by innumerable factors, and that's why we do them. But that's neither here nor there.)
Let’s take brushing one’s teeth as an example. Who gets to say that brushing your teeth in the morning is worth 5 points in Game A? Presumably it’s not you, or you could pay yourself anything you wanted. So you must be playing someone else’s game... but whose?
Is it your family’s game? Does Mom get to decide whether to incent toothbrushing? What happens to parents or to people who don’t have an immediate family to whom toothbrushing data can be sent -- are they not permitted to play the game?
Is it business’s game? Should the toothbrush or toothpaste manufacturer get to set the rules of the game? What if different companies create different games -- what’s to stop players (i.e., people who brush their teeth) from shopping around to see whose game gives the most points?
Is it your government’s game? Do you want to pay for the bureaucratic organization that will need to hire civil servants whose only function is to monitor your toothbrushing data, and to establish panels that arbitrarily set how many points you receive for brushing your teeth? When they can collect toothbrushing data, how do you argue against them also collecting other data? Can you choose not to play their game?
In other words -- and as has always been the case -- the real question is what amount of choice we will have when our real life actions are rewarded with XP and achievements.
Will we have a choice in whose games we play?
And will we be able to choose not to play at all?
Beautiful final section. Great article Ian.
- Yes, games are a process. In the case of games involving people, we can create reproducible processes that have experimentally validated outcomes. This is not new. Governments, religions and social organizations have been doing very similar things for many thousands of years.
- Electronic games automate traditionally human processes in ways that are scalable, maintainable and constantly validated. This is new, at least in the degree to which they add efficiency.
- Electronic systems also provide feedback at a much finer granularity than is typically cost-effective in other rule-based social systems. This allows games to reach into portions of our lives that have not historically seen enforceable governance.
- Philosophical and ethical arguments will at some point need to face the pragmatic reality that these systems do work. A measurable percentage of players will behave as desired by the designers of the systems. This has clear economic value. It has the potential for generating social value as well. And like a shell game, it can be used for ill. Games that reach into our everyday lives are tools, not something inherently blessed or damned.
As for the article, the following thoughts came to mind:
- For many games, the player never judges whether the final outcome is useful. Instead, they judge whether the immediate next choices are worth spending energy on. Players wear blinders. Only the operator sees the system as a whole. In this sense, the operator, with their access to the behavioral records of thousands of players, is always behaving in a highly manipulative manner. It is what we do... all good game designers play a rigged shell game with their customers. Otherwise, we cannot effectively train players to slowly gain the skills they need to comply appropriately with our end goals.
- It is a common mistake to see points as meaningless baubles that reward players merely for earning more points. Our players are not that gullible, at least not on a subconscious level. Points, in a well-designed game, are merely a currency that players exchange for something that matters, be it status, the love of your mother, etc. To say that games that use points to motivate players in the real world will be shallow is merely to claim that they are badly designed games. But being able to point out one badly designed game does not eliminate the possibility of making well-designed real-world games that are effective, well balanced, and that appropriately use both intrinsic and extrinsic motivation.
take care,
Danc.
In the moral luck example, I think it is important in whose favor you interpret probability. From the point of view of the old woman, was it just bad luck to cross the street at that time and place? Should she not expect that drivers will watch the street rather than search for the radio button or take a phone call? Resultant moral luck sounds a bit like "We can be ignorant of others, and if we are lucky no one dies." In my opinion, it cancels out the notion of responsibility while attempting to fight a narrow-minded conception of morality.
To continue with the Farmville example: Is it OK when I exploit friendship for positive gain in a game? Well, it looks OK: if I choose to play a game, and if this kind of relationship is part of the game contract, why not? But what about when it is the game designer who deliberately uses invitation and cooperation mechanisms to exploit "my" network of friends to broaden "his" customer base and sell us all to marketing and advertising companies? Can or should we draw a distinction between the ethics of the game itself and the ethics of the business model that sets certain design goals?
And what if other game designers think that herein lies an opportunity to make a profit, and wonder whether it might be a good idea that we get points for brushing our teeth? And for washing our hands? Or get points deducted if we have sex before marriage? Will it be bad moral luck when a truck full of moral games hits me because its driver is distracted by calls from venture capital providers? ;)
Luck had absolutely nothing to do with either case. Don't think it does and start mucking with radio dials when you should be FOCUSED ON THE ROAD, buddy. Of all laws, coming to a stop at a red light is the easiest to recognize as a good one.
Otherwise, interesting article!