In a response to Schell's talk, game designer David Sirlin urges you to "Brush your teeth because it fights tooth decay, not because you get points for it."
In this entreaty, Sirlin makes a distinction between outcome and motivation. Sure, brushing one's teeth might prevent tooth decay, but brushing one's teeth for points subordinates the goal of preventing tooth decay so thoroughly as to render it invisible.
I'll put it more strongly: when people act because incentives compel them toward particular choices, they cannot be said to be making choices at all.
In my book Persuasive Games, I make this same objection to the concept of "persuasive technology," a general approach to using computing to change people's actions advanced by Stanford researcher BJ Fogg and others.
A typical example of such an approach might deploy disincentives instead of incentives. The checkout system at Amazon.com and other web retailers tunnels a buyer from product to purchase by removing all links from the page. A hidden camera system captures images of drivers who exceed the speed limit, and a computer automatically issues a fine.
In such cases, the buyer has not been convinced that a product or seller is desirable, nor has the driver been persuaded that speeding on a particular route is dangerous and should be avoided for reasons of public safety.
To be persuaded, agents must have had the opportunity to deliberate about an action or belief that they have chosen to perform or adopt. In the absence of such deliberation, outcome alone is not sufficient to account for people's beliefs or motivations.
But who cares about deliberation if we get the results we want? If achievement-like structures can get kids to brush their teeth or adults to exercise more, why does one's original motivation matter?
Because to thrive, culture requires deliberation and rationale in addition to convention. When we think about what to do in a given situation, we may fall back on actions which come easily or have incentives attached to them. But when we consider which situations themselves are more or less important, we must make appeals to a higher order.
Otherwise, we have no basis upon which to judge virtue in the first place. Otherwise, one code of conduct is as good as another, and the best codes become the ones with the most appealing incentives. After all, the very question of what results we ought to strive for is open to debate.
There's a concept in ethics known as moral luck, most clearly described by Bernard Williams and Thomas Nagel in the late 1970s.
Here's the classic example: two drivers make their way down two identical roads at identical times. In both cases, the drivers look down to change the radio station or answer a cell phone, and in that moment of distraction each runs a red light.
In the first driver's case, an old woman had just stepped off the curb to cross the street, and despite his attempt to avoid her, the driver can't stop in time. His car strikes her down and kills her. In the second driver's case, there is no old woman, and therefore no consequence other than, perhaps, a traffic ticket.
Williams points out that we tend to correlate the consequences of an action with moral responsibility in our judgements. Thus, we would likely judge the first driver to be more morally guilty than the second driver.
But there's a problem: the difference between the drivers' moral states actually has nothing to do with choices under their control. It is entirely a matter of luck. In one case, an old woman happened to be crossing the street; in the other, no one was.
Nagel calls the above kind of situation resultant moral luck. In such cases, luck affects the consequences of actions, making it difficult to judge them as worthy of praise or reproach.