The Designer's Notebook: Introducing Ken Perlin's Law
June 1, 2006
“So who’s Ken Perlin?” I hear you cry. Ken Perlin is somebody you ought to know about and pay attention to. He’s a professor of computer science at New York University’s Media Research Lab. He’s also the winner of an Academy Award—yes, a real Oscar—for his work on procedural texturing algorithms. (Beat that, Clooney!) Ken is simultaneously blessed with staggering intelligence and seemingly boundless energy. He works on an incredible range of really cool stuff, from a collaborative integrated development environment intended to help teach programming to schoolgirls, to a machine for projecting 3D images into the air with no screen (like R2-D2’s projection of Princess Leia near the beginning of Star Wars), to a fast but effective facial expression animation toolkit. Best of all, he puts a bunch of this stuff on his web page as Java applets so you can play with it yourself; have a look at http://mrl.nyu.edu/~perlin/.
“OK, so what’s his law?” you ask. That takes a little more time to explain. But I should say that while Ken Perlin came up with the idea, I’m the one who’s calling it a law and naming it after him. He’s too modest to do it for himself, but I think it’s really important and he should get the credit for it.
For a long time now, we game designers have assumed that player freedom is a good thing, especially in the context of fictitious game worlds where the player can move around and explore. This assumption goes all the way back to the original text-based adventure game, Adventure (or Colossal Cave). Adventure was different from other computer games of its day because it didn’t print a list of commands for the player to choose from. Instead, it simply put a prompt on the screen and said, “type anything you want to.” It pretended that you could do anything. Of course, after five minutes of play you realized that this was an illusion; the game didn’t really understand that many commands. But, among those of us who are optimistic about the potential of computer games, it created a fond hope, a utopian dream: Someday we will create a game in which you can do anything! And this dream has been at the back of game designers’ minds from that day to this.
This is partly why the Grand Theft Auto series has been so highly praised. Never before has a game offered the player so much freedom. The game world reacts appropriately to just about anything the player tries to do. If you steal a taxi, you can be a taxi driver and earn money legitimately, taking people around town. If you steal an ambulance, you can earn money by taking people to the hospital. You can listen to different radio stations in the car, play basketball in the right places, and so on. Of course, the range of player actions permitted in Grand Theft Auto is restricted to certain domains, mostly to do with violence and vehicles. You can’t earn any money being a street mime, and you can’t set up and run a homeless shelter. The game world doesn’t include the necessary actions or mechanics to support these activities. Still, the range of things the games will let you do is unprecedented, and it created tremendous excitement among both players and game designers.
So we have a well-established assumption that player freedom is good, but it brings with it a problem.
For a long time now, I’ve been struggling with a conundrum of interactive storytelling that I dubbed “The Problem of Internal Consistency” in a lecture I gave at the Game Developers’ Conference in 1995. I also wrote about it in an earlier Designer’s Notebook column, “Three Problems for Interactive Storytellers,” back in 1999. The essence of the Problem of Internal Consistency is this: how do we balance the player’s desire for freedom with the designer’s desire to tell a consistent, coherent story? What do we do when the player wants to do something that doesn’t work with the plot that we’ve laid out? Refuse him permission to do it, and take away his freedom? Or allow him to do it, and destroy our story? I never came up with a good answer for it.
So last November, I went to a conference called Virtual Storytelling ’05 in Strasbourg, France. It was a small enough conference that every session was plenary—you didn’t have to choose between sessions, so as long as you showed up, you were bound to hear everything. Ken Perlin was one of the speakers, and in the middle of his lecture, he made an almost throwaway remark that really brought me up short. This was what he said, the thing that I think is so important:
Ken Perlin’s Law: The cost of an event in an interactive story should be directly proportional to its improbability.
Now, I’m used to thinking about interactive stories in terms of traditional puzzle-based adventure games, and they don’t usually have an internal economy. They often don’t keep track of any numbers at all. So when I first heard this, I thought, “What’s he talking about? Interactive stories don’t have any notion of costs built into them.” Even in role-playing games, improbable events are just the product of particularly good or bad die-rolls. There’s no cost element associated with them; it’s just luck.
But the more I thought about it, the more sense it made, and the whole concept started to break up the logjam in my head about the Problem of Internal Consistency. What is the unit of cost of an improbable event in a story? Its credibility. That’s what gets spent when something improbable happens. And in fact, every story, interactive or non-interactive, book, movie, television, or computer game, has a credibility budget. The story itself can only tolerate a certain amount of improbability before the credibility budget is exhausted, and the story is ruined. In the case of non-interactive, conventional narrative, the author controls and spends the credibility budget, and when the author blows it, she ruins her story and destroys her reader’s immersion. But in the case of interactive narrative, both the designer and the player spend from the same credibility budget. If the designer blows it, then he ruins the story for the player. But if the player blows it, he ruins the story too. He has done something so improbable that the designer didn’t budget for it.
[Image caption: Indigo Prophecy - Overdrawn at the Credibility Bank?]
Now, Ken didn’t say that the unit of cost of improbable events in a story is credibility. That’s my own addition to his idea, and if you think it’s nonsense, you should blame me for that, not him. But it makes a lot of sense to me.
Ken went on to give an example of what he meant by the cost of improbable events. He said, suppose you’re playing along in an interactive story set in the modern day, without any magic or strange powers, and you decide that you want to materialize a chicken out of thin air. Ken said, if the game allows this at all, it should be a very, very, very expensive operation. And in my terminology, materializing a chicken completely blows the credibility budget. I think the designer is entitled to decide that you simply can’t materialize chickens in his world, because the credibility budget doesn’t stretch that far.
In papers on interactive narrative you often see grand statements of the form “the designer and the player collaborate to create the storylike experience” without any explanation of what the hell that really means or how it’s supposed to take place, especially given that the designer and the player usually never meet. And I don’t know what the hell it really means either, but I think this business of both the designer and the player making withdrawals from the same credibility budget is central to the idea. It’s where the rubber meets the road on the Problem of Internal Consistency. Essentially, we are entitled to limit the player’s freedom when that freedom would destroy the story.
(Interestingly, in spite of all the freedom that the Grand Theft Auto games offer, you can’t actually ruin the story. It’s compartmented off to prevent you from damaging it. If you try to kill characters or destroy vehicles that the plot needs later on, you just won’t find them. They don’t come into the game world until they’re required.)
I’m not just talking about this stuff in a purely abstract, theoretical sense. I’m talking about design and coding. I think it’s possible to build a quantity, a resource called credibility, into a game, and to track expenditures against it. When the player does outrageously improbable things, credibility is diminished, and perhaps he can’t do any more improbable things for a while until it builds back up again over time. And if the game is using an algorithm to generate story-events automatically, then I think it, too, should be limited by the size of the credibility budget, and not permit improbable events to occur more often than is credible. Naturally, any such system would have to have a concept of a credibility price built into it, and that price would have to be set by the designer. But that’s what we already do in RPGs every time we establish the probability of certain events occurring according to die-rolls. The credibility price of an event will require human judgment to set, but there’s nothing wrong with that; I’m all for humans taking a major role in constructing our stories, even if they are automated and interactive.
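To make that concrete, here is a minimal sketch of what a credibility budget might look like in code. Everything here is my own assumption, not anything Ken specified: the capacity, the regeneration rate, and the pricing rule (one simple reading of “directly proportional to improbability” is a constant times 1 minus the event’s probability) are all placeholder choices a designer would tune.

```python
class CredibilityBudget:
    """A hypothetical resource the designer and player both spend from."""

    def __init__(self, capacity=100.0, regen_per_second=1.0):
        self.capacity = capacity
        self.balance = capacity          # start with full credibility
        self.regen_per_second = regen_per_second

    def regenerate(self, elapsed_seconds):
        # Credibility slowly builds back up over time, capped at capacity.
        self.balance = min(self.capacity,
                           self.balance + elapsed_seconds * self.regen_per_second)

    @staticmethod
    def price(probability):
        # Ken Perlin's Law: cost is proportional to improbability.
        # Here improbability is taken as (1 - p), scaled by an arbitrary constant.
        return 50.0 * (1.0 - probability)

    def try_event(self, probability):
        # Attempt an improbable event; refuse it if the budget can't cover it.
        cost = self.price(probability)
        if cost > self.balance:
            return False                 # the game refuses: budget exhausted
        self.balance -= cost
        return True


budget = CredibilityBudget()
budget.try_event(0.9)     # a mundane event: cheap, allowed
budget.try_event(0.001)   # a wildly improbable event: eats almost half the budget
budget.try_event(0.001)   # a second one right away: refused until credibility regenerates
```

The same `try_event` gate could sit in front of both player actions and an automatic story-event generator, so that neither is permitted to overdraw the shared account.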
Of course, we’ve always been able to limit the player’s freedom, and we always have—though mostly for technical reasons rather than storytelling ones. The issue is really how we justify it when maximum freedom is one of our most deeply cherished goals. As long as we don’t mind the player ruining the story, it doesn’t much matter; but as designers it’s our job to provide credible stories and freedom at the same time. I think Ken Perlin’s Law gives us the tool we need to balance those conflicting demands. Thanks, Ken!