In a recent opinion article by James Portnow entitled The Problem of Choice, he posited that there are two types of decisions a player can face in a game: "problems" and "choices". The former is something that involves a "right answer", such as a mathematically optimal solution; theoretically, it can be solved. We are all familiar with such challenges in games -- especially when designers make them all too transparent. The other type of decision is the "choice". These are a little more amorphous in that there is no "right answer". Games such as Bioshock (i.e. the Little Sisters) have these elements, but others such as Black & White and the two Fable games are rife with them. In fact, the entire game mechanic is built upon the idea of "make a choice and change the whole experience."

While I agree with the excellent points that James made, I believe that this same mentality can be extended to the realm of AI as well. In fact, I made this point in my lecture, Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters, at the AI Summit at GDC a few weeks ago. Specifically, I suggested that the incorporation of differences between characters can enable game design choices for us as developers which, in turn, enable gameplay choices for our audience. However, it is not simply the incorporation of personality, mood, and emotion that does this. It is often even simpler than that.

As programmers, we deal in a world of algorithms. Algorithms are, by definition, a series of steps designed to solve a particular problem. Even the ubiquitous yet humble A* pathfinding algorithm is sold as guaranteeing to "return the shortest path to a goal if a path exists." The emphasis is mine. It returns the shortest path -- the best decision. Now that we are using A* for other purposes, such as sifting through planning algorithms to decide on non-path-related actions, we are subscribing to the same approach: what is the best action I can take at this time? Unfortunately, that leads our AI agents along the same path as the player... "how can I solve this game?" The simple fact that our agents are looking for the one solution necessarily limits the variety and depth that they are capable of exhibiting.

The irony here is that, in designing things this way, we take something that should be a choice (as defined by Portnow) and turn it into a problem (i.e. something that can be solved). Whether there is any "best" decision or not, our agents believe that there is... "belief" in this case coming in the form of whatever decision algorithm we happened to design into their little brains.

The solution to this is not necessarily technical. It is more a willingness by designers and AI programmers to allow our agents to either a) not make the "best" decision all the time, or b) include decisions in the design to which there is no "best" solution at all. Unfortunately, we have established a sort of industry meme that "we can't allow our agents to do something that is not entirely predictable". We are afraid of losing control. Here's a startling tip, however... if we can predict what our agents are going to do, so can our players! And I nominate predictability as one of the worst air leaks in the tires of replayability.

One of the quotes that I used in that lecture and in my book on behavioral AI is from Sid Meier, who suggested that a game is "a series of interesting choices".
It is a natural corollary that in order for the player to make interesting choices, he needs interesting options (not math problems). One of the ways that we can present those interesting options is to allow our agents to make interesting choices (not solve problems) as well.

Dave Mark, Intrinsic Algorithm
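As a concrete illustration of letting an agent skip the strict "best" answer, here is a minimal sketch (not from the article; all names are hypothetical): instead of taking the single highest-scoring action from whatever evaluation you already run, the agent picks among the candidates with probability proportional to their scores.

```cpp
#include <random>
#include <string>
#include <vector>

struct ScoredAction {
    std::string name;   // e.g. "flank", "suppress", "fall back"
    float score;        // utility from whatever evaluation the agent already runs
};

// Instead of returning the single highest-scoring action (the "solved problem"),
// pick one action with probability proportional to its score. Higher-utility
// actions are still favored, but the agent is no longer perfectly predictable.
// Assumes at least one action has a positive score.
const ScoredAction& PickWeighted(const std::vector<ScoredAction>& actions,
                                 std::mt19937& rng)
{
    std::vector<float> weights;
    weights.reserve(actions.size());
    for (const ScoredAction& a : actions)
        weights.push_back(a.score > 0.0f ? a.score : 0.0f);

    std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
    return actions[dist(rng)];
}
```

A common refinement is to restrict the draw to the top few candidates so the agent is unpredictable but never ridiculous.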
I think the difference between imperfection and unpredictability needs to be emphasized. Predictably imperfect agents are no more useful than predictably perfect ones. An example of this would be in sports games, where the player can discover and exploit a "money play" or other predictable weakness in the computer-controlled opponent's behavior.
QA is frightened of this for the same reason. How can they tell if something is "right" or "wrong"? The problem here is not that they can't judge the correct answer but rather that they are asking the wrong question. Instead of asking "is this the right action for the AI to take?", they need to be asking "are any of these actions blatantly wrong?"
If I observe a crowd of people, a group of soldiers, or a team of sports players, I can't tell you with certainty that any given action is the "right" one to take at any given moment. If I could, the subtlety of human behavior would be lost. However, I can tell you if one of the actions looks wrong. By limiting our QA test to that subjective space, we open up the massive space of potentially reasonable actions.
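One way to read that testing philosophy in code -- a hypothetical sketch, not anything from the post -- is that an automated check never asserts a single expected action; it only rejects actions that violate hard constraints, leaving the rest of the space open.

```cpp
#include <string>

// Hypothetical context an automated test (or debug assert) might have access to.
struct ActionContext {
    bool  targetVisible;
    bool  agentOnFire;
    float ammoFraction;   // 0..1, fraction of ammo remaining
};

// The test does not ask "is this the one right action?" -- only
// "is this action blatantly wrong given the situation?"
bool IsBlatantlyWrong(const std::string& action, const ActionContext& ctx)
{
    if (action == "reload" && ctx.ammoFraction > 0.9f) return true;  // pointless
    if (action == "idle"   && ctx.agentOnFire)         return true;  // absurd
    if (action == "shoot"  && !ctx.targetVisible)      return true;  // impossible
    return false;  // everything else falls inside the "reasonable" space
}
```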
To be honest, AI designers and programmers can add variety without a lot of difficulty... but designers (and QA) won't let us because of that fear. We need to break out of that method of thinking lest we forever stagnate with shallow, predictable agents.
(BTW, good point on Civ being a big "spreadsheet game." However, if you have played Civ 4, there is enough chaos theory in evidence there that the game is not at all predictable. *respectful nod to Soren Johnson*)
It's also not a clear-cut division. _Deus Ex_ gave you a problem like "get into that building", but offered you choice in how to do so. But I digress.
I do agree with your post, Dave. Interesting AI means allowing sub-optimal decisions and/or never-optimal choices. I am writing an interactive fiction that exploits this. It's slow going.
(BTW, A* only returns the best choice in the average case. There's no such thing as an always-optimal pathfinding algorithm. But I think that fact strengthens, not weakens, your post.)
http://www.gamasutra.com/blogs/RonNewcomb/293/
Much like in mind games with other people, an interesting experience would be to figure out what a particular NPC is trying to accomplish; given the context, the actions the player can observe differ depending on the constraints placed on the NPCs.
It's also somewhat design- and QA-friendly, since the different goals and/or constraints can be separated across different play sessions in an experimental setup in order to study possible emergent gameplay, but it takes a team effort to understand and effectively use something like this.
"I think that a good way to see AI characters behaviour it's from an economist point of view: everybody it's optimizing something, but they have different goals, and also different constrains."
I actually use this model quite a bit in my book. It's primarily a utility-driven decision theory model. Everything in the world (including time) has a utility that can be measured. Different people's utility may differ for the same item or goal. It is actually quite simple to build those differences into agents.
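A small illustration of that idea -- hypothetical names, not code from the book: every agent evaluates the same goals with the same logic, but each carries its own personal weights, so the "best" choice legitimately differs from agent to agent.

```cpp
#include <map>
#include <string>

// Each agent weighs the same set of goals differently. The decision logic is
// identical for everyone; only the personal utility weights change.
struct Agent {
    std::string name;
    std::map<std::string, float> utilityWeights;  // goal -> personal multiplier
};

// Scores a goal for a given agent; unweighted goals default to a neutral 1.0.
float ScoreGoal(const Agent& agent, const std::string& goal, float baseValue)
{
    auto it = agent.utilityWeights.find(goal);
    const float weight = (it != agent.utilityWeights.end()) ? it->second : 1.0f;
    return baseValue * weight;
}

// Example: a cautious guard with {"take cover", 2.0f} values cover far more than
// a reckless one with {"take cover", 0.3f}, so the same situation produces
// different -- but individually rational -- choices.
```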