Gamasutra: The Art & Business of Making Games
Intelligent Mistakes: How to Incorporate Stupidity Into Your AI Code


March 18, 2009
 

[Neversoft co-founder West presents a thought-provoking look at improving the believability of AI opponents in games by upping their use of "intelligent mistakes", in a piece originally written for Game Developer magazine.]

Twenty years ago, I was working on my first commercial game: Steve Davis World Snooker, one of the first snooker/pool games to have an AI opponent. The AI I created was very simple. The computer just picked the highest value ball that could be potted, and then potted it.

Since it knew the precise positions of all the balls, it was very easy for it to pot the ball every time. This was fine for the highest level of difficulty, but for easy mode I simply gave the AI a random angular deviation to the shot.

Toward the end of the project, we got some feedback from the client that the AI was "too good." I was puzzled by this and assumed the person wanted the expert mode to be slightly less accurate. So I changed that. But then I heard complaints about the decreased accuracy, and again that the AI was still too good.

Eventually the clients paid a visit to our offices and tried to demonstrate in person what they meant. It gradually came out that they thought the problem was actually with the "easy" mode.

They liked that the computer missed a lot of shots, but they thought that the positional play was too good. The computer always seemed to be leaving the white ball in a convenient position after its shot, either playing for safety or lining up another ball. They wanted that changed.

The problem was, there was no positional play! The eventual position of the white ball was actually completely random. The AI only calculated where the cue ball should hit the object ball in order to make that object ball go into a pocket.

It then blindly shot the cue ball toward that point with a speed proportional to the distance needed to travel, scaled by the angle, plus some fudge factor. Where the white ball went afterward was never calculated, and it quite often ended up in a pocket.
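The shot model West describes can be sketched roughly as follows. This is a hypothetical reconstruction, not the original code: the `aim_shot` function, the ball radius, and the speed constants are all invented for illustration. The AI aims the cue ball at the "ghost ball" contact point behind the object ball, perturbs the aim angle for easy mode, and sets the speed from distance plus a fudge factor.

```python
import math
import random

def aim_shot(cue, object_ball, pocket, ball_radius=0.026, max_dev_deg=0.0):
    """Pick a shot the way the article's snooker AI did (hypothetical sketch).
    Positions are (x, y) tuples. Returns (aim angle in radians, shot speed)."""
    # Direction the object ball must travel to reach the pocket.
    dx, dy = pocket[0] - object_ball[0], pocket[1] - object_ball[1]
    dist_to_pocket = math.hypot(dx, dy)
    ux, uy = dx / dist_to_pocket, dy / dist_to_pocket
    # "Ghost ball" point: two radii behind the object ball along that line.
    target = (object_ball[0] - ux * 2 * ball_radius,
              object_ball[1] - uy * 2 * ball_radius)
    angle = math.atan2(target[1] - cue[1], target[0] - cue[0])
    # Easy mode: add a random angular deviation to the otherwise perfect aim.
    angle += math.radians(random.uniform(-max_dev_deg, max_dev_deg))
    # Speed proportional to total travel distance, plus a fudge factor.
    dist_to_target = math.hypot(target[0] - cue[0], target[1] - cue[1])
    speed = 0.8 * (dist_to_target + dist_to_pocket) + 0.5
    return angle, speed
```

Note that nothing here considers where the cue ball ends up after contact, which is exactly why its final position was random.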

So why was it a problem? Why did they think the AI was "too good" when it was actually random?

Humans have a tendency to anthropomorphize AI opponents. We think the computer is going through a thought process just like a human would do in a similar situation.

When we see the ball end up in an advantageous position, we think the computer must have intended that to happen.

The effect is magnified here by the computer's ability to pot a ball from any position, so for the computer, all positions are equally advantageous.

Hence, it can pot ball after ball, without having to worry about positional play. Because sinking a ball on every single shot would be impossible for a human, the player assumes that the computer is using positional play.

Design or Code?

Is this a design problem or a code problem? To a certain extent it depends on the type of game, and to what extent the AI-controlled opponents are intended to directly represent a human in the same situation as the player.

In a head-to-head game such as pool, chess, or poker, the AI decisions are very much determined at a pure code level. In a one-versus-many game, such as an FPS, there is some expectation that your opponents are generally weaker than you are.

After all, you are generally placed in a situation of being one person against countless hordes of bad guys. Other game genres, particularly racing games, pit you against a field of equal opponents. Here the expectation of realistic AI is somewhere between that of chess and the FPS examples.

The more the computer AI has to mimic the idiosyncrasies of a human player, the more the task falls to the programmer. The vast majority of the AI work in a chess game is handled by programmers. Game designers would focus more on the presentation.

In an FPS, the underlying code is generally vastly simpler than chess AI. There is path finding, some state transitions, some goals, and some basic behaviors.

The majority of the behavioral content is supplied via the game designers, generally with some form of scripting. The designers will also be responsible for coding in actions, goals, and responses that emulate the idiosyncrasies of human behavior.


Comments


Chris Crowell
Mick, great article. Allow me to add my own two cents.

When we were working on Rumble Racing, the goal was to make the AI opponents feel like people. We had a matrix of possibilities for introduced errors, where at each pathing junction the AI had a choice to take a fast line or a slow line. This matched up with how people drove the tracks, where small errors in braking and taking the corners made the cars seem to go slower because of poor driving, not because they were simply going slower.
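The error matrix Chris describes might look something like this. The tiers, probabilities, and `choose_line` function are all hypothetical; the point is that weaker AI drivers lose time through believable cornering mistakes rather than an artificial speed cap.

```python
import random

# Hypothetical error matrix: probability of taking the fast racing line,
# indexed by (driver skill tier, corner difficulty).
FAST_LINE_PROB = {
    ("rookie",  "easy"): 0.70, ("rookie",  "hard"): 0.30,
    ("veteran", "easy"): 0.95, ("veteran", "hard"): 0.75,
}

def choose_line(skill_tier, corner, rng=random.random):
    """At each pathing junction, pick the fast or slow line for this driver."""
    return "fast" if rng() < FAST_LINE_PROB[(skill_tier, corner)] else "slow"
```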

Greg Wilcox
Excellent piece! I'm all for more AI with human quirks - particularly in certain action or shooter games where the AI seems almost perfect. Now, I'm not asking for Star Wars stormtrooper-style inaccuracy, but it would be nice to see a bit more than the now-standard duck-for-cover-and-toss-a-grenade routine.



Hey Chris - Rumble Racing was (and is, as I still play it today) one of those great games where one really got to see the AI do some things you'd expect a live opponent to pull off, so it was always a blast to race the same tracks multiple times and almost never see the same result.



g.

Tom Benda
I've experimented with a sort of artificial "anxiety" system in an avoidance simulation, where many near misses will do "damage" to their confidence score, and their confidence score affects ranking of next moves in a search-based approach. The more unnerved the AI, the more likely they are to refuse to take a move which will place them in danger. That refusal results in inaction which is often lethal.



This gives moments of hesitation and panic which look extremely realistic (at least in something as simple as driving or fleeing). I've tried to think of a good way to cause the AI to flip-flop between decisions only when it is in a panic; however, that seems to require breaking good decision-making (requiring a set amount of time before it can change action).
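Tom's anxiety system could be sketched like this. The scoring formula, field names, and damage constant are all invented for illustration: each near miss erodes a confidence score, and low confidence inflates the perceived cost of danger, so a panicked agent may rank a safe-but-useless move above a risky-but-necessary one and effectively freeze.

```python
def rank_moves(moves, confidence):
    """Rank candidate moves, each a dict with 'value' and 'danger' in [0, 1].
    Lower confidence (0..1) amplifies the penalty for dangerous moves."""
    def score(move):
        return move["value"] - move["danger"] * (2.0 - confidence)
    return sorted(moves, key=score, reverse=True)

def take_near_miss(confidence, damage=0.2):
    """A near miss 'damages' the agent's confidence score."""
    return max(0.0, confidence - damage)
```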



It's nice to see an article on this subject. A good deal of thought has to go into creating lifelike artificial behavior, and I'm glad to see people in the industry upping their game.

James Cooley
One of the things I would like to see programmed in is more AI opponents with a "survival wish". Nobody wants to get shot. Only in games and bad movies do opponents keep rushing against impossible odds or when severely wounded. Give me situations where an opponent throws down their weapons and tries to retreat, or ducks for cover when the first bullet flies by, instead of just walking patrol while stepping over the dead body of the comrade I sniped seconds before. Balancing "fear of death" with the need to have enemies that are fun to fight may not be easy, but I would love to see more of it.
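A minimal sketch of the "survival wish" James asks for might be a morale check like the one below. The function, thresholds, and morale formula are hypothetical: health and fallen allies feed a morale score, and low morale flips the agent from patrolling or fighting into fleeing.

```python
def combat_state(health, allies_down, under_fire):
    """Decide a simple behaviour state. health is 0..1; allies_down counts
    fallen comrades the agent knows about. Purely illustrative numbers."""
    morale = health - 0.25 * allies_down
    if morale < 0.2:
        return "flee"          # badly hurt or demoralised: run away
    if under_fire:
        return "take_cover"    # bullets flying: duck, don't keep patrolling
    return "patrol"
```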

[User Banned]
This user violated Gamasutra’s Comment Guidelines and has been banned.

Savas Alparslan
I am thinking of a different stupidity scheme. In this scheme, you run the perfect algorithm every time, but you poison variables randomly: you increment, decrement, or negate random variables each turn. You can change the number of variables to poison to determine how stupid the AI will be.
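Savas's variable-poisoning idea could look roughly like this (a hypothetical sketch; the `poison` function and its state representation are invented). The perfect algorithm runs unchanged, but first a chosen number of its working variables are randomly corrupted.

```python
import random

def poison(state, n_vars, rng=random):
    """Corrupt n_vars randomly chosen numeric variables in the AI's working
    state: increment, decrement, or negate each one. Returns a new dict;
    n_vars tunes how stupid the AI plays."""
    state = dict(state)
    for key in rng.sample(sorted(state), min(n_vars, len(state))):
        op = rng.choice(["inc", "dec", "neg"])
        if op == "inc":
            state[key] += 1
        elif op == "dec":
            state[key] -= 1
        else:
            state[key] = -state[key]
    return state
```

One caveat, echoing the article: corrupted inputs produce random-looking errors, not the believable, human-typical mistakes the article argues for.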

Adam Bishop
I like James' suggestion. It reminds me of the pen and paper RPG Shadowrun, in which all but the most hardened enemies will eventually try to escape from you after reaching a certain damage threshold. I like that idea of certain enemies running away if they're, say, just a beat cop rather than a Navy S.E.A.L. Of course, that could pose problems in a game which is entirely about the player killing enemies, so I think a decision to include fleeing behaviour in enemies would need to involve a bit more creativity about things like level design so the player didn't feel cheated. Some players may feel really powerful because they can make enemies run from them in fear, but other players may feel as though they're not getting the game they thought they were.

Duncan Rabone
One thing that I would like to see is A.I. designed to act based more purely on the actions of the player. Frequently I will perform some kind of deliberate action, while the A.I. continues to act the way it was designed, just with emphasis on simple variables (difficulty and/or intelligence). What I'm after would show more in the earlier-mentioned FPS or action set-piece A.I.



E.g.: I try to rush into the range and view of opponents who previously had no knowledge of my presence. Current A.I. would have all notified opponents 'snap to' my character and begin firing or ducking for cover, with their speed and accuracy determined by difficulty. What I want is for the opponents' "psychology" to determine their actions, even though they may be 'trained soldiers.' I just initiated a "surprise attack", and I expect them to act surprised, depending on a combination of how alert each opponent is and how efficiently I got within range.



It may sound like what I'm asking for is scripted A.I. set-pieces. In one way yes, but this has much larger scope. To me, realistic 'mistake-making' A.I. means an opponent that "feels": one that will retreat because he fears, and fight because he's confident. I'm an artist, so I don't know how to implement it, but it would be good to see the A.I. literally go "whoops".

Tom Newman
Great topic of discussion! One of the most frustrating things a player can experience is the feeling that the AI has an unfair advantage. Many games give the player a feeling that the AI knows every inch of the level, and knows where the player is at all times. Other games make the player feel like the AI is just another environmental obstacle, like a rolling barrel (i.e. stupid AI). This will always be a delicate situation to balance, and well balanced AI is the difference between a good game and a great game.



Personally, I like games that use the old D&D system of % to hit, etc., as I always feel like it's an even playing field between the player and the computer. Many games do a great job with AI (like Fallout 3 most recently), but I think some FPS and action games where the AI seems to be on a straight path to the protagonist could benefit from some RPG-style stats under the hood.
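The D&D-style check Tom mentions is simple to express. This is a generic illustration (the function and numbers are not from any particular game): both the player and the AI resolve attacks through the same visible dice roll, so neither side gets hidden aiming advantages.

```python
import random

def to_hit(attacker_skill, defender_ac, rng=random.randint):
    """Classic tabletop-style attack resolution: roll a d20, add the
    attacker's skill bonus, and compare against the defender's armour
    class. Returns True on a hit."""
    return rng(1, 20) + attacker_skill >= defender_ac
```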

Jason Bakker
Savas: The poisoning variables idea sounds interesting, but it would probably be very difficult to iron out the totally stupid mistakes that are talked about in the article from it.



John Smith: Try explaining to your producer that you need to develop two different physics systems - one for the game to use, and one for the A.I. ;) Seriously though, your example assumes that with the right interfaces, an A.I. "brain" would act similarly to a human brain, which simply isn't true - even if your A.I. was unable to be accurate, it doesn't mean that it would feel like you're playing against anything analogous to a human opponent.



Games are experiences in which you need to create a suspension of disbelief, and the A.I. is a tool in that - there will never be a fair fight between an A.I. and a human, but with your A.I. making the right mistakes, you can trick your player into thinking that there is. And at the end of the day, that's all that matters! ;)

JM Janzen
This connects particularly with 'Throwing the Game' on the second page. While I understand that it wasn't the spirit of that paragraph to suggest that AIs throw games like humans throw games, I have noticed a few video games (particularly RTSes) where the computer just seems to /give up/ at a certain point.



C&C 3 is case in point for this.

Steve Breslin
I completely agree about increasing the intelligence for this purpose; merely adding a random factor or reducing the number of calculations (decreasing intelligence in the programming sense) does not lead to the best gameplay.



You mention the prospect of setting up a situation that the player can exploit. That's definitely a fine solution to the handicapping problem! My interest is in the preconditions for setting up such a situation.



As you show with the poker examples, there's usually a reason that a player makes an error, or a type of error which is more typical. (Folding to a big raise, against the odds. That one is a very good and clear example! Folding when "unclear" about the odds... I think this means that you can teach the machine which hands are complicated for a human to calculate the odds for.)



Let me give another example to sharpen the angle I'm getting at.



Let's say we were writing a chess program aimed at beginners (defining 'beginner' as not absolute novice, but rated under 1400). We'd want the machine to make beginner-like mistakes. The first step in "dumbing-down" the AI would be to figure out what kind of mistakes beginners make, and under what conditions. We'd be doing pretty well if we programmed the machine to threaten checkmate when it can be beaten by mate in three. We'd be doing even better if we figured out which kinds of mistakes are more obvious for players (and avoid those). So, the machine wouldn't fall for a sufficiently "obvious" mate in three.



In other words, we'd want a function which evaluates the "cleverness" of a move; the machine, imitating a beginner, is less likely to notice clever moves. Then, when we're setting up a scenario for the player to exploit, we have a believable reason for making the "error."
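Steve's "cleverness" function could be sketched like this. Everything here is hypothetical (the move representation, the noticing rule, the scores): each candidate move carries a strength score and a cleverness score, and a beginner-level AI only notices a clever move with a probability that falls as the move's cleverness exceeds its skill, producing believable oversights rather than random blunders.

```python
def pick_move(moves, skill, rng):
    """Pick a move for an imitated beginner. Each move is a dict with a
    'strength' score and a 'cleverness' score; rng() returns a float in
    [0, 1). Moves too clever for the skill level may go unnoticed."""
    noticed = [m for m in moves
               if m["cleverness"] <= skill or rng() < skill / m["cleverness"]]
    if not noticed:            # saw nothing good: fall back to any move
        noticed = moves
    return max(noticed, key=lambda m: m["strength"])
```

When you want to hand the player an exploitable scenario, the "error" now has a believable cause: the winning move was simply too clever for this opponent to see.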

Kriss Daniels
Yup



The AI is not playing the game to win.



The AI is a granny playing with a spoilt brat who throws a tantrum when it loses.



Personally I prefer AI with exploitable weak points, but most gamers are of the spoilt brat variety.

Ian Hardingham
I love this - I had no idea chess games were making dynamic meta-game puzzles for us now.



UT used to give the bots fewer shards in the Flak Cannon at lower AI levels, and make the rockets slower... pretty crazy.



We talked about this on the games-and-industry podcast Visiting The Village this week (www.visitingthevillage.com).

Michial Green II
Interesting, I must say. As a gamer/developer in training, it is quite obvious to me how most AI make their decisions. I remember when it was very common for programmers to create AI personalities: AI with a specific bias toward a certain strategy or decision.



For example, take the game Monopoly. There would be specific strategies or "tendencies", predetermined by the programmer, that took top priority over other decisions regardless of how useful they were.



For instance, Maud will always buy level-four properties (the green and blue properties) if able, and place houses on them as soon as possible. This includes being willing to offer VERY large sums of money to acquire those properties, or to give up any lesser (according to the bias) property, while at the same time shunning level-one properties entirely. It even extends to certain reckless actions, such as trading away properties that were not level four, even when it owned two out of three of a lesser color group.



This sort of AI gave the illusion that the personality had a strategy. You would notice how "badly" he wanted Boardwalk, and you would hold out for a better offer. You got the feeling you were outsmarting him, and if you picked up on the tendencies, you felt much smarter, and rewarded for your victory over them.



I think it would be a nice little tweak to shooter games if personalities were used. Players would know that there are very tactical enemies as well as overly aggressive (reckless) ones. Players could face waves of stupid enemies mixed with a few "hard to kill" Bruce Willis types. I think a dynamic AI like this would enrich most shooters.

Barton Massey
In his most excellent book "One Jump Ahead", Jonathan Schaeffer talks about how as the play of his human-machine checkers champion Chinook improved, it got progressively less fun to play against. The essence of adversary search is minimax, and risky positions are tricky for the computer to evaluate, so it is always better to avoid them.



Schaeffer recommends that you tune your player so that it is risk-seeking. This doesn't necessarily make it inherently weaker, but if it gives its opponent an interesting game, they will be happier even if they lose.

Luis Guimaraes
@Michial Green II: There is a bot for CS where you can configure every bot entity and also create new ones; also, in UT3 the AI code is tied to the character model.

