Gamasutra: The Art & Business of Making Games
The Technology of F.E.A.R. 2: An Interview on Engine and AI Development
December 19, 2008 | Page 6 of 6

Can you tell me what the "goal-oriented action planning system" is?

MR: Right, G.O.A.P.S.! That's the way the AI decides what behaviors to use. One of the more standard methodologies for choosing behaviors in games is what's called a "state machine": basically a predetermined chain of actions where the AI designer has said, "You will do these things in this situation," and the AI goes ahead and does those things. It doesn't necessarily allow the AI to be very flexible.

So we have this goal-oriented system that involves two aspects: There is a list of goals that the AI wants to accomplish -- a typical goal would be "kill enemy"; another one would be "get to cover"; a whole bunch of those things. The goals don't actually do anything aside from make the AI decide that he wants to do these things. Then there is a flat list of actions that the AI will choose from, and the planner solves for those actions backwards.
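The split described above -- goals as desired world states, plus one flat list of actions -- can be sketched as data. This is a minimal illustration in Python; the world-state symbols and action names are invented for the example, not taken from Monolith's engine.

```python
from dataclasses import dataclass

# World state is modeled here as a flat dict of symbols -> booleans.
# A goal is nothing more than a desired world state; actions live in
# one flat list. All symbol and action names are hypothetical.

@dataclass
class Action:
    name: str
    preconditions: dict  # what must already be true to run this action
    effects: dict        # what the action makes true

goals = {
    "KillEnemy":  {"enemy_dead": True},
    "GetToCover": {"in_cover": True},
}

actions = [
    Action("AttackRanged",
           {"has_weapon": True, "weapon_loaded": True},
           {"enemy_dead": True}),
    Action("GotoNode", {}, {"in_cover": True}),
]
```

Note that the goals carry no behavior of their own: each one only states what should become true, and it is the planner's job to work out which actions get there.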

For instance, let's say that the AI wanted to kill the enemy. That would mean that there are a whole bunch of actions that satisfy the requirement for there being a dead enemy; let's say, "Attack with ranged weapon", right? He has a ranged weapon. That's a pretty easy chain right there, like, "I want to kill somebody; this action kills somebody; I'll go ahead and do this action."

Where the power comes from is the fact that those actions themselves can have conditions that they need to have met. So, "attack with ranged weapon" may have conditions that say, "I have to have a weapon, and I have to have it loaded. Go find me more actions that satisfy those requirements." The AI just decided at this point that he's going to attack with a ranged weapon; he now has to figure out how he can get a ranged weapon, and how he can get it loaded. So, at that point, he may find another action, which is "go to this weapon", and then he may find another action which is "reload your weapon".
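The backward chaining just described can be sketched as a tiny regressive planner: take an unmet goal condition, find an action whose effects satisfy it, then recursively satisfy that action's preconditions. This is a simplified illustration over a boolean world state with invented names; a production planner like Monolith's would typically search with costs (e.g. A*-style) rather than this naive depth-first recursion.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: dict  # symbols that must hold before acting
    effects: dict        # symbols the action makes true

def plan(world, goal, actions, depth=8):
    """Backward-chain: pick an unmet goal condition, find an action whose
    effects satisfy it, satisfy that action's preconditions first, then
    satisfy whatever remains of the goal. Returns a list of Actions or None."""
    unmet = {k: v for k, v in goal.items() if world.get(k) != v}
    if not unmet:
        return []
    if depth == 0:
        return None  # guard against runaway recursion
    key, val = next(iter(unmet.items()))
    for act in actions:
        if act.effects.get(key) == val:
            before = plan(world, act.preconditions, actions, depth - 1)
            if before is None:
                continue
            # simulate the sub-plan plus this action, then finish the goal
            new_world = dict(world)
            for a in before + [act]:
                new_world.update(a.effects)
            rest = plan(new_world, goal, actions, depth - 1)
            if rest is not None:
                return before + [act] + rest
    return None

# The exact scenario from the interview: unarmed, weapon unloaded,
# and the only goal is a dead enemy.
actions = [
    Action("AttackRanged",
           {"has_weapon": True, "weapon_loaded": True},
           {"enemy_dead": True}),
    Action("GotoWeapon", {}, {"has_weapon": True}),
    Action("ReloadWeapon", {"has_weapon": True}, {"weapon_loaded": True}),
]
world = {"enemy_dead": False, "has_weapon": False, "weapon_loaded": False}
steps = plan(world, {"enemy_dead": True}, actions)
# steps -> GotoWeapon, then ReloadWeapon, then AttackRanged
```

Nobody scripted that three-step sequence; it falls out of solving "enemy_dead" backwards through the preconditions of "AttackRanged".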

So, that whole chain that I just described to you -- him doing three things in a row: going to pick up a weapon, loading the weapon, and then going to attack the player -- was not something that the level designer or the AI engineer had to program directly; it was just the fact that we have these aggregate actions that the planner can pick from at will. Does that make sense?

Yeah, it does, totally. How complicated can those actions get? I mean, obviously, some of it's limited by the nature of the game -- obviously, guns, grenades, cover.

MR: Yeah. The individual actions that the AI can take, we try to keep them pretty small and pretty atomic, so that they can be reused by other goals. I mean, the "go-to node" action can be used by a lot of different goals; it satisfies a lot of different things. And you're talking about how complex an actual chain can be?
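The reuse point can be made concrete: because actions only declare effects, one atomic action can slot into chains for many different goals -- it is picked up wherever its effects match a precondition. A toy check, with all names hypothetical:

```python
# One atomic action's effects, compared against the preconditions that
# several other actions require (all names are invented for illustration).
goto_node_effects = {"at_node": True}

preconditions_by_action = {
    "AttackFromCover": {"at_node": True, "has_weapon": True},
    "Ambush":          {"at_node": True},
    "ReloadWeapon":    {"has_weapon": True},
}

# Which actions can chain through a "go-to node" step, i.e. have at
# least one precondition that its effects satisfy?
reusers = sorted(
    name for name, pre in preconditions_by_action.items()
    if any(goto_node_effects.get(k) == v for k, v in pre.items())
)
# reusers -> ["Ambush", "AttackFromCover"]
```

The same movement action serves both an attack chain and an ambush chain without either goal knowing about the other.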

Obviously, it makes a lot more sense for, as you said, the individual actions to be as granular as possible, but how complicated can these overarching goals get?

MR: More often than not, they are not complicated. That's the baseline. I mean, more often than not, he has a weapon, so all he needs to do is get to cover and shoot at you from cover.

It's the unique situations that have the more complex chains. Like, we didn't have to code anything in order for him to pick up a weapon. So if the player happens, let's say, to throw an incendiary grenade at the AI: as part of the "I'm on fire" behavior, he drops his weapon. We didn't have to go in there and code anything for him to go pick his weapon back up afterwards; he just knows to go pick it up, because he wants to kill the enemy and picking up a weapon is a step in that plan.

There are more complicated behaviors, from an architectural standpoint, that don't necessarily seem complicated from the player's point of view. So we may have a complicated chain of like four or five different actions that happen in a row, but from the player's point of view, it's really just him displacing to another cover node, patting down an ally who had been on fire, or reacting to a shock grenade. They can get complicated behind the player's back, but it doesn't necessarily look that complicated to them.

That's the best outcome for a player, actually, isn't it? That's kind of the weird thing about AI. You're probably doing your best job when the player doesn't notice it.

MR: Oh yeah. I mean, if the player notices the AI, there are two cases: either he's done something great, or he's been running against the wall for the last three seconds. And, generally, good AI is not noticed; great AI is noticed, but that's far more rare.

I think we've made some substantial improvements from F.E.A.R. 1 to F.E.A.R. 2 -- that does not necessarily mean that the AI is any more difficult to kill, it just means that the environment is richer, and the player is more engaged in the combat. Because, I feel, we've made the AI seem a little bit more realistic. He's not more difficult, he's just more realistic.

You do, to an extent, have some companion-type AI characters who fight alongside you in games, and that's also been a little bit touchy in general. There are a lot of complicated issues there, whether it's how effective they are, how effective they aren't -- intentionally, or accidentally.

MR: Yeah. I mean, one of the main problems is that it's difficult for the AI system to understand the verbs of the player, or player intent, you know? There are some games -- I feel like Rainbow Six does it particularly well, and we do too -- if you narrow the focus of the engagement, so that pretty much the only thing you can do is stand on one side of the environment and fight the AI on the other side... then it becomes very successful, because the AI really knows what the player is trying to do: he's trying to kill the bad guys.

Halo doesn't necessarily have this problem, but as an example of the kind of problem other games run into: when an AI tries to jump into a Warthog, it's simple stuff like, "Does the player want to jump into the driver's seat? Or does he want me to jump out of the driver's seat so that I can jump on the turret?" You know, it's just this complicated thing where the AI is trying to magically figure out what the player is trying to do.

Multiplayer players have problems with that, too! (laughs)

MR: Yeah, exactly. It's a problem overall. Hopefully it's mitigated in multiplayer by voice chat -- and perhaps that's something that we'll see. Like I said, I think Rainbow Six and those games do it kind of well, and I think that's because the actions the companion AI takes are directed by the player. Like, the player explicitly tells the AI: "Go break down this door!" It's not up to the AI to decide whether or not that happens. So, if there's some direction that the player can give the AI, I think the problems get a lot smaller.

