The Technology of F.E.A.R. 2: An Interview on Engine and AI Development
December 19, 2008 Page 4 of 6
Matthew Rice, senior software engineer, AI
What do you see? What is "an improvement" to AI? What is your expectation of where AI is going, and what is achievable within this generation?
MR: I think you've seen a couple different things that have happened in this generation, and they'll continue on. In terms of smaller, tactical scale AI, you've seen mild improvements in terms of the way they plan, and the way they challenge you in combat. But even more than that, they look better while navigating, and they look better while moving through the space. Navigation has increased across the board, and the way they interact with the environment has improved across the board.
Globally, across the entire games industry, you've seen more AI in games. Games like Assassin's Creed. We're getting to that level where you're moving through large crowds. You just didn't see that before; in previous generations, you'd step into a nightclub and it'd be barren, devoid of life, but now you're actually seeing fully populated cityscapes.
So there have been big improvements in AI, in general, this past generation. There's only so close that you can get before you hit the uncanny valley, in terms of games, and I think we're approaching that right now. So, there will be a big leap at some point soon, when things such as the great small-scale tactical AI that a lot of games currently have get integrated with the crowd AI that you're seeing.
And you're also seeing other things, like people are experimenting with AI spawning, and spawning dynamically based on the player's condition and health, to try and tempo the game differently every time you play it.
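The dynamic-spawning idea Rice describes here could be sketched roughly like this. This is a minimal, illustrative toy, not anything from Monolith's engine; the function name, thresholds, and spawn counts are all assumptions, chosen only to show how a spawner might track the player's condition to vary the game's tempo.

```python
# Illustrative sketch of health-driven dynamic spawning, in the spirit
# of what Rice describes (and what Left 4 Dead's "AI Director"
# popularized). All names and thresholds here are assumptions.

def spawn_count(player_health: float, base_spawns: int = 4) -> int:
    """Scale the next wave's size by the player's condition:
    a struggling player gets a breather, a healthy one gets pressure."""
    if player_health < 0.25:       # player is hurting: back off
        return max(1, base_spawns // 2)
    if player_health > 0.75:       # player is cruising: ramp up
        return base_spawns + 2
    return base_spawns             # otherwise keep the baseline tempo

# Because spawns track the player, each playthrough tempos differently.
assert spawn_count(0.10) == 2   # wounded player, smaller wave
assert spawn_count(0.90) == 6   # healthy player, bigger wave
```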
You're talking about stuff like what Left 4 Dead is attempting.
MR: Yeah, exactly. We've talked about that here at the office, and that's some place where we see an aspect of AI developing.
More than just governing the way characters behave, and the way NPCs behave in the game, AI could be more impressively used, or maybe more effectively used, in things like that.
MR: In terms of making an individual AI encounter better? To some extent, we're reaching the same problems that we're having in graphics. Every generation, the technology gets better, right? We're supposed to be able to push more polygons; we're supposed to have more shaders on the screen. The problem with that being that you now have to have artists create X amount more content. So I think we're going to start seeing, in terms of graphics, you're starting to see a slowdown. Between the PS1 and the PS2, there was a huge leap, I feel, in graphical fidelity; and then less so between the PS2 and the PS3.
And I think we're kind of reaching that same point with AI. I mean, we can make the AI incredibly more complicated, but it still requires animators to create thousands of animations, versus hundreds of animations; it requires the character artists to create far more detailed maps, and when you create far more detailed character maps, players expect full facial animation, which requires even more artist content.
And then the AI needs to know more about the world, in order to behave that much better, so that means that the level designers spend a lot more time carpeting a level with AI hints. So, one of the big events that I think we'll see soon is a lot more automation in the way that AI is placed in the game. Which doesn't necessarily mean a direct influence on the way the player perceives it, but it'll be much easier for the game makers to make the game, which means that they'll be able to focus more time on improving the AI in other areas.
Is some sort of proceduralization really what is required to stay in step?
MR: Oh, yeah. I definitely feel that. Left 4 Dead is, obviously, leading the charge on that. For instance, an example is the way nav meshes are made. AI moves throughout the world based on where these nav meshes are placed; and it used to be, four or five years ago, that everyone placed nav meshes manually. You had to have a level designer go in there and put each little polygon into the world, and that polygon represented a space that the AI knew about, and could navigate through.
Now we're seeing these things happen more dynamically. Basically, there are companies out there that are developing tools that they're using internally, in which the level designer doesn't have to add a nav mesh at all; it just carpets the area with a nav mesh for any given AI size.
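The "carpeting" Rice mentions could be sketched on a toy grid. Real auto-nav-mesh tools voxelize the level geometry and build actual polygon meshes; this hypothetical, simplified version just marks grid cells walkable when an agent of a given radius clears every obstacle, which is enough to show why the same level yields a different carpet for each AI size.

```python
# Illustrative sketch: "carpeting" a level with walkable cells for a
# given AI size, as auto-nav-mesh tools do instead of hand-placed
# polygons. The grid, obstacles, and radius test are toy assumptions.

def carpet_nav_grid(width, height, obstacles, agent_radius):
    """Return the set of (x, y) cells an agent of the given radius can occupy."""
    walkable = set()
    for x in range(width):
        for y in range(height):
            # A cell is walkable if no obstacle lies within the agent's radius.
            if all((x - ox) ** 2 + (y - oy) ** 2 > agent_radius ** 2
                   for ox, oy in obstacles):
                walkable.add((x, y))
    return walkable

# Different AI sizes get different carpets from the same geometry:
small = carpet_nav_grid(10, 10, {(5, 5)}, agent_radius=1)
large = carpet_nav_grid(10, 10, {(5, 5)}, agent_radius=3)
assert len(large) < len(small)  # the bigger agent can reach fewer cells
```

No designer input beyond the level geometry is needed; regenerating for a new AI size is just another call with a different radius.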
Same thing with animation: a huge part of what makes the AI look smart is the animations, and it used to be that animation systems would play one single animation, and then play the next single animation; and if you wanted to play one animation that blended nicely into another animation, you either had to line the animations up perfectly, or you had to have some cheap generic blend.
Now we're seeing animation systems come online, both middleware packages and solutions that companies are making proprietary internally, that are much more complicated; they will actually take into account what the AI is doing, like whether or not he's attempting to lean into the turn, or whether or not he's slowing down from a massive run, and choose animations appropriately. So, that's a form of automation that, to have that same effect, would have taken a character artist and an animation engineer a great amount of time for each individual character, because of the large number of animations that we have.
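The movement-aware clip selection Rice describes might look something like the sketch below. The clip names, speeds, and thresholds are all invented for illustration; the point is only that the system inspects what the AI is doing, leaning into a turn, decelerating from a run, and picks an appropriate clip, rather than a designer hand-authoring every transition.

```python
# Illustrative sketch of movement-state-aware animation selection.
# Clip names and thresholds are assumptions, not any real engine's API.

def select_locomotion_clip(speed, prev_speed, turn_rate):
    """Choose an animation clip from the character's current movement state."""
    if speed < 0.1:
        return "idle"
    if prev_speed - speed > 2.0:               # hard deceleration from a run
        return "run_to_stop"
    if abs(turn_rate) > 1.0 and speed > 3.0:   # leaning into a fast turn
        return "run_lean_left" if turn_rate > 0 else "run_lean_right"
    return "run" if speed > 3.0 else "walk"

# A sprinting character turning hard gets the lean clip automatically:
assert select_locomotion_clip(5.0, 5.0, 1.5) == "run_lean_left"
assert select_locomotion_clip(1.0, 6.0, 0.0) == "run_to_stop"
```

A real system would blend between clips rather than switch discretely, but the selection logic is the part that replaces per-character hand authoring.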
So there's automation coming. I don't know if we'll see automation in terms of the behavioral aspects of AI, in terms of what the AI decides to do; more automation in "the AI has decided to do these things, and it can now do these things better".