While the navmesh and navmap give the AI an understanding of the environment, they need a way to evaluate the space and pick specific locations that are best suited for the situation. For this we use our Combat Behavior system. This is a weighted influence map of positions where the best rated point is chosen, evaluated at a rate of once per second per AI. The overall set of points that the system considers is based on a collection of several things:
- A fixed grid of points that applies itself over the world and is filtered by the navmesh
- Cover points
- Zone markers, which place a point at their center (I'll cover these a bit later)
The weighting of these points is determined by parameters we set up in a script. Here's an example of one from Uncharted 2 with comments on what the parameters do:
:dist-target-attract 15.0 ;; stay this close to your target
:dist-enemy-repel 5.0 ;; do not get closer than this to your target
:dist-friend-repel 2.0 ;; do not get this close to your friends
:cover-weight 10.0 ;; prefer points that are cover
:cover-move-range 15.0 ;; how far you'll move to get to cover
:cover-target-exclude-radius 8.0 ;; ignore covers this close to your target
:cover-sticky-factor 1.0 ;; prefer a cover you're already in
:flank-target-front 0.0 ;; prefer to be in front of your target
:flank-target-side 0.0 ;; prefer to be on the side of your target
:flank-target-rear 0.0 ;; prefer to be at the rear of your target
:target-visibility-weight 5.0 ;; prefer points that can see your target
While these are not all of the values at our disposal, they're some of the most used. This is a trimmed-down example of our mid-range Combat Behavior. To break it down: the AI using this will want to stay between 5 and 15 meters away from their target, while keeping at least 2 meters away from their friends. They prefer to be in cover and will move up to 15 meters to get to it, but they don't want to pick a cover spot within 8 meters of their target. They'll also prefer a cover they're already in, as well as points that can see their target.
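To make the weighting concrete, here's a rough sketch of how a single candidate point might be scored against these parameters. The scoring math is purely illustrative (the actual engine's formula isn't public); only the parameter names mirror the script above.

```python
import math

# Illustrative only: the real engine's weighting formula isn't public.
# Parameter names mirror the mid-range Combat Behavior script above.
BEHAVIOR = {
    "dist-target-attract": 15.0,
    "dist-enemy-repel": 5.0,
    "dist-friend-repel": 2.0,
    "cover-weight": 10.0,
    "cover-target-exclude-radius": 8.0,
    "cover-sticky-factor": 1.0,
    "target-visibility-weight": 5.0,
}

def score_point(point, target, friends, behavior,
                is_cover, can_see_target, currently_in_this_cover):
    score = 0.0
    dist_to_target = math.dist(point, target)

    # Penalize points outside the desired [repel, attract] distance band.
    if dist_to_target > behavior["dist-target-attract"]:
        score -= dist_to_target - behavior["dist-target-attract"]
    if dist_to_target < behavior["dist-enemy-repel"]:
        score -= behavior["dist-enemy-repel"] - dist_to_target

    # Penalize crowding friendly AI.
    for friend in friends:
        if math.dist(point, friend) < behavior["dist-friend-repel"]:
            score -= 1.0

    # Reward cover, but ignore covers too close to the target.
    if is_cover and dist_to_target > behavior["cover-target-exclude-radius"]:
        score += behavior["cover-weight"]
        if currently_in_this_cover:
            score += behavior["cover-sticky-factor"]

    # Reward line of sight to the target.
    if can_see_target:
        score += behavior["target-visibility-weight"]

    return score
```

The best-rated point across all candidates would then be chosen, re-evaluated once per second per AI as described above.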
Combat Behaviors give us a lot of flexibility in how we want the AI to behave. By building a library of them and switching between them as needed, we can account for almost any scenario we need to set up. They're also very handy in large combat spaces where the player can take multiple routes: since the AI are always re-evaluating the situation and picking the best spot according to the set parameters, they aren't reliant on the player taking a linear path. But there are some downsides to be aware of that we've had to solve.
Because this is a fuzzy system, errant results can occur that you have to be careful to catch. For example, depending on conditions the system might pick a different point on one evaluation, then go back to the previous point on the next. Some filtering has to be done to get the AI to respond sensibly, or they can end up bouncing between points needlessly. There are some additional parameters we can put into the behavior to increase stability and reject this kind of oscillation.
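One simple form that filtering can take is hysteresis: keep the current point unless a new candidate beats it by some margin. This is our own illustration of the idea; the margin value and function shape are hypothetical, not the engine's actual stability parameters.

```python
# A sketch of hysteresis filtering to damp point "bouncing". The margin
# value and shape of this function are hypothetical, not engine code.
def choose_point(current, candidates, scores, margin=2.0):
    best = max(candidates, key=lambda p: scores[p])
    if current is not None and current in scores:
        # Stick with the current point unless the new best is clearly better.
        if scores[best] < scores[current] + margin:
            return current
    return best
```

A point whose score dips and recovers between evaluations no longer drags the AI back and forth, because the switch only happens when the improvement is decisive.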
Another issue that's very important to keep in mind is that the path finding system is completely separate from the Combat Behavior system. The behavior only picks the destination, while the path finding figures out how to get there.
For example, if the behavior has an enemy repel distance of 5 meters, the AI could still travel within 5 meters of their target on the way to their destination. The system could also pick points that require an outlandish path to reach (i.e. the point is just on the opposite side of a wall, forcing the AI to run a long, winding route to get there).
While we've come up with some methods to handle these kinds of cases, it's still an area we work to improve. It's a good example of the downside of having separate, discrete systems: we have to make them share data better.
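One mitigation along these lines is to compare the actual path length against the straight-line distance and reject candidates whose paths are wildly longer. In the sketch below, a toy grid BFS stands in for the real pathfinder, and the 3.0 ratio is an arbitrary illustrative threshold.

```python
import math
from collections import deque

def path_length(grid, start, goal):
    """Shortest 4-connected path length over a set of walkable cells, or None."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == goal:
            return d
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if step in grid and step not in seen:
                seen.add(step)
                frontier.append((step, d + 1))
    return None

def reasonable_destination(grid, start, goal, max_ratio=3.0):
    """Reject points whose path is far longer than the straight-line distance."""
    straight = math.dist(start, goal)
    path = path_length(grid, start, goal)
    return path is not None and path <= max(1.0, straight) * max_ratio
```

A point just on the other side of a wall fails this check, because the winding route around the wall dwarfs the straight-line distance.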
Another tool at our disposal that works in conjunction with the Combat Behaviors is zones. A Zone is specified by a point in the level called a Zone Marker, along with a radius and height value that define the zone's boundaries. Any AI that has a zone set will only consider Combat Behavior positions that land within that zone. Zones can be set dynamically at any time, including onto objects (such as the player). This lets us focus the AI on specific locations and objects as combat conditions demand.
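Assuming the radius and height describe a vertical cylinder around the marker (the exact shape the engine uses isn't spelled out here), the zone containment filter might look like this:

```python
import math

# Sketch of a zone test, assuming the zone is a vertical cylinder
# centered on the Zone Marker (an assumption, not confirmed engine behavior).
def point_in_zone(point, marker, radius, height):
    px, py, pz = point
    mx, my, mz = marker
    horizontal = math.hypot(px - mx, py - my)  # XY distance to the marker
    return horizontal <= radius and abs(pz - mz) <= height

def filter_by_zone(points, marker, radius, height):
    """Keep only the Combat Behavior candidate points inside the zone."""
    return [p for p in points if point_in_zone(p, marker, radius, height)]
```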
The last input the AI use to formulate their decisions is their sensory systems (vision and hearing). One of the things we wanted to change from Uncharted 1 was to keep the AI from always knowing where their target is. We needed this to introduce the new stealth gameplay mechanics, as well as to make the AI seem less omniscient during combat.
The vision system we came up with is based on a set of cones and timers. There's an outer peripheral cone and an inner direct cone. Each cone is defined by four parameters: the vertical angle, the horizontal angle, the range, and the acquire time.
The angle values determine the shape of the cone, while the range determines how far out it extends (beyond which the AI cannot see). The acquire time determines how long an enemy has to be within the vision cone to trip its timer. This value is also scaled along the range of the cone: it's the full value at the outermost edge of the range and shortens linearly the closer the target is.
The two cones and their timers work together when it comes to spotting their targets. When an enemy enters the peripheral cone, the timer starts to count down (and remember this is scaled based on distance). If the timer expires, this signals the AI that they think they might have seen something. They then turn towards the disturbance and try to get the location of the disturbance within their direct vision cone. If the enemy target enters the direct cone, its timer will start to count down. If the direct cone's timer expires, then this identifies the target to the AI.
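The cone test and distance-scaled timer described above can be sketched like this. The vector math is a generic implementation (with a simplified vertical check that assumes a level gaze), not engine code:

```python
import math

# Generic sketch of a vision-cone check. Angles are half-angles in degrees.
# The vertical check assumes the AI's gaze is level (a simplification).
def in_cone(eye_pos, eye_dir, target_pos, h_angle, v_angle, max_range):
    dx = [t - e for t, e in zip(target_pos, eye_pos)]
    dist = math.sqrt(sum(c * c for c in dx))
    if dist > max_range or dist == 0.0:
        return False
    # Horizontal angle between gaze and target in the XY plane.
    yaw = math.degrees(math.atan2(dx[1], dx[0]) - math.atan2(eye_dir[1], eye_dir[0]))
    yaw = (yaw + 180.0) % 360.0 - 180.0  # wrap into [-180, 180]
    pitch = math.degrees(math.asin(dx[2] / dist))
    return abs(yaw) <= h_angle and abs(pitch) <= v_angle

def scaled_acquire_time(dist, max_range, acquire_time):
    """Full acquire time at the edge of the range, shrinking linearly when closer."""
    return acquire_time * (dist / max_range)
```

The two-stage flow would then run the peripheral cone's timer first, and only start the direct cone's timer once the AI has turned toward the disturbance.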
This system gives the AI the ability to lose sight of their contacts and lets the player be more strategic when fighting them. It also makes the AI feel like they're behaving more naturally, the way a human would. A great example: while the AI is attacking, the player enters a long piece of low cover at one end, then travels to the opposite end without being seen. The AI will still be shooting at where they last saw the player, giving the player some extra time to get their shots off when they pop out of cover, since the AI have to re-acquire and turn towards them.
A single vision definition for the AI comprises 3 sets of cones, with each set being used for different contextual circumstances during gameplay. The first set is used for ambient situations, the second set is for a special condition called preoccupied, and the third set is used during combat.
Ambient situations are when the AI has no knowledge of an enemy in the area. This is primarily the state the AI is in when starting a stealth encounter. Ambient is set up with a smaller direct cone and longer acquire times to give the player the ability to sneak around the environment. Preoccupied is a special case that can be used during Ambient states.
The idea is that if the AI is doing something that they would be focusing intently on, then they're not going to be paying as much attention to what's going on around them. These are set up to be pretty short in distance and long in acquire times. Combat is the set used when the AI is engaged with their targets. It is set up so that the acquire times are much shorter, and the direct cone is much larger to account for a heightened awareness.
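One way to organize the three contextual sets is a simple lookup, switched on the AI's state. All of the numbers here are invented for illustration; the text only establishes the relative relationships (ambient: smaller direct cone, longer acquire times; preoccupied: short range, long acquire times; combat: wider and faster).

```python
# Invented example values; only the relative relationships between the
# three sets (described in the text above) are meaningful.
VISION_SETS = {
    "ambient":     {"direct_h_angle": 30.0, "range": 25.0, "acquire_time": 2.0},
    "preoccupied": {"direct_h_angle": 30.0, "range": 8.0,  "acquire_time": 3.5},
    "combat":      {"direct_h_angle": 60.0, "range": 40.0, "acquire_time": 0.4},
}

def vision_set_for(in_combat, preoccupied):
    """Pick the contextual cone set for the AI's current state."""
    if in_combat:
        return VISION_SETS["combat"]
    if preoccupied:
        return VISION_SETS["preoccupied"]
    return VISION_SETS["ambient"]
```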
And that's a very in-depth technical look at a number of our AI systems, which brings this installment to an end. The next installment will deal with gameplay techniques and philosophies we used, as well as some lessons learned. So until next time (which hopefully won't be as long a wait!)