GDC 2005 Proceeding: Handling Complexity in the Halo 2 AI
March 11, 2005 (Page 2 of 2)

Custom behaviors

We can take a similar approach when considering the problem of character-type to character-type variety. Presumably, in fashioning the repertoire of a new character, we would like the high-level structure to remain the same - the kind of structure shown in Figure 1 - but the details of the transition-triggers to vary. In some cases we will simply tweak behavior parameters to get the desired effect. In other cases, where more specific triggers are needed, we will use custom behaviors. Like stimulus behaviors, custom behaviors are inserted into the tree - in this case in a preprocess step, so that the final prioritized list of children does not need to be recomputed every time. In this way, we can add any number of character-specific impulses, behaviors or behavior subtrees - and it is through these additions that a good amount of the personality of the characters comes through (for example, grunts, the cowardly creatures of Halo 2, have an inordinate number of retreat impulses, whereas marines have extra action-coordination behaviors to get them to work together more cohesively).
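The preprocess step might be sketched as follows. This is a hypothetical illustration, not the Halo 2 implementation; all names (build_tree, the behavior dictionaries, the priority field) are invented for the example. The point is that the merge and sort happen once, at tree-construction time, so the per-tick decision loop sees an already-prioritized child list.

```python
# Hypothetical sketch: merging character-specific custom behaviors into a
# node's child list once, in a preprocess step. All names are illustrative.

def build_tree(base_children, custom_children):
    """Merge custom behaviors into the base child list and prioritize once,
    so the final ordering need not be recomputed every tick."""
    merged = base_children + custom_children
    merged.sort(key=lambda b: b["priority"], reverse=True)  # higher runs first
    return merged

base = [
    {"name": "engage", "priority": 50},
    {"name": "idle",   "priority": 10},
]
# Grunts get extra retreat impulses, giving them their cowardly personality.
grunt_extra = [{"name": "retreat_scared", "priority": 80}]

grunt_root = build_tree(base, grunt_extra)
# The retreat impulse now outranks engagement whenever it is relevant.
```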

This approach - of taking a solid base and then adding stuff onto it - is one we will return to:

Design principle #4: Take something that works and then vary from it

Of course, if a truly fundamentally different brain is necessary, then starting from a common base may be impractical. Such was the case, in Halo 2, with the flood swarm characters, which had such wildly different basic needs from ordinary characters that they had a behavior DAG that was ENTIRELY custom.

Memory and Memory

With large trees, we face another challenge: storage. In an ideal world, each AI would have an entire tree allocated to it, with each behavior having a persistent amount of storage allocated to it, so that any state necessary for its functioning would simply always be available. However, assuming about 100 actors allocated at a time, about 60 behaviors in the average tree, and each behavior taking up about 32 bytes of memory, this gives us about 192K of persistent behavior storage (100 x 60 x 32 bytes). Clearly, as the tree grows even further, this becomes even more of a memory burden, especially for a platform like the Xbox.

We can cut down on this burden considerably if we note that in the vast majority of cases, we are only really interested in a small number of behaviors - those that are actually running (the current leaf, its parent, its grandparent and so on up the tree). The obvious optimization to make is to create a small pool of state memory for each actor, divided into chunks corresponding to levels of the hierarchy. The tree becomes a free-standing static structure (i.e. it is not allocated per actor) and the behaviors themselves become code fragments that operate on a chunk. (The same sort of memory usage can be obtained in an object-oriented way if parent behavior objects only instantiate their children at the time that the children are selected. This was the approach taken in [Alt04].) Our memory usage suddenly becomes far more efficient: 100 actors times 64 bytes (an upper bound on the amount of behavior storage needed) times 4 layers (in the case of Halo 2), or about 25K. Very importantly, this number only grows with the maximum depth of the tree, not the number of behaviors.
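A minimal sketch of this layout, with hypothetical names (Actor, state_chunks): the tree itself is shared and static, while each actor owns only one fixed-size state chunk per level of the hierarchy, reused by whichever behavior is currently running at that depth.

```python
# Hypothetical sketch of the per-layer state pool. The static tree is shared
# across all actors; each actor owns only MAX_DEPTH fixed-size chunks.

MAX_DEPTH = 4    # depth of the Halo 2 tree, per the text
CHUNK_SIZE = 64  # upper bound on per-behavior state, per the text

class Actor:
    def __init__(self):
        # One chunk per level; which behavior "owns" a chunk changes as the
        # running path through the tree changes.
        self.state_chunks = [bytearray(CHUNK_SIZE) for _ in range(MAX_DEPTH)]

def per_actor_cost():
    return MAX_DEPTH * CHUNK_SIZE

# 100 actors x 4 layers x 64 bytes = 25,600 bytes (~25K, as in the text),
# independent of the total number of behaviors in the tree.
total = 100 * per_actor_cost()
```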

This leaves us with another problem, however: persistent behavior state. There are numerous instances in the Halo 2 repertoire where behaviors are disallowed for a certain amount of time after their last successful performance (grenade-throwing, for example). In the ideal world, this information about "last execution time" would be stored in the persistently allocated grenade behavior. However, as that storage in the above scheme is only temporarily allocated, we need somewhere else to store the persistent behavior data.

There is an even worse example - what about per-target persistent behavior state? Consider the search behavior. Search would like to indicate when it fails in its operation on a particular target. This lets the actor know to forget about that target and concentrate its efforts elsewhere. However, this doesn't preclude the actor going and searching for a different target - so the behavior cannot simply be turned off once it has failed.

Memory - in the psychological sense of stored information on past actions and events, not in the sense of RAM - presents a problem that is inherent to the tree structure. The solution in any world besides the ideal one is to create a memory pool - or a number of memory pools - outside the tree to act as its storage proxy.

When we consider our memory needs more generally, we can quickly distinguish at least four different categories:

  • Per-behavior (persistent): grenade throws, recent vehicle actions
  • Per-behavior (short-term): state lost when the behavior finishes
  • Per-object: perception information, last seen position, last seen orientation
  • Per-object per-behavior: last-meleed time, search failures, pathfinding-to failures

Figure 5: the anatomy of memory
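The four categories might be laid out as in the following sketch. All field and class names here are invented for illustration; the paper does not show the actual data structures. Persistent per-behavior state lives directly on the actor, volatile state lives in the temporary chunks, and the per-object categories live in a per-target record (the "prop" discussed below).

```python
# Hypothetical layout of the four memory categories from Figure 5.
# All names are illustrative, not from the Halo 2 codebase.

class Prop:
    """Per-object record: perception state plus per-object per-behavior state."""
    def __init__(self):
        self.last_seen_position = None  # per-object (perception)
        self.last_meleed_time = None    # per-object per-behavior
        self.search_failed = False      # per-object per-behavior

class Actor:
    def __init__(self):
        self.last_grenade_time = 0.0    # per-behavior, persistent
        self.behavior_state = []        # per-behavior, short-term (volatile)
        self.props = {}                 # one Prop per known object

    def can_throw_grenade(self, now, cooldown=10.0):
        # Persistent state survives the grenade behavior being deallocated,
        # so the cooldown works even though the behavior itself is transient.
        return now - self.last_grenade_time >= cooldown

a = Actor()
a.last_grenade_time = 5.0
```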

The first type is the easiest - we can simply define named variables inside the actor object that particular behaviors know how to read or manipulate. Needless to say, the number of behaviors that actually NEED to keep persistent state is best kept to a minimum, and the state they DO keep is best kept small (otherwise we simply run into the same problem of exploding memory usage). The second type is the type we have been discussing. This is the volatile behavior state that is allocated and deallocated as the particular behavior starts and stops.

Things become more complicated with the third and fourth types of memory. What they suggest is that there needs to be an actor-internal reference representation for each target the actor can consider. Indeed, having such a representation has a lot of benefits.

The benefits on the perception side have already been discussed at length in [Greisemer02] and [Burke01]. In Halo 2, these representations are called "props", and their primary function is as a repository for perceptual information on objects in the world. Having this state information (position, orientation, pathfinding location, etc.) distinct from the actual world-state and gated by the actor's perception filter (an actor should not be able to see through walls, for example) allows the two representations to occasionally diverge - thus the actor can believe things that are not true, and we now enter the realm of AI that can be tricked, confused, surprised, disappointed, etc. It is on the basis of this believed state, of course, that the actor will be making most of its decisions.

What is new here is that there are benefits as well on the behavior side, as the "prop" can act as a convenient storage location for per-object per-behavior memory. Keeping this behavior state in the same location as the perception history also allows us to conveniently correlate the two, thus making it efficient to answer questions like "have I already searched for the enemy I'm hearing?" As before, the fewer behaviors that actually need to keep per-object persistent storage, the better, and that storage needs to be kept small.
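The perception-gating idea can be sketched in a few lines (the update function and field names are hypothetical): the prop is updated only when the actor can currently perceive the target, so the believed state is deliberately allowed to go stale and diverge from the world state.

```python
# Hypothetical sketch of perception-gated belief. The prop is only updated
# when the actor can perceive the target, so belief can diverge from truth -
# which is what lets the AI be tricked, confused, or surprised.

def update_prop(prop, target_true_position, can_see):
    if can_see:
        prop["last_seen_position"] = target_true_position
    # If the target is not visible, the stale belief is left in place.

prop = {"last_seen_position": None}
update_prop(prop, (10, 0), can_see=True)   # target spotted at (10, 0)
update_prop(prop, (50, 0), can_see=False)  # target moves behind a wall

# The actor still believes the target is at (10, 0), and will act (search,
# throw grenades, take cover) on that presumed position.
```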

The "prop" representation is one of the cornerstones of the Halo 2 AI, and essentially forms the entirety of the AI's knowledge model - its view and understanding of the world around it. This model unfortunately remains extremely rudimentary. It is, after all, simply a flat list of object references, with no form of spatial-relation (is-next-to or is-on-top-of) or structural-relation information (is-a-part-of) and very little time-based analysis (a series of position-readings suggesting a certain trajectory, for example). Furthermore, a major limitation of our implementation is that only potential targets are allowed on the list, like bipeds and vehicles, thus excluding other potentially behavior-relevant objects, such as pathfinding obstacles, machines, elevators and weapons.

Nonetheless, giving AI an internal mental image of its world not only results in interesting and more "realistic" behavior, it also allows us to overcome one of the major storage problems associated with the behavior tree.

Designer Control

Let us consider complexity from another point of view, namely, that of usability and directability. In this case, we are concerned not with whether the AI is acting believably, but rather with how easy it is for the users of the AI system - the level designers - to make use of the system to put together a dramatic and fun experience for the player.

This may seem like a dramatic shift in pace, but keep in mind that this is an area that is equally beset by the problems of complexity. Consider, for example, the problem of parameter creep. There are many different types of characters. There are many different behaviors that each of them can execute. Each of those behaviors is controlled by a small number of parameters. Combine these factors and what we have is an explosion of inscrutable floats. Which one of the hundreds of potential numbers is it that is making a particular enemy "feel wrong"? It is very difficult to tell indeed.

The designer needs to tell the AI what to do - but at what level? Clearly we are not interested in a scripting system in which the designer specifies EVERYTHING the AI does and where it goes - that would be too complex. We do need, however, the AI to be able to handle high-level direction: direct them to behave generally aggressively, or generally cowardly. Similarly, when it comes to position-control, we want the direction to be vague: "occupy this general area".

As in the preceding sections, the solution to all these problems lies in a few extremely useful representations.

Position Direction: Orders and Firing Positions

As in the case of Halo 1 (see [Greisemer02]), AI position is controlled through designer-placed firing positions. Firing positions are simply discrete points which the AI can consider as destinations when performing spatial behaviors. For example, if a target takes cover behind an obstacle, the AI can try to uncover the target by going to a firing position which has a clear line of sight to the target's current presumed position ("presumed" because the target may have moved). Similarly when running the fight_behavior, an appropriate firing position is chosen from which to shoot at the target (a position which again has a clear line of sight, and which also puts the AI at an appropriate range from the target based on the kind of weapon being used and other factors).
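A firing-position query of this kind might look like the following sketch. Everything here is hypothetical (the function name, the range parameters, and the stand-in line_of_sight callback, which in the real engine would be a raycast against level geometry): filter the designer-placed points by line of sight to the presumed target position and by weapon-appropriate range, then pick one.

```python
# Hypothetical firing-position filter. line_of_sight() stands in for a real
# visibility query against the level geometry; all names are illustrative.

import math

def pick_firing_position(points, presumed_target, line_of_sight,
                         min_range, max_range):
    """Return the nearest point with clear line of sight to the target's
    presumed position, within the weapon's preferred range band."""
    best = None
    for p in points:
        d = math.dist(p, presumed_target)
        if min_range <= d <= max_range and line_of_sight(p, presumed_target):
            if best is None or d < best[0]:
                best = (d, p)
    return best[1] if best else None

points = [(0, 0), (5, 0), (30, 0)]
target = (10, 0)                       # presumed, not necessarily actual
los = lambda a, b: a != (0, 0)         # pretend (0, 0) has no line of sight
pos = pick_firing_position(points, target, los, min_range=3, max_range=15)
```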

Firing positions become an extremely useful control mechanism when we begin to script the set of firing positions available to the AI at a given time. In Halo 1, AIs were grouped into encounters, which also contained a set of firing positions. Various subsets of this set were made available to the AI depending on the state of their encounter (Have many of their allies been killed? Are they winning? Are they losing? This was a mapping that was created by a designer). In Halo 2, the basic ideas remain the same, although the representations are different. Instead of having a single encounter structure, we now have squads (groupings of AI) and areas (groupings of firing positions). Forming the mapping between the two is a new structure called the order.

Fundamentally, an order is simply a reference to a grouping of firing positions. When the order is "assigned" to a squad, the firing positions referenced by the order become available to the AI in the squad. This is a simple mechanism, which is made slightly more complex by the fact that orders also incorporate some rudimentary scripting functionality that allows for automatic transitioning between orders. A set number of possible trigger-types are available to the designers (for example, "have x or more squad members been killed?" "has the squad seen the player?"). When a trigger condition is satisfied, the squad is assigned a new order associated with that trigger. Thus, designers can script the general flow of a battle using very simple high-level representations.
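This trigger mechanism can be sketched as a small data-driven state machine. The order names, area names, and trigger types below are invented for the example (the text mentions triggers like "have x or more squad members been killed?" and "has the squad seen the player?"):

```python
# Hypothetical sketch of order scripting: each order references areas of
# firing positions and carries triggers that switch the squad to a
# follow-on order. All names are illustrative.

orders = {
    "initial":      {"areas": ["courtyard"],
                     "triggers": [("casualties_at_least", 2, "fallback"),
                                  ("player_seen", None, "push_forward")]},
    "push_forward": {"areas": ["bridge"],  "triggers": []},
    "fallback":     {"areas": ["bunker"],  "triggers": []},
}

def evaluate_triggers(order_name, squad):
    """Return the next order for the squad, or the current one if no
    trigger condition is satisfied."""
    for kind, arg, next_order in orders[order_name]["triggers"]:
        if kind == "casualties_at_least" and squad["casualties"] >= arg:
            return next_order
        if kind == "player_seen" and squad["player_seen"]:
            return next_order
    return order_name

squad = {"casualties": 2, "player_seen": False}
current = evaluate_triggers("initial", squad)  # squad falls back
```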

Behavior Direction: Orders and Styles

The idea behind the term "order" is that it should indeed embody the same level of direction as an order given by a company commander to his soldiers. "We're going to take that hill!" "We're going to occupy that bunker and hole up until the cavalry arrives." Most of these orders are of the "go here and do this" variety, or, to be more precise, "go here and behave this way."

So far we have only described how our order representation encodes the first part of that directive. However, the order is a fantastically useful level of representation for the second part as well. In Halo 2, designers can, for example, allow or disallow vehicle use, engage stealth and control the rules of engagement (don't-fire-until-fired-upon, versus free-for-all) through special-purpose flags contained in the order.

Figure 6: Orders and styles. A squad starting with the order named "initial" can transition to either the "push forward" or "fallback" orders, depending on how the battle goes. Each order references its own set of firing positions and its own style, which indicates whether to behave "aggressively", "defensively", "recklessly" etc.

Orders influence behavior in another important way: they reference a style. The style represents the final and perhaps most direct mechanism through which we can control the structure of an AI's behavior tree. The style is really just a list of allowed and disallowed behaviors. Just as a behavior cannot be considered if its tag does not match the actor's current state, a behavior cannot be considered unless it is explicitly allowed by the order's style.

Given the directness of the style mechanism, it is a very powerful and very dangerous tool. In particular, it is possible to give the AI a style which will literally not allow the AI to run ANY behavior, or which will leave the AI in such a debilitated state that its behavior appears essentially random. For these reasons, styles are not generally edited per-encounter. Instead, the designers have a small style library to choose from when setting up an order (each style in the library having gotten the seal of approval from both the lead designer and the AI programmer). But this caveat aside, styles allow for some interesting variability. Defensive styles do not allow charge or search behaviors. Aggressive styles do not allow self-preservation. A noncombatant style would not allow any combat behaviors at all, instead allowing only idle or retreat behaviors. Styles also allow the designer to skew some of the parameters controlling behavior one way or the other (for example, allowing characters to flee more easily in a cowardly style).
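The double gate described above - a behavior must match the actor's state tag AND be allowed by the current style - might be sketched like this (style names and behavior names are invented for the example):

```python
# Hypothetical sketch of the style filter. A behavior is considered only if
# its tag matches the actor's state AND the order's style allows it.
# Style contents are illustrative, loosely following the text's examples.

STYLES = {
    "defensive":    {"allowed": {"guard", "retreat", "uncover", "idle"}},
    "aggressive":   {"allowed": {"charge", "fight", "grenade", "idle"}},
    "noncombatant": {"allowed": {"idle", "retreat"}},
}

def can_consider(behavior, behavior_tag, actor_state, style_name):
    if behavior_tag != actor_state:          # state-tag gate
        return False
    return behavior in STYLES[style_name]["allowed"]  # style gate

ok = can_consider("charge", "combat", "combat", "aggressive")
blocked = can_consider("charge", "combat", "combat", "defensive")
```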

Orders and styles are one of the principal ways in which we allow for encounter-to-encounter variability in the gameplay. Using these two tools, the designer can make the same AI feel and play quite differently from one moment to the next - presumably in accordance with the dramatic needs of the story and the level progression.

Parameter creep

Figure 7: the character hierarchy

We will discuss a final problem related to complexity that faces our designers, a problem that we have already mentioned, namely that of parameter creep. The tendency in first authoring a behavior is to allow great customizability through the use of any number (usually about three to five) of designer-edited parameters. However, take three parameters, times 115 or so behaviors, times 30 or so character types (including fundamental types and variants, such as red versus white elites) and we have about 10,350 different numbers we need to maintain!

Clearly this is not a tenable situation. We can greatly reduce this burden on the designer, however, if we remember design principle #4: take something that works and then vary from it.

The greatest source of character types in terms of sheer number is the existence of character variants. A white elite fights more aggressively and is tougher than an ordinary red elite, and so is a different character type. In all other respects, however, the white elite and the red elite are identical. It is therefore a waste to have to create an entirely new full set of behavior parameters when we're really only interested in the fight and vitality parameters. What we therefore have is a system which allows us to define only those parameters that are truly distinctive to a character, and then to rely on a "parent" character for the rest.

All character and behavior parameters are contained in a .character file. This file gives a character name and also specifies which geometric model to use for the body. When a designer places an AI, he or she first chooses the .character file to use.

The character file is not, however, a flat list of parameters. It is instead a list of blocks of parameters, each block grouped in a logical way to control certain aspects of behavior - the self-preservation block, the combat block, the weapon-firing block, etc. Not all blocks need be present in all character files. When an AI is attempting to run a particular behavior only to find the relevant block missing from its character file, it looks instead in the referenced parent character file. If that file does not have the block, then its parent is examined, and so on. Thus a character hierarchy is formed, in which each child defines only significant variations from its parent. The root of the entire tree is the generic character, which should define "reasonable" parameter values for all blocks.
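The parent-chain lookup can be sketched as follows. The character names, block names, and parameter values are invented for illustration (the text's actual examples are the self-preservation, combat, and weapon-firing blocks, and the red/white elite variants):

```python
# Hypothetical sketch of the .character parent-chain lookup. Each character
# defines only the blocks that differ from its parent; missing blocks are
# resolved by walking up toward the generic root. All values illustrative.

CHARACTERS = {
    "generic":     {"parent": None,
                    "blocks": {"combat": {"aggressiveness": 0.5},
                               "self_preservation": {"flee_threshold": 0.3}}},
    "elite_red":   {"parent": "generic",
                    "blocks": {"combat": {"aggressiveness": 0.7}}},
    "elite_white": {"parent": "elite_red",
                    "blocks": {"combat": {"aggressiveness": 0.9}}},
}

def find_block(character, block_name):
    """Walk up the parent chain until some character defines the block."""
    while character is not None:
        blocks = CHARACTERS[character]["blocks"]
        if block_name in blocks:
            return blocks[block_name]
        character = CHARACTERS[character]["parent"]
    return None  # not even the generic root defines it

aggr = find_block("elite_white", "combat")["aggressiveness"]  # own block
flee = find_block("elite_white", "self_preservation")["flee_threshold"]  # inherited
```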


The "design principles" listed in this paper are a rather transparent attempt to impose a structure on what might otherwise appear to be a random grab-bag of ideas - interesting, perhaps, in and of themselves but not terribly cohesive as a whole. In conclusion, we hope only to drive home two major points: first, that complexity is paid for in many ways, including run-time, implementation, user-experience and usability. And second, that the key to tackling the complexity problem is always the question of representation. All of the tricks described here in some way or another involve the manipulation of a convenient representation structure - be it the behavior DAG, the order/style system or the character hierarchy. This is a fitting "realization" for an AI paper, since it is, of course, nothing more than the recapitulation of an idea that academic AI has known for a long time: that hard problems can be rendered trivial through judicious use of the right representation.


[Alt04] G. Alt, "The Suffering: A Game AI Case Study", in the proceedings of the Challenges in Game AI Workshop, Nineteenth National Conference on Artificial Intelligence (AAAI), 2004.

[Burke01] R. Burke, D. Isla, M. Downie, Y. Ivanov, and B. Blumberg, "CreatureSmarts: The Art and Architecture of a Virtual Brain", in the proceedings of the Game Developers Conference, San Jose, CA, 2001.

[Greisemer02] J. Greisemer, C. Butcher, "The Integration of AI and Level Design in Halo", in the proceedings of the Game Developers Conference, San Jose, CA, 2002.

[Lenat95] D. Lenat, "Cyc: A Large-Scale Investment in Knowledge Infrastructure", Communications of the ACM 38, no. 11, November 1995.

[Stork99] D. Stork, "The Open Mind Initiative", IEEE Expert Systems and Their Applications, pp. 16-20, May/June 1999.


