Coordinated Unit Movement

By Dave Pottinger
How many times have you been sitting in rush-hour traffic thinking, "Hey, I know where I want to go. And I'm sure everyone around me knows where they want to go, too. If we could just work together, I'll bet we would all get where we wanted to go a lot easier, faster, and without rear-ending each other"? As your frustration rises, you realize that impatient commuters aren't the most cooperative people. However, if you're a game player, uncooperative resource gatherers and infantry are probably even more frustrating than a real-life traffic jam. Figuring out how to get hundreds of units moving around a complex game map in real time - commonly referred to as pathfinding - is a tough task. While pathfinding is a hot industry buzzword, it's only half of the solution. Movement, the execution of a given path, is the other half of the solution. For real-time strategy games, this movement goes hand in hand with pathfinding. An axeman certainly needs a plan (as in, a path) for how he's going to get from one side of his town to the other to help stave off the enemy invasion. If he doesn't execute that plan using a good movement system, however, all may be lost.
Game Developer has already visited the topic of pathfinding in such past articles as "Smart Move: Path-Finding" by Brian Stout (October/November 1996) and "Real-Time Pathfinding for Multiple Objects" by Swen Vincke (June 1997). Rather than go over the same material, I'll approach the problem from the other side by examining the ways to execute a path that's already been found. In this article, I'll cover the basic components of an effective movement system. In a companion article in next month's Game Developer, I'll extend these basic concepts to cover higher-order movement and implementation. Though the examples in these articles focus mainly on a real-time strategy game, the methods I'll describe can easily be applied to other genres.
Movement Issues Facing Game Developers
Before we dive into coordinated unit movement, let's take a look at some of the movement issues facing game developers today. Most of these have to do with minimizing CPU load versus maximizing the accuracy and intelligence of the movement.
Moving one unit versus moving multiple units. Moving one unit is generally pretty simple, but methods that work well for one unit rarely scale up effortlessly for application to hundreds of units. If you're designing a system for hundreds of units, it will need to be very conservative in its CPU use.
Some movement features are CPU intensive. Very few games that move hundreds of units support advanced behavior such as modeling the acceleration and deceleration of these units. The movement of large ships and heavily armored units has a lot more realism with acceleration and deceleration, but that realism comes at a high cost in terms of extra CPU usage. The actual movement calculation becomes more complicated because you have to apply the time differential to the acceleration to create the new velocity. As we extend our movement system to handle prediction, we'll see that acceleration and deceleration complicate these calculations as well. Modeling a turn radius is also difficult because many pathfinding algorithms are not able to take turn radii into account at all. Thus, even though a unit can find a path, it may not be able to follow that path because of turn radius restrictions. Most systems overcome this deficiency by slowing the unit down to make a sharp turn, but this involves an extra set of calculations.
Different lengths for the main game update loop. Most games use the length of the last pass through the update loop as an indication of how much time to simulate during the next update pass. But such a solution creates a problem for unit movement systems because these lengths vary from one update to the next (see Figure 1 below). Unit movement algorithms work much better with nice, consistent simulation intervals. A good update-smoothing system can alleviate this problem quite a bit.
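As an illustration, such an update smoother might average the last few raw frame times so the movement code sees a steadier step. This is a minimal sketch; the moving-average approach, the class name, and the window size of eight are assumptions for illustration, not the article's implementation:

```python
from collections import deque

class UpdateSmoother:
    """Smooths varying frame times into a steadier simulation step by
    averaging the last few raw update lengths. The moving-average
    approach and the window size of 8 are illustrative assumptions."""
    def __init__(self, window=8):
        self.samples = deque(maxlen=window)

    def smooth(self, raw_dt):
        self.samples.append(raw_dt)
        return sum(self.samples) / len(self.samples)

smoother = UpdateSmoother()
# A spiky sequence of raw frame times (in seconds) is damped toward the mean.
steps = [smoother.smooth(dt) for dt in (0.016, 0.050, 0.016, 0.017, 0.048)]
```

The smoothed intervals never spike as high as the worst raw frame, which is exactly the property the movement algorithms want.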
Figure 1. Varied update lengths cause units to move differing distances each update.
Sorting out unit collisions. Once units come into contact with one another, how do you get them apart again? The naïve solution is just never to allow units to collide in the first place. In practice, though, this requirement demands exacting code that is difficult to write. No matter how much code you write, your units will always find a way to overlap. More importantly, this solution simply isn't practical for good game play; in many cases, units should be allowed to overlap a little. Hand-to-hand combat in Ensemble Studios' recent title Age of Empires should have been just such a case. The restriction of zero collision overlap often makes units walk well out of their way to fight other units, exposing them to needless (not to mention frustrating) additional damage. You'll have to decide how much collision overlap is acceptable for your game and resolve collisions accordingly.
Map complexity. The more complex the map is, the more complicated and difficult good movement will be to create. As game worlds and maps are only getting more intricate and realistic, the requirement for movement that can handle those worlds goes up, too.
Random maps or controlled scenarios? Because you can't hard-code feasible paths, random maps are obviously more difficult to deal with in many cases, including pathfinding. When pathfinding becomes too CPU intensive, the only choice (aside from reducing map complexity or removing random maps) is to decrease the quality of the pathfinding. As the quality of the pathfinding decreases, the quality of the movement system needs to increase to pick up the slack.
Maximum object density. This issue, more than anything, dictates how accurate the movement system must be. If your game has only a handful of moving objects that never really come into contact with one another (as is the case with most any first-person shooter), then you can get away with a relatively simple movement system. However, if you have hundreds of moving objects that need to have collision and movement resolution on the scale of the smallest object (for example, a unit can walk through a small gap between two other units), then the quality and accuracy requirements of your movement system are dramatically raised.
Simple Movement Algorithm
Let's start with some pseudo code for a simple, state-based movement algorithm (Listing 1). While this algorithm doesn't do much more than follow a path and decide to find a new path when a collision is found, it does work equally well for both 2D and 3D games. We'll start in a given state and iterate until we can find a waypoint to move towards. Once we find that point, we break out of the loop and do the movement. There are three states: WaitingForPath, ReachedGoal, and IncrementWaypoint. The movement state for a unit is preserved across game updates in order to allow us to set future events, such as the "automatic" waypoint increment on a future game update. By preserving a unit's movement state, we lessen the chance that a unit will make a decision on the next game update that counters a decision made during the current update. This is the first of several planning steps that we'll introduce.
We assume that we'll be given a path to follow and that the path is accurate and viable (meaning, no collisions) at the time it was given to us. Because most strategy games have relatively large maps, a unit may take several minutes to get all the way across the map. During this time, the map can change in ways that can invalidate the path. So, we do a simple collision check during the state loop. At this point, if we find a collision, we'll just repath. Later on, we'll cover several ways to avoid repathing.
Listing 1. Movement Algorithm in Pseudocode.
Top of movement state loop:

    If we're in IncrementWaypoint state:
        Increment our waypoint.
        If we're on a patrol
            Grab the next waypoint as defined by the patrol direction.
            Set state to WaitingForPath.
        If we're out of waypoints
            Set state to ReachedGoal.
        Set state to WaitingForPath.

    If we're in ReachedGoal state:
        Make the appropriate notifications (if any).
        We're done. Stop the walking animation. Exit function.

    If we're in WaitingForPath state:
        Find a path and save it.
        If we could not find one
            We've failed. Exit function.

Calculate the direction we need to head in to get to our desired waypoint.
Modify that direction by any limitations such as turn radius.
Using that new direction, calculate where we'll end up after this move.
If that new position causes a collision
    Set state to WaitingForPath.
    Jump back to the top of the loop.
Using the current and future position:
    If we're closer to the waypoint before moving
        Set state to IncrementWaypoint.
        Go back to top of loop.
    If we're going to jump over the waypoint during this move
        Set state to IncrementWaypoint.
        Break out of loop.
Set the accelerations accordingly.
Do the actual move.
Set or update any animation hooks that we might have.
Update our predicted positions.
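Listing 1 can be sketched in code. The rendering below is a simplified, hypothetical one: patrols, turn radius, acceleration, and collision checks are omitted, the pathfinder is a stand-in that returns a straight two-waypoint path, and a Moving state (not in Listing 1) is added so the unit's state persists cleanly between updates:

```python
import math
from enum import Enum, auto

class MoveState(Enum):
    WAITING_FOR_PATH = auto()
    INCREMENT_WAYPOINT = auto()
    MOVING = auto()           # not in Listing 1; keeps state between updates
    REACHED_GOAL = auto()

class Unit:
    """A minimal unit following a stripped-down version of Listing 1."""
    def __init__(self, pos, speed):
        self.pos = pos
        self.speed = speed
        self.path = []
        self.waypoint = 0
        self.state = MoveState.WAITING_FOR_PATH

    def find_path(self, goal):
        # Stand-in pathfinder: a straight two-waypoint path.
        return [self.pos, goal]

    def update(self, dt, goal):
        # State loop: iterate until we have a waypoint to move toward.
        while True:
            if self.state is MoveState.REACHED_GOAL:
                return True   # done; caller stops the walking animation
            if self.state is MoveState.WAITING_FOR_PATH:
                self.path = self.find_path(goal)
                self.waypoint = 0
                self.state = MoveState.INCREMENT_WAYPOINT
                continue      # first waypoint is our current position
            if self.state is MoveState.INCREMENT_WAYPOINT:
                self.waypoint += 1
                if self.waypoint >= len(self.path):
                    self.state = MoveState.REACHED_GOAL
                    continue
                self.state = MoveState.MOVING
            break             # MOVING: we have a waypoint to head for
        # Head toward the waypoint; clamp so we never jump over it.
        tx, ty = self.path[self.waypoint]
        dx, dy = tx - self.pos[0], ty - self.pos[1]
        dist = math.hypot(dx, dy)
        step = self.speed * dt
        if step >= dist:      # we'd overshoot: snap and increment next pass
            self.pos = (tx, ty)
            self.state = MoveState.INCREMENT_WAYPOINT
        else:
            self.pos = (self.pos[0] + dx / dist * step,
                        self.pos[1] + dy / dist * step)
        return False

unit = Unit((0.0, 0.0), speed=1.0)
results = [unit.update(1.0, (3.0, 0.0)) for _ in range(4)]
```

Driven with one-second updates, the unit walks three units along the x axis and then reports that it has reached its goal.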
Collision Determination

The basic goal of any collision determination system is to find out if two units have collided. For the time being, we'll represent all collisions as two-entity collisions. We'll cover compound collisions (collisions involving three or more entities) next month. Once a collision is found, each entity needs to know about the collision in order to make appropriate movement decisions.
Basic collision determination for most strategy games consists of treating all units as spheres (circles in 2D) and doing a simple spherical collision check. Whether or not such a system is sufficient depends on the specific requirements of a game. Even if a game implements more complex collision - such as oriented bounding boxes or even low-level polygon to polygon intersection tests - maintaining a total bounding sphere for quick potential collision elimination will usually improve performance.
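The spherical check itself is a single distance comparison. Here is a minimal sketch; the overlap-tolerance parameter echoes the earlier point about allowing units to overlap a little and is an assumption, not part of the article's system:

```python
import math

def spheres_collide(pos_a, radius_a, pos_b, radius_b, overlap_allowed=0.0):
    """Treat both units as spheres and report a collision when they
    overlap by more than the allowed amount. The tolerance parameter
    is an illustrative assumption."""
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    dz = pos_a[2] - pos_b[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Colliding when center distance is less than the (tolerance-reduced)
    # sum of the radii.
    return dist < (radius_a + radius_b - overlap_allowed)
```

For 2D games, the same test works with the z terms dropped; this is also the quick cull you would run before any more expensive bounding-box or polygon-level test.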
There are three distinct entity types to take into account when designing a collision system: the single unit, a group of units, and a formation (see Figure 2 below). Each of these types can work well using a single sphere for quick collision culling (elimination of further collision checks). In fact, the single unit simply uses a sphere for all of its collision checking. The group and the formation require a bit more work, though.
For a group of units, the acceptable minimum is to check each unit in the group for a collision. By itself, this method will allow a non-grouped unit to sit happily in the middle of your group. For our purposes, we can overlook this discrepancy, because formations will provide the additional, more rigid collision checking. Groups also have the ability to be reshaped at any time to accommodate tight quarters, so it's actually a good idea to keep group collision checking as simple as possible.
A formation requires the same checks as a group, but these checks must further ensure that there are no internal collisions within the formation. If a formation has space between some of its units, it is unacceptable for a non-formed unit to occupy that space. Additionally, formations generally don't have the option to reshape or break. However, it's probably a good idea to implement some game rules that allow formations to break and reform on the other side of an obstacle if no path around the obstacle can be found.
For our system, we'll also keep track of the timing of the collision. Immediate collisions represent collisions currently existing between two objects. Future collisions will happen at a specified point in the future (assuming neither of the objects changes its predicted movement behavior). In all cases, immediate collisions have a higher resolution priority than future collisions. We'll also track the state of each collision as unresolved, resolving, or resolved.
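A collision record carrying this timing and state information might look like the following sketch; the field names and layout are illustrative, not from the article:

```python
from dataclasses import dataclass
from enum import Enum, auto

class CollisionState(Enum):
    UNRESOLVED = auto()
    RESOLVING = auto()
    RESOLVED = auto()

@dataclass
class Collision:
    """A two-entity collision record with the timing and resolution
    state described above. Field names are illustrative."""
    entity_a: int
    entity_b: int
    time: float    # 0.0 = immediate; > 0.0 = predicted future collision
    state: CollisionState = CollisionState.UNRESOLVED

    @property
    def immediate(self):
        return self.time <= 0.0

now_hit = Collision(entity_a=1, entity_b=2, time=0.0)
future_hit = Collision(entity_a=1, entity_b=3, time=0.75)
```

The `immediate` flag is what gives currently overlapping units resolution priority over collisions that are still in the future.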
Discrete vs. Continuous Simulation
Most movement algorithms are discrete in nature. That is, they move the unit from point A to point B without considering what might be between those two points, whereas a continuous simulation would consider the volume between the two points as well. In a lag-ridden Internet game, fast moving units can move quite a distance in a single game update. When discrete simulations are coupled with these long updates, units can actually hop over other objects with which they should have collided. In the case of a resource gathering unit, no one really minds too much. But players rarely want enemy units to be able to walk through a wall. While most games work around this problem by limiting the length of a unit's move, this discrete simulation problem is relatively easy to solve (see Figure 3 below).
One way to solve the problem is to sub-sample each move into a series of several smaller moves. Taking the size of the moving unit into account, we make the sampling interval small enough to guarantee that no other unit can fit between two of the sample points. We then run each of those points through the collision determination system. Calculating all of those points and collisions may seem overly expensive, but later on we'll see a potential way to offset most of that cost.
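A possible sub-sampling helper is sketched below. The spacing formula (mover radius plus the smallest unit radius, so any unit along the segment is within collision range of some sample point) is one conservative choice, not necessarily the article's; parameter names are illustrative:

```python
import math

def sample_move(start, end, mover_radius, smallest_radius):
    """Sub-sample a discrete 2D move into points spaced tightly enough
    that no unit of smallest_radius can slip between adjacent samples.
    The spacing formula is one conservative, illustrative choice."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    spacing = mover_radius + smallest_radius
    steps = max(1, math.ceil(length / spacing))
    # Include both endpoints so the whole move line is covered.
    return [(start[0] + dx * i / steps, start[1] + dy * i / steps)
            for i in range(steps + 1)]

# A 10-unit move checked at 2-unit intervals: 6 sample points.
pts = sample_move((0.0, 0.0), (10.0, 0.0), mover_radius=1.0, smallest_radius=1.0)
```

Each returned point would then be fed through the normal spherical collision check.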
Another method is to create what we'll call a move line. A move line represents the unit's move as a line segment starting at point A and ending at point B. This system creates no extra data, but the collision check does have an increase in complexity; we must convert from a simple spherical collision check to a more expensive calculation that involves finding the distance from a point to a line segment. Most 3D games have already implemented a fast hierarchical system for visible object culling, so we can reuse that for collision culling. By quickly narrowing down the number of potential collisions, we can afford to spend more time checking collisions against a small set of objects.
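The core of a move-line check is the point-to-segment distance the paragraph mentions. A sketch, with a wrapper that treats the move as a swept capsule:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b in 2D."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0.0:                # degenerate: the move has zero length
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment and clamp to its endpoints.
    t = ((px - ax) * abx + (py - ay) * aby) / ab2
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def move_line_collides(p, radius_p, a, b, radius_mover):
    # The mover sweeps a capsule of radius_mover along the segment a-b.
    return point_segment_distance(p, a, b) < radius_p + radius_mover
```

This replaces the per-sample sphere checks of the sub-sampling approach with one slightly more expensive test per potential collision.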
Now that we have a simple movement algorithm and a list of unit collisions, what else do we need to get decent unit cooperation? Position prediction.
Predicted Positions

Predicted positions are simply a set of positions (with associated orientations and time stamps) that indicate where an object will be in the future (see Figure 4 below). A movement system can calculate these positions using the same movement algorithm that's used to move the object. The more accurate these positions are, the more useful they are. Position prediction isn't immediately free, though, so let's look at how to offset the additional CPU usage.
The most obvious optimization is to avoid recalculating all of your predicted positions at every frame. A simple rolling list works well (see Figure 5 below); you can roll off the positions that are now in the past and add a few new positions each frame to keep the prediction envelope at the same scale. While this optimization doesn't get rid of the start-up cost of creating a complete set of prediction positions the first time you move, it does have constant time for the remainder of the movement.
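The rolling list might look like the sketch below. The per-step predictor is a plug-in function, and the class name, parameters, and record format are assumptions for illustration:

```python
from collections import deque

class PredictionList:
    """Rolling list of time-stamped predicted positions.

    A sketch, not the article's implementation: each frame we drop
    predictions now in the past and append new ones so the prediction
    envelope keeps a constant length. The per-step predictor is a
    plug-in function (pos, dt) -> next_pos.
    """
    def __init__(self, predict_fn, horizon_steps, step_dt):
        self.predict_fn = predict_fn
        self.horizon = horizon_steps
        self.step_dt = step_dt
        self.points = deque()   # (time, position) pairs, oldest first

    def build(self, pos, now):
        # Pay the start-up cost: compute a full envelope once.
        self.points.clear()
        t = now
        for _ in range(self.horizon):
            t += self.step_dt
            pos = self.predict_fn(pos, self.step_dt)
            self.points.append((t, pos))

    def roll(self, now):
        # Drop stale predictions, then extend from the newest one.
        while self.points and self.points[0][0] <= now:
            self.points.popleft()
        if not self.points:
            return              # fully stale; the caller should rebuild
        while len(self.points) < self.horizon:
            t, pos = self.points[-1]
            self.points.append((t + self.step_dt,
                                self.predict_fn(pos, self.step_dt)))

# Example: a constant-velocity predictor moving 2 units/sec along x.
predict = lambda pos, dt: (pos[0] + 2.0 * dt, pos[1])
pl = PredictionList(predict, horizon_steps=4, step_dt=0.1)
pl.build((0.0, 0.0), now=0.0)
envelope_before = list(pl.points)
pl.roll(now=0.15)   # one stale point dropped, one new point appended
```

After the initial `build`, each frame's `roll` does a constant amount of work regardless of how long the unit has been moving.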
Figure 4. A closer look at the predicted positions.
The next optimization is to create a prediction system that handles both points and lines. Because our collision determination system already supports points and lines, it should be easy to add this support to our prediction system. If a unit is traveling in a straight line, we can designate an enclosed volume by using the current position, a future position, and the unit's soft movement radius. However, if the object has a turn radius, things get a little more complicated. You can try to store the curve as a function, but that's too costly. Instead, you're better off doing point sampling to create the right predicted points (see Figure 6 below). In the end, you really want a system that seamlessly supports both point and line predictions, using the lines wherever possible to cut down on the CPU cost.
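Point-sampling a turn could look like the following sketch, which samples the circular arc the unit sweeps out; all parameter names are illustrative, and the spacing bound is an assumption:

```python
import math

def sample_turn_arc(center, radius, start_angle, turn_angle, max_spacing):
    """Point-sample the circular arc a turning unit sweeps out, instead
    of storing the curve as a function. Adjacent samples stay within
    max_spacing of each other along the arc; names are illustrative."""
    arc_len = abs(turn_angle) * radius
    steps = max(1, math.ceil(arc_len / max_spacing))
    return [(center[0] + radius * math.cos(start_angle + turn_angle * i / steps),
             center[1] + radius * math.sin(start_angle + turn_angle * i / steps))
            for i in range(steps + 1)]

# A quarter turn of radius 2, sampled every half unit of arc length.
arc_pts = sample_turn_arc((0.0, 0.0), 2.0, 0.0, math.pi / 2, max_spacing=0.5)
```

Straight stretches of the path would still use the cheaper line predictions; the arc sampling only kicks in while the unit is actually turning.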
Figure 5. Rolling list of predicted positions.
The last optimization we'll cover is important and perhaps a little nonintuitive. If we're going to get this predicted system with as little overhead as possible, we don't want to duplicate our calculations for every unit by predicting its position and then doing another calculation to move it. Thus, the solution is to predict positions accurately, and then use those positions to move the object. This way, we're only calculating each move once, so there's no extra cost aside from the aforementioned extra start-up time.
Figure 6. Using predicted positions with a turn radius.
In the actual implementation, you'll probably just pick a single update length to do the prediction. Of course, it's fairly unlikely that all of the future updates will be consistent. If you blindly move the unit from one predicted position to the next without any regard for what the actual update length currently is, you're bound to run into some problems. Some games (or some subset of objects in a game) can accept this inaccuracy. Those of us developing all the other games will end up adding some interpolation so that we can quickly adjust a series of predicted points that isn't completely accurate. You also need to recognize when you're continually adjusting a series of predicted positions so that you can cut your losses and just recalculate the entire series.
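One way to do that adjustment is simple linear interpolation between time-stamped samples when the real update time lands between two predicted positions. A sketch, with an assumed (time, position) record format:

```python
def interpolate_predictions(points, t):
    """Linearly interpolate a position when the actual update time lands
    between two predicted samples. points is a time-sorted list of
    (time, (x, y)) pairs; the format is an assumption for this sketch."""
    if t <= points[0][0]:
        return points[0][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return (p0[0] + (p1[0] - p0[0]) * u,
                    p0[1] + (p1[1] - p0[1]) * u)
    return points[-1][1]   # past the envelope; time to recalculate it

# Predicted samples one second apart for a unit moving 2 units/sec.
pred = [(0.0, (0.0, 0.0)), (1.0, (2.0, 0.0)), (2.0, (4.0, 0.0))]
```

Falling off the end of the list is exactly the "cut your losses" case: the whole series should be recalculated rather than patched further.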
Most of the rest of the implementation difficulties arise from the fact that we use these predicted positions in collision detection just as we do for the object's actual current position. You should easily see the combinatorial explosion that's created by comparing predicted positions for all units in a given area. However, in order to have good coordinated unit movement, we have to know where units are going to be in the near future and what other units they're likely to hit. This takes a good, fast collision determination system. As with most aspects of a 3D engine, the big optimizations come from quickly eliminating potential interactions, thus allowing you to spend more CPU cycles on the most probable interactions.
Unit-to-Unit Cooperation
We've created a complex system for determining where an object is going to be in the future. It supports 3D movement, it doesn't take up much more CPU time than a simple system, and it provides an accurate list of everything we expect a unit to run into in the near future. Now we get to the fun part.
If we do our job well, most of the collisions that we must deal with are future collisions (because we avoid most of the immediate collisions before they even happen). While the baseline approach for any future collision is to stop and repath, it's important to avoid firing up the pathfinder as much as possible.
The following set of collision resolution rules is a complete breakdown of how to approach the problem of unit-to-unit collision resolution (from a unit's frame of reference).
Case 1. If both units are not moving:
- If we're the lower-priority unit, don't do anything of our own volition.
- If we're the higher-priority unit, figure out which unit (if any) is going to move and tell that unit to make the shortest move possible to resolve the hard collision. Change the collision state to resolving.
Case 2. If we're not moving, and the other unit is moving, we don't do anything.
Case 3. If we're moving and the other unit is stopped:
- If we're the higher-priority unit, and the lower priority unit can get out of the way, calculate our "get-to point" (the point we need to get to in order to be past the collision) and tell the lower-priority unit to move out of our way (see Figure 7 below). Change the collision state to resolving.
Figure 7. Resolving a collision between a moving unit and a stopped unit.
- Else, if we can avoid the other unit, avoid the other unit and resolve the collision.
- Else, if we're the higher-priority unit and we can push the lower-priority unit along our path, push the lower-priority unit. Change the collision state to resolving.
- Else, stop, repath, and resolve the collision.
Case 4. If we're moving and the other unit is moving:
- If we're the lower-priority unit, don't do anything.
- If collision with hard radius overlap is inevitable and we're the higher-priority unit, tell the lower-priority unit to pause, and go to Case 3.
- Else, if we're the higher-priority unit, calculate our get-to point and tell the lower-priority unit to slow down enough to avoid the collision.
- If we're the unit that's moving in order to resolve a Case 1 collision and we've reached our desired point, resolve the collision.
- If we're the Case 3.1 lower-priority unit and the higher-priority unit has passed its get-to point, start returning to the previous position and resolve the collision.
- If we're the Case 3.1 higher-priority unit, wait (slow down or stop) until the lower-priority unit has gotten out of the way, then continue.
- If we're the Case 3.3 higher-priority unit and the lower-priority unit can now get out of the way, go to Case 3.1.
- If we're the Case 4.3 lower-priority unit and the higher-priority unit has passed its get-to point, resume normal speed and resolve the collision.
One of the key components of coordinated unit movement is to prioritize and resolve disputes. Without a solid, well-defined priority system, you're likely to see units doing a merry-go-round dance as each demands that the other move out of its way; no one unit has the ability to say no to a demand. The priority system also has to take the collision severity into account. A simple heuristic is to take the highest-priority hard collision and resolve down through all of the other hard collisions before considering any soft collisions. If the hard collisions are far enough in the future, though, you might want to spend some time resolving more immediate soft collisions. Depending on the game, the resolution mechanism might also need to scale based on unit density. If a huge melee battle is creating several compound hard collisions between some swordsmen, you're better served spending your CPU time resolving all of those combat collisions than resolving a soft collision between two of your resource gatherers on a distant area of the map. An added bonus to tracking these areas of high collision density is that you can influence the pathfinding of other units away from those areas.
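The heuristic in this paragraph (hard collisions before soft ones, highest priority first, imminence as a tiebreaker) might be sketched as a single sort key; the record layout is an assumption for illustration:

```python
def resolution_order(collisions):
    """Sort collisions by the resolution heuristic described above:
    hard collisions before soft ones, higher priority first, and more
    imminent collisions first as a tiebreaker. Each record is a dict
    with 'hard' (bool), 'priority' (higher = more important), and
    'time' (0.0 = immediate); this layout is illustrative only."""
    return sorted(collisions,
                  key=lambda c: (not c['hard'], -c['priority'], c['time']))

collisions = [
    {'id': 'soft',   'hard': False, 'priority': 5, 'time': 0.0},
    {'id': 'h-low',  'hard': True,  'priority': 1, 'time': 0.5},
    {'id': 'h-high', 'hard': True,  'priority': 9, 'time': 1.0},
]
ordered = resolution_order(collisions)
```

A density-aware variant would adjust the key per game, for example deprioritizing soft collisions in distant, low-stakes areas of the map.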
Planning is a key element of unit cooperation. All of these predictions and calculations should be as accurate as possible. Inevitably, though, things will go wrong. One of the biggest mistakes we made with Age of Empires' movement was to make every decision within a single frame of reference. Every decision was always made correctly, but we didn't carry that information into future updates. As a result, we ended up with units that would make a decision, encounter a problem during the execution of that decision, and then make a decision that sent them right back on their original path, only to start the whole cycle over again the next update. Planning fixes this. We keep the old, resolved collisions around long enough (as defined by some game-specific heuristic) that we can reference them should we get into a predicament in the future. When we execute an avoidance maneuver, for example, we remember which object it is that we're avoiding. Because we'll have created a viable resolution plan, there's no reason to do collision checking with the other unit in the collision unless one of the units gets a new order or some other drastic change takes place. Once we're done with the avoidance maneuver, we can resume normal collision checking with the other unit. As you'll see next month, we'll reuse this planning concept over and over again to accomplish our goals.
Simple games are a thing of the past; so is simple movement. We've covered the basic components necessary for creating a solid, extensible movement system: a state-based movement algorithm, a scalable collision determination system, and a fast position prediction system. All of these components work together to create a deterministic plan for collision resolution.
Next month, we'll extend these concepts to cover higher-order movement topics, such as group movement, full-blown formation movement, and compound collision resolution. I'll also go into more detail about some implementation specifics that help solve some of the classic movement problems.
For Further Info
- Take a look at Craig W. Reynolds' Boids work at http://hmt.com/cwr/boids.html.
- Steven Woodcock's Game AI web site is located at http://www.cris.com/~swoodcoc/ai.html.
- Also see Patrick Winston. Artificial Intelligence, 3rd ed. (Addison-Wesley, 1993.)
After several close calls, Dave managed to avoid getting a "real job" and joined Ensemble Studios straight out of college a few years ago (just in time to do the computer-player AI for a little game called AGE OF EMPIRES). These days, Dave spends his time either leading the development of Ensemble Studios' engines or with his lovely wife Kristen. Dave can be reached at [email protected].
Basic Definitions

Movement. The execution of a path. Simple movement algorithms move a unit along a path, while more complex systems check collisions and coordinate unit movement to avoid collisions and allow otherwise stuck units to move.
Pathfinding. The act of finding a path (a planned route for a unit to get from point A to point B). The algorithm used can be anything from a simple exhaustive search to an optimized A* implementation.
Waypoint. A point on a path that a unit must go through to execute the path. Each path, by definition, has one waypoint at the start and one waypoint at the end.
Unit. A game entity that has the ability to move around the game map.
Group. A general collection of units that have been grouped together by the user for convenience (usually to issue the same order to all of the units in the group). Most games try to keep all of the units in a group together during movement.
Formation. A more complex group. A formation has facing (a front, a back, and two flanks). Each unit in the formation tries to maintain a unique relative position inside the formation. More complex models provide an individualized unit facing inside of the overall formation and support for wheeling during movement.
Hard Movement Radius. A measure of the volume of a unit with which we absolutely do not allow other units to collide.
Soft Movement Radius. A measure of the volume of a unit with which we would prefer not to collide.
Movement Prediction. Using the movement algorithms to predict where a unit will be at some point in the future. A good prediction system will take acceleration and deceleration into account.
Turn Radius. The radius of the tightest circle a unit can turn on at a given speed.