Massively Multiplayer Game Development 2: Architecture and Techniques for an MMORTS

June 13, 2005

Introduction

In a massive multiplayer online real-time strategy game (MMORTS), thousands of players share the same contiguous virtual space. Each player controls dozens of units (in contrast with the single avatar of most MMORPGs) and can battle with or against any number of players simultaneously. The players need to receive information about the immediate surroundings of all their units. As these units can be very far apart and can see many other players with thousands of units, the amount of information that must be transferred to the client machines to keep their world state consistent is far greater than that required in the MMORPG genre.

Dealing with this tremendous flow of information poses one of the greatest challenges of the MMORTS game genre [Jaecheol01]. Naïvely updating all the information around each of the players' units all the time (in excess of 50 KB/sec) is impractical over today's low-bandwidth modems. Even if every end user had equipment capable of such throughput, the cost of this communication load to the game provider would be tremendous.

The new game genre also poses new demands in other areas, such as content creation and data presentation to the user. For example, Artificial Intelligence (AI) requires a new level of computational resources, both for players' units and for the computer adversaries needed to act against these players, as typical virtual worlds might include millions of AI-controlled units. Moreover, the large maps needed to house thousands of players are problematic for standard path-finding techniques.

In this article, we describe the algorithmic basis needed for implementing an MMORTS game capable of sustaining hundreds of units for each player, all of which can affect their surroundings. We focus mainly on the task of keeping data consistent across all servers and all connected clients with minimal bandwidth, while maintaining acceptable perceived latency. Stated differently, we need to send each client only the information relevant to it, in a way that fits into 1 to 2 KB/sec.

The final sections describe how the devised algorithm can also benefit other parts of the system (clustering, graphics, etc.), and deal briefly with other problems of MMORTS games.

Basic Techniques

We will start with the most naïve approach to the problem of sending all relevant data to each client. The idea is to generalize algorithms such as [TNL01], used in single-avatar MMOGs for sending relevant data, so that the loop runs over all the units of each player. This is best understood from the following pseudocode:

for (each connected player P) do {
   for (each unit E of player P) do
      for (each element U within LOS of unit E)
         Create message updating element U's state
   Compress all messages to player P
   Send messages to player P
}

The compression algorithm can take into account the fact that many consecutive elements U near each other have similar characteristics, such as similar unique identifiers (IDs), close positions, and the same action (all attacking, all moving, etc.), and can therefore achieve a fairly good compression ratio.
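As a rough illustration of such a compressor, the following sketch delta-encodes a batch of unit updates sorted by ID. The UnitUpdate record, the variable-length integers, and the action flag byte are assumptions for illustration, not the article's actual wire format.

#include <cstdint>
#include <vector>

// Hypothetical per-unit update record (not the article's actual format).
struct UnitUpdate {
    uint32_t id;      // unique identifier
    int32_t  x, y;    // position on the map
    uint8_t  action;  // action code (move, attack, ...)
};

// LEB128-style variable-length encoding: small values take fewer bytes.
static void writeVarint(std::vector<uint8_t>& out, uint32_t v) {
    while (v >= 0x80) { out.push_back(uint8_t(v) | 0x80); v >>= 7; }
    out.push_back(uint8_t(v));
}

// Zigzag-map a signed delta to an unsigned value so that small negative
// deltas also encode into few bytes.
static uint32_t zigzag(int32_t v) {
    return (uint32_t(v) << 1) ^ uint32_t(v >> 31);
}

// Delta-encode a batch of updates sorted by unit ID. Because consecutive
// units tend to have nearby IDs, close positions, and the same action,
// most records shrink to a handful of bytes.
std::vector<uint8_t> encodeBatch(const std::vector<UnitUpdate>& batch) {
    std::vector<uint8_t> out;
    UnitUpdate prev{0, 0, 0, 0};
    for (const UnitUpdate& u : batch) {
        writeVarint(out, u.id - prev.id);          // ID delta (batch is sorted)
        writeVarint(out, zigzag(u.x - prev.x));    // position deltas
        writeVarint(out, zigzag(u.y - prev.y));
        if (u.action == prev.action) {
            out.push_back(0x00);                   // flag: same action as previous unit
        } else {
            out.push_back(0x01);                   // flag: new action follows
            out.push_back(u.action);
        }
        prev = u;
    }
    return out;
}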

The first step in optimizing the algorithm comes from noting that a lot of information is covered by more than one element of the connected player (usually in RTS games, players move their units in fairly large groups, and all of them see more or less the same things). Although it is possible to simply flag each element U once it has been sent in order to avoid retransmission, we will take a slightly longer route, which will prove more useful in the end.

World Segmentation

We divide the world into small square regions (see Figure 2.8.1), which must be bigger than the largest line-of-sight (LOS) radius, so that a unit in a square can see at most into the adjacent squares. The region should not exceed the LOS by much, so that the adjacent squares do not include excess information that cannot actually be seen. Note that this bears a vague similarity to sections of grid solutions (see [Ferguson01]), but serves a different purpose.


Figure 2.8.1: World segmentation on the server before and after unit A in D4 crosses to E4. The gray areas are regions marked as viewed for the player who owns unit A. Units that belong to other players are marked as b (viewed) or x (not viewed).
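As a minimal sketch of this segmentation, the following helper maps a world position to its region, assuming the region edge is set equal to the largest LOS radius; the constants and names are illustrative, not from the article.

#include <cstdint>

// Illustrative constants; the actual values depend on the game's units of
// measurement and its largest line-of-sight radius.
constexpr int kMaxLosRadius = 32;              // largest LOS radius, in world tiles
constexpr int kRegionSize   = kMaxLosRadius;   // region edge >= max LOS radius

struct RegionCoord { int rx, ry; };

// Map a (non-negative) world position to the square region containing it.
// Because the region edge is at least the largest LOS radius, a unit in
// region (rx, ry) can only ever see into the 3x3 block of regions
// centered on (rx, ry).
RegionCoord regionOf(int worldX, int worldY) {
    return { worldX / kRegionSize, worldY / kRegionSize };
}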

For each region, we keep track of all the elements in the region and of all the players who should receive information about that region or part of it (players viewing the region). This makes updating the state of the world relatively trivial, as illustrated by the following pseudocode:

for (each region of map R) do {
   for (each connected player P viewing region R) do
      for (each element E in region R)
         Create message updating element E’s state
}
for (each connected player P) do {
   Compress all messages to player P
   Send messages to player P
}

Keeping track of the elements in each region is pretty straightforward, but keeping track of the list of players who view the region is somewhat trickier, as it also depends on their elements in adjacent regions. To do so, we assign a bitfield to each viewing-player entry in the region, designating from which adjacent squares (or the square itself) the region is viewed. When the first unit of a player enters a region, all the squares adjacent to it are marked as "viewed" by that player (the player is added if not yet present, and the proper bit is set). When the last unit of that player leaves the square, all bitfields in adjacent squares are updated, and players who no longer see a square are deleted (see Figure 2.8.1).
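A minimal sketch of this bookkeeping, assuming a 3x3 neighborhood encoded as a 9-bit mask per (region, player) pair; the data structures and names are illustrative, not the article's.

#include <cstdint>
#include <unordered_map>

using PlayerId = uint32_t;

// For each region, one bit per neighboring square (including the square
// itself) telling from which of the 9 squares this player views the region.
struct Region {
    std::unordered_map<PlayerId, uint16_t> viewers;  // 9-bit view masks
};

constexpr int kMapW = 64, kMapH = 64;                // map size in regions (illustrative)
Region g_regions[kMapW][kMapH];

// Bit index for a neighbor offset (dx, dy), each in [-1, 1].
inline int bitFor(int dx, int dy) { return (dy + 1) * 3 + (dx + 1); }

// Called when player P's first unit enters region (rx, ry):
// mark the region and all its neighbors as viewed from (rx, ry).
void onFirstUnitEntered(PlayerId p, int rx, int ry) {
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = rx + dx, ny = ry + dy;
            if (nx < 0 || ny < 0 || nx >= kMapW || ny >= kMapH) continue;
            // Neighbor (nx, ny) is viewed from square (rx, ry), which lies
            // at offset (-dx, -dy) relative to that neighbor.
            g_regions[nx][ny].viewers[p] |= uint16_t(1) << bitFor(-dx, -dy);
        }
}

// Called when player P's last unit leaves region (rx, ry):
// clear the corresponding bit and drop players who no longer view a square.
void onLastUnitLeft(PlayerId p, int rx, int ry) {
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = rx + dx, ny = ry + dy;
            if (nx < 0 || ny < 0 || nx >= kMapW || ny >= kMapH) continue;
            auto it = g_regions[nx][ny].viewers.find(p);
            if (it == g_regions[nx][ny].viewers.end()) continue;
            it->second &= ~(uint16_t(1) << bitFor(-dx, -dy));
            if (it->second == 0) g_regions[nx][ny].viewers.erase(it);
        }
}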

This solution still falls short of the mark, as can easily be seen by considering a conservative scenario: a player and four allies attack an enemy base guarded by five players, and each of the ten players has 100 units, totaling 1,000 units. This scenario already requires each client to be sent updates for about 1,000 units approximately twice per second. Even at 6 bytes/unit after amazing compression (a single second of battle can contain updates to several attributes: HP, mana, XP, carried items, etc., and each update must also contain the unit's unique identifier), that is roughly 12 KB/sec, which exceeds standard modems and would cost the game provider a fortune. Since MMORTS games contain units that persist over many weeks, additional data is needed to support them (more configurable options and abilities), which further adds to the amount of data that must be transferred.

Advanced Methods for Bandwidth Reduction

The next step in reducing the communication bandwidth is to note that many changes in the state of an element do not require update messages at all, because the updates are predictable. For example, if a moving unit continues moving in the same direction, there is no need to send updates for these movements; updates are sent only when the unit changes direction or stops. Many network games use prediction of the user's actions mainly to cut response time, not bandwidth: if the user chooses the trivially predicted action, the client starts executing the move at once.

It is possible to build a slightly more elaborate prediction scheme that also takes into account more "active" decisions (assuming one choice is favored over the others). The server runs the prediction algorithm against the actual state, and if the prediction fails for any reason, it sends corrections to the clients.

RTS games offer the opportunity for a far more elaborate prediction scheme. Since the units are usually given "complex" commands (build here, attack there, etc.) and the AI is responsible for decomposing them into step-by-step actions, many of the units' actions are entirely predictable by the same algorithm. Therefore, if we send these complex commands at a low rate and perform the same low-level AI routines on the server and all viewing clients, we can achieve very low bandwidth while keeping the state consistent.
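To make this concrete, here is a minimal sketch of such a shared low-level AI step, assuming hypothetical Unit and Command types; it illustrates the idea rather than the article's implementation.

#include <cstdint>

// Illustrative types; the real game state is of course far richer.
struct Command { int targetX, targetY; };              // a "complex" command: move to a target
struct Unit    { uint32_t id; int x, y; Command cmd; };

// Deterministic low-level AI step shared by the server and every viewing
// client. Given the same unit state and the same complex command, it must
// produce the same result everywhere (integer math only, no local randomness).
void stepUnit(Unit& u) {
    int dx = u.cmd.targetX - u.x;
    int dy = u.cmd.targetY - u.y;
    // Walk one tile per tick toward the target. A real game would run cheap
    // steering here (follow, offset move, retaliate), but never a full A*.
    u.x += (dx > 0) - (dx < 0);
    u.y += (dy > 0) - (dy < 0);
}

The server sends the complex command once (a few bytes); thereafter both sides advance the unit with the same routine every tick, and the server compares its authoritative state against this shared prediction, emitting a correction only when the two diverge (for example, when the user issues a new command).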

The output of such a prediction algorithm might depend on the unit itself and its surroundings. For example, the movement of a unit A following another unit B depends on the whereabouts of unit B, as well as on the obstacles lying in its immediate path; target selection might depend on all of the local enemy units. Typically, the low-level AI dependencies are restricted to a certain radius around the unit, creating a dependency bubble. For the output of the algorithms to be consistent between the server and all the clients, they must have the same commands, and the relevant information inside the dependency bubble must be identical on all of them.

Developers of MMORTS games should also consider the following issues:

The computation power on the server restricts the possible prediction power. Naïvely, one could send only very complex actions, such as the user's own commands, at large time intervals, and all clients could predict everything from them. However, such actions usually require vast amounts of computation. Using the movement example, the entire path of a unit can be constructed from its true destination; however, even with a fully optimized A*, there is no way a server can deal with more than a few thousand units (when millions are needed). Therefore, in practice we should restrict ourselves to actions that are computationally simple, such as retaliating against attacks by nearby creatures, or simple steering (e.g., walking a few tiles, following another unit, moving with an offset).

Since the actions of units depend on other units in the dependency bubble, the order of processing is a significant factor. Suppose that B changes its direction in the preceding example. If A is processed first, it will not be aware of the change of direction and will continue on its previous heading; if B is processed first, A will change its heading to match. One way to achieve the same processing order on the server and all clients is to process all units according to their unique identifiers in ascending order. To avoid ordering advantages and disadvantages, one can alternate between ascending and descending order on interleaved turns, as in the sketch below.
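A minimal sketch of this ordering rule, reusing the hypothetical Unit type and stepUnit routine from the earlier sketch:

#include <algorithm>
#include <cstdint>
#include <vector>

struct Unit { uint32_t id; /* ... remaining game state ... */ };

// The shared deterministic low-level AI step (see the earlier sketch).
void stepUnit(Unit& u);

// Process all units in the same order on the server and every client:
// sort by unique ID and alternate the direction on every other tick, so
// no unit gains a permanent "acts first" advantage over its neighbors.
void processTick(std::vector<Unit>& units, uint64_t tick) {
    std::sort(units.begin(), units.end(),
              [](const Unit& a, const Unit& b) { return a.id < b.id; });
    if (tick % 2 == 0) {
        for (auto it = units.begin(); it != units.end(); ++it) stepUnit(*it);
    } else {
        for (auto it = units.rbegin(); it != units.rend(); ++it) stepUnit(*it);
    }
}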

Another related issue is the consistency of the data in the dependency bubble. For the prediction to be accurate, all data in the dependency bubble must be available and current. On a client machine, part of the dependency bubble of units positioned near the edge of the viewable region falls outside this region, so prediction cannot be used for them while staying consistent with the server. As mentioned previously, once the prediction fails, the server must send corrections to the client, resulting in the need for continuous updates of units near the edge of the viewable region.

The practical question is how to implement a simple system on the server that knows which units require continuous updates and which units can rely on prediction. One possible solution is to generalize the segmentation into regions by dividing the regions on the server into two types for each player: the CUR (Continuously Updated Region) and the PUR (Prediction Used Region). The PURs of a certain player are those squares that hold his elements, and the CURs are all the remaining viewed squares. Figure 2.8.2 illustrates a scenario similar to that in Figure 2.8.1.


Figure 2.8.2: Advanced world segmentation on the server before and after unit A in D4 crosses to E4. The gray areas are regions marked as viewed for the player who owns unit A: light gray regions are CURs (Continuously Updated Regions) and dark gray regions are PURs (Prediction Used Regions). Units of other players are marked as b (viewed) or x (not viewed).
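A minimal sketch of this per-player classification, extending the hypothetical Region record from the earlier bookkeeping sketch with an occupants set; names and structures are illustrative.

#include <cstdint>
#include <unordered_map>
#include <unordered_set>

using PlayerId = uint32_t;

// Illustrative region record: besides who views the region, remember which
// players actually have elements inside it.
struct Region {
    std::unordered_map<PlayerId, uint16_t> viewers;   // 9-bit view masks
    std::unordered_set<PlayerId> occupants;           // players with units here
};

enum class RegionKind { NotViewed, CUR, PUR };

// For player p, a viewed region that holds his own elements is a PUR
// (the client can run the shared prediction there); every other viewed
// region is a CUR and must be continuously updated.
RegionKind classifyFor(const Region& r, PlayerId p) {
    if (r.viewers.find(p) == r.viewers.end()) return RegionKind::NotViewed;
    if (r.occupants.count(p))                 return RegionKind::PUR;
    return RegionKind::CUR;
}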

Intuitively, having only those squares that hold the player's elements as his PURs would seem to provide only mild results, as most of the viewed area still needs to be continuously updated. It is also possible to define the eight adjacent squares as PURs as well, and only the squares around them as CURs, giving a better PUR/CUR ratio. In practice, however, the gain from such a choice is small, as the distribution of elements is far from uniform. Units of different players get close to one another only to interact (allies move together and trade, enemies fight, etc.), which usually drives them into the same square. In fact, a careful choice of square size can ensure that the CURs are usually nearly empty, even in very large battles with thousands of units and many players. In addition, increasing the number of PURs and CURs results in a wider area about which the client must be updated, ultimately resulting in larger bandwidth.

To complete the discussion, the data-updating algorithm now looks something like:

for (each region of map R) do {
   for (each connected player P viewing region R) do {
      if (region R is a CUR for P) {        // border of P's viewable area
         for (each element E in region R) do
            Create message updating E's state
      }
      else {                                 // region R is a PUR for P
         for (each element E in region R) do {
            if (prediction fails, e.g., due to a user change)
               Create message updating E's state
         }
      }
   }
}
for (each connected player P) do {
   Compress all messages to player P
   Send messages to player P
}

It should be noted that the algorithm also provides data that lends itself to compression. As mentioned earlier, even without any proper world segmentation, a compression algorithm is quite effective because many consecutive elements U are near each other, have similar unique identifiers (IDs), and probably engage in the same action. The advanced world segmentation allows a far better compression ratio:

• The world segmentation essentially ensures that consecutive units lie close to each other.
• At the complex-action level, more units are engaged in the same action than at the step-by-step level (since in RTS games, units move in formations that share the same complex action).
• The game elements are processed in an ascending/descending order so that IDs can be even more tightly packed.

