[In this in-depth technical article, originally published in Game Developer magazine, Neversoft co-founder West examines how procedurally generated content and compression can lead to expanding vistas for your open-world games.]
In a game where the environment and game objects are spooled from the disc as the player moves through the world, the limiting factor in allowable scene complexity is often a function of the disc's data transfer rate and the player's virtual speed within the world.
If the world has too much variety, then as the player moves from one region to another, a large amount of new data may need to be spooled from the disc in order to correctly display all the elements in the new region. If the data cannot be spooled fast enough, visible glitches may result as new geometry pops into existence.
Anyone who has played the Grand Theft Auto series on the PlayStation 2 will have had the occasional experience of rapidly turning a corner and finding a large section of the road invisible for a few seconds.
If the missing elements are logically necessary for the game to work, the player may be forced to wait out these stalls in gameplay while the missing elements load.
To prevent these problems, developers should place limits on scene complexity and on the allowable variation between game regions.
Limits should also be placed on the player's maximum speed through the world, keeping it low enough that the world can be fully updated even when the player is moving at top speed.
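The relationship between disc bandwidth, region data size, and the speed cap can be roughed out with simple arithmetic. A minimal sketch, with all figures assumed purely for illustration:

```python
# Back-of-the-envelope streaming budget (all figures assumed for illustration).
# If the world streams in tiles ahead of the player, the player must never
# outrun the disc: crossing one tile must take at least as long as spooling
# the next one.

def max_player_speed(tile_size_m, tile_data_mb, disc_rate_mb_s):
    """Maximum sustainable speed (m/s) before streaming falls behind."""
    load_time_s = tile_data_mb / disc_rate_mb_s   # time to spool one tile
    return tile_size_m / load_time_s              # distance covered per load

# Assumed figures: 100 m tiles, 4 MB of unique data each, and 2.5 MB/s of
# disc bandwidth left over after the audio and video streams.
speed = max_player_speed(tile_size_m=100.0, tile_data_mb=4.0, disc_rate_mb_s=2.5)
print(f"speed cap: {speed:.1f} m/s")  # 62.5 m/s, about 225 km/h
```

Turning the formula around gives the other design levers: for a fixed speed cap, the same relation bounds how much unique data each tile may contain.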
Disc bandwidth is frequently a shared resource, with the environment spooling simultaneously with the background music, voice-over, and sometimes video.
So, in addition to increasing allowable scene complexity, any improvement in the utilization of disc bandwidth will allow a richer game experience with these additional audio and video elements.
To maximize disc bandwidth utilization, the data on the disc needs to be compressed as much as possible. The greater the ratio between the size of the data in system memory and its compressed size on disc, the more effective disc bandwidth we have.
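Put as arithmetic: the effective delivery rate is the raw transfer rate multiplied by the compression ratio. A one-function sketch with assumed figures:

```python
# Effective disc bandwidth scales with the compression ratio
# (uncompressed size in memory divided by compressed size on disc).

def effective_bandwidth(raw_mb_s, uncompressed_mb, compressed_mb):
    """Delivered (post-decompression) data rate in MB/s."""
    return raw_mb_s * (uncompressed_mb / compressed_mb)

# A 2:1 ratio turns an assumed 2.5 MB/s drive into 5 MB/s of delivered
# data; a 4:1 ratio delivers 10 MB/s.
print(effective_bandwidth(2.5, 8.0, 4.0))   # 5.0
print(effective_bandwidth(2.5, 8.0, 2.0))   # 10.0
```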
Naive lossless compression generally gives us an approximately 50 percent reduction in the size of the data. While there is frequent talk of more powerful processors (and particularly multi-core processors) that would let us use more powerful compression algorithms, the improvements on offer are not orders of magnitude.
On arbitrary data, advanced algorithms (such as PAQ) don't perform much more than 10 to 20 percent better than simple algorithms (such as Lempel-Ziv), despite taking more than twice as much CPU time in the decompression stage and several orders of magnitude more time in the compression stage (which can cause serious production problems by increasing build time).
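The diminishing returns of extra compression effort can be observed even within a single codec. This sketch compares zlib's fastest and most aggressive settings on some repetitive sample data (the data and the exact ratios are illustrative only and will vary with real assets):

```python
import zlib

# Mildly repetitive sample data standing in for a game asset.
data = b"vertex 1.0 2.0 3.0 normal 0.0 1.0 0.0 uv 0.5 0.5\n" * 400

fast = zlib.compress(data, level=1)   # cheapest, fastest setting
best = zlib.compress(data, level=9)   # maximum-effort setting

print(len(data), len(fast), len(best))
# Both settings achieve a large reduction; the extra effort of level 9
# buys only a modest further saving on top of level 1.
```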
It's possible to achieve more significant improvements by tailoring specific compression strategies to the data being compressed. This could involve re-ordering the data by de-interleaving data channels to allow the compressor to take better advantage of repetition within a channel (such as the X, Y, and Z channels of a vertex list).
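The effect of de-interleaving can be demonstrated with a general-purpose compressor. Here is a sketch using Python's zlib on a synthetic vertex grid; the grid and its byte layout are invented for illustration, and real gains depend on the data set:

```python
import zlib

# A synthetic 32x32 grid of vertices, one byte per component for simplicity.
# x and y follow the grid and z is a simple function of both, so each
# channel taken on its own is highly regular.
verts = [(x, y, (x + y) & 0xFF) for y in range(32) for x in range(32)]

# Interleaved layout: x0 y0 z0 x1 y1 z1 ...
interleaved = bytes(b for v in verts for b in v)

# De-interleaved layout: all x's, then all y's, then all z's.
deinterleaved = (bytes(v[0] for v in verts) +
                 bytes(v[1] for v in verts) +
                 bytes(v[2] for v in verts))

packed_i = zlib.compress(interleaved, 9)
packed_d = zlib.compress(deinterleaved, 9)
print(len(packed_i), len(packed_d))
# The de-interleaved stream puts each channel's repetition within the
# compressor's match window, so it packs noticeably smaller.
```

The same idea applies to any structure-of-arrays transformation: the compressor never sees the channels as separate streams unless the data is laid out that way on disc.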