The Terrain Textures
Other than the height map, we use three different types of textures for the rendering of the terrain in Just Cause 2:
- Normal map (A8L8 format) - The normals of the terrain.
- Material indices (ARGB4444 format) - The material type index.
- Material weights (ARGB4444 format) - The weight of each material type.
The normal map texture is obviously used for shading -- nothing fancy there. The benefit of having the normals stored in a texture as opposed to in the vertices is that lighting becomes invariant of mesh resolution, which helps a lot when it comes to hiding the LOD transitions. The material textures are used in the pixel shader to select and blend between various high-res material texture maps. The resolution of each of these textures is four meters per texel. This is quite a low resolution for terrain textures, but we use tiled detail textures that modulate them to achieve a higher final resolution. This is an example of where we use procedural techniques to improve the fidelity of the data.
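To make the select-and-blend idea concrete, here is a rough CPU-side sketch of the kind of math involved. The texel layout, the helper names (MaterialTexel, SampleDetailAlbedo, BlendTerrainAlbedo) and the tiling factor are my own illustrative assumptions, not the actual Just Cause 2 shader code.

```cpp
#include <cstdint>

struct Vec3 { float x, y, z; };

inline Vec3 operator*(float s, Vec3 v) { return { s * v.x, s * v.y, s * v.z }; }
inline Vec3 operator+(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
inline Vec3 operator/(Vec3 v, float s) { return { v.x / s, v.y / s, v.z / s }; }

// One texel of the 4 m/texel material maps. ARGB4444 gives four 4-bit
// channels per texel, so up to four materials (an index plus a weight
// each) can influence a given spot on the terrain.
struct MaterialTexel
{
    uint8_t index[4];  // 0..15, selects one of the detail materials
    float   weight[4]; // 0..1, decoded from the 4-bit weight channels
};

// Stand-in for sampling the tiled, high-resolution detail texture of a
// material (albedo only, for brevity). A real shader would fetch from a
// texture array here; this stub just returns a flat colour per material.
inline Vec3 SampleDetailAlbedo(uint8_t materialIndex, float u, float v)
{
    (void)u; (void)v;
    const float g = float(materialIndex) / 15.0f;
    return { g, g, g };
}

// Weighted blend of the selected detail materials. 'tiling' repeats the
// detail textures far more densely than the 4 m/texel source data, which
// is what restores apparent texture resolution.
inline Vec3 BlendTerrainAlbedo(const MaterialTexel& m, float u, float v, float tiling)
{
    Vec3  result = { 0.0f, 0.0f, 0.0f };
    float totalW = 0.0f;
    for (int i = 0; i < 4; ++i)
    {
        result = result + m.weight[i] * SampleDetailAlbedo(m.index[i], u * tiling, v * tiling);
        totalW += m.weight[i];
    }
    return (totalW > 0.0f) ? result / totalW : result;
}
```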
I won't go into detail on the pixel shader in this article, but I can say that it is quite expensive and we spent a lot of time optimizing it. In fact, the frame rate is sometimes higher when there are a lot of trees on screen, because they occlude the terrain and spare us from rendering expensive terrain pixels. It's an interesting situation when drawing more actually improves performance!
The Height Map and Material Map
The source data resolution of the height map and material map in JustEdit is 4 meters per sample. The heights are stored as 16-bit values, and the material indices as 8 bits. In the PC version of Just Cause 2 we keep this format, but for Xbox 360 and PlayStation 3 we compress the maps down to 16 bits per sample in total. The compression is a lossy scheme that takes 4-by-4 blocks of height and material samples and makes a number of simplifications to the data. The materials are simplified much like in the DXT texture formats: each block stores a table of four entries, and each sample holds a 2-bit value used as a lookup index into that table.
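As a rough illustration of the DXT-style material half of the scheme, a packed block might look something like the sketch below. The exact bit layout and the names (PackedMaterialBlock, DecodeMaterial) are my own guesses for illustration; the article only states the general idea and the 16 bits/sample budget.

```cpp
#include <cstdint>

// Hypothetical layout of the material part of a compressed 4x4 block.
// Four 8-bit material indices form the table (32 bits), and each of the
// 16 samples stores a 2-bit selector into that table (32 bits).
struct PackedMaterialBlock
{
    uint8_t  table[4];   // up to four distinct materials per 4x4 block
    uint32_t selectors;  // 16 x 2-bit indices; sample (x, y) sits at bits 2*(y*4 + x)
};

// Decompress the material index for sample (x, y) within the block.
inline uint8_t DecodeMaterial(const PackedMaterialBlock& block, int x, int y)
{
    const int      shift = 2 * (y * 4 + x);
    const uint32_t sel   = (block.selectors >> shift) & 0x3;
    return block.table[sel];
}
```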
The height values are converted to a 3:6 floating-point format, where a 3-bit exponent is shared by each 2-by-2 group of samples and each sample has a 6-bit mantissa. Using a floating-point format allowed us to keep high precision in low-frequency areas, where it is typically needed the most. Bounding data is also packed into the block to speed up ray-casts. When sampling the data at runtime, a Catmull-Rom spline interpolation is performed on the unpacked samples, and high-resolution displacement maps controlled by the material map are added to the result. This way, high-fidelity height values are achieved with a relatively small amount of data.
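Here is a sketch of how such a shared-exponent decode and the spline reconstruction could look. The decode formula and the per-block heightScale/heightBias range are assumptions for illustration (the article only specifies the 3-bit shared exponent per 2-by-2 group and the 6-bit mantissa per sample); the Catmull-Rom weights are the standard ones, applied separably in x and y over the unpacked 4x4 neighbourhood.

```cpp
#include <cstdint>

// Hypothetical decode of one 3:6 height sample: a 6-bit mantissa scaled
// by a 3-bit exponent shared across a 2x2 group of samples. The mapping
// back to meters (heightScale/heightBias) is an assumed per-block range.
inline float DecodeHeight36(uint8_t mantissa6, uint8_t sharedExp3,
                            float heightScale, float heightBias)
{
    const uint32_t value = uint32_t(mantissa6) << sharedExp3; // 0 .. 63*128
    return heightBias + heightScale * float(value);
}

// Standard 1D Catmull-Rom interpolation of four consecutive samples,
// evaluated at t in [0,1] between p1 and p2. Run once per row and then
// once across the row results to get a smooth bicubic height value.
inline float CatmullRom(float p0, float p1, float p2, float p3, float t)
{
    const float t2 = t * t;
    const float t3 = t2 * t;
    return 0.5f * ((2.0f * p1) +
                   (-p0 + p2) * t +
                   (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2 +
                   (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
}
```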
The height map can be sampled thousands of times per frame by the physics system, so it was crucial that the sampling function be fast. We therefore use hand-tuned SIMD functions to unpack and sample the data. The VMX instruction set on Xbox 360 and PlayStation 3 is very powerful, and much of the magic was possible thanks to the versatile permute instruction. This instruction sadly doesn't exist on x86 architectures, which was one of the reasons we were forced to use a different data representation on the PC.
The Terrain Mesh
So we've talked about stream patches, and how they have a fixed size of 512 by 512 meters. One can view stream patches as mere data containers that are streamed in to provide data on request to other systems. Now, the terrain meshes are represented as a type of patch too. We call these terrain patches, and they basically contain a run-time-generated vertex and index buffer representing the terrain in that area.
Terrain patches are organized as a patch system, i.e. a series of patch maps as described above, because there are several level-of-detail representations of the terrain mesh. There are twelve levels of terrain mesh detail in Just Cause 2. Each level consists of a patch map of 8 by 8 terrain patches, centered on the camera. The smallest of the terrain patch maps covers a total area of 64 by 64 meters, and the largest covers a total area of 256 by 256 kilometers. Note that this is much larger than the game world itself; we have procedurally generated data outside of the game world to ensure that the visible distance is the same in all directions regardless of where you are on the map.
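A minimal sketch of the pyramid layout follows, using the figures quoted above (twelve levels, 8x8 patches per level, 64x64 m at the finest level). The assumption that each level simply doubles the coverage of the previous one is mine and is for illustration only; the real per-level progression is not spelled out in the article.

```cpp
// Illustrative constants taken from the figures in the text.
constexpr int   kNumLevels     = 12;    // levels of terrain mesh detail
constexpr int   kPatchesPerRow = 8;     // each level is an 8x8 patch map
constexpr float kFinestMapSize = 64.0f; // metres covered by the finest level

// World-space size of one terrain patch at a given level, assuming each
// level doubles the coverage of the previous one (an assumption for
// illustration; the actual scale progression may differ).
inline float PatchSizeAtLevel(int level)
{
    return (kFinestMapSize / kPatchesPerRow) * float(1 << level);
}

// Finest level whose 8x8 map, centred on the camera, still contains a
// point at the given horizontal distance from the camera -- i.e. the
// level that would represent that point at the highest available detail.
inline int FinestLevelCovering(float distance)
{
    for (int level = 0; level < kNumLevels; ++level)
    {
        const float halfExtent = 0.5f * kPatchesPerRow * PatchSizeAtLevel(level);
        if (distance <= halfExtent)
            return level;
    }
    return kNumLevels - 1;
}
```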
The concept of a level-of-detail pyramid of patches centered on the camera is similar to Geometry Clipmaps described by Hugues Hoppe and Frank Losasso, but we actually developed our system before that paper was published in 2004.
As the camera moves, new rows and columns of terrain patches are constructed and old ones destroyed. Their data is gathered either from the stream patches or from a global low-fidelity data representation, depending on the LOD level of the terrain patch.
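Here is a minimal sketch of that recentring step for a single patch map, under my own bookkeeping assumptions (patches keyed by their integer grid coordinate at that level, and an 8x8 window that follows the camera); the names TerrainPatchMap, Recenter and BuildPatch are mine, not the engine's.

```cpp
#include <cmath>
#include <cstdint>
#include <functional>
#include <unordered_map>

struct PatchKey { int32_t x, y; };
inline bool operator==(PatchKey a, PatchKey b) { return a.x == b.x && a.y == b.y; }

struct PatchKeyHash
{
    size_t operator()(PatchKey k) const
    {
        const uint64_t packed = (uint64_t(uint32_t(k.x)) << 32) | uint32_t(k.y);
        return std::hash<uint64_t>()(packed);
    }
};

// Placeholder for a run-time generated terrain patch (vertex and index
// buffers built from stream-patch data or the global low-fidelity data).
struct TerrainPatch {};

// One level of the pyramid: an 8x8 window of patches following the camera.
class TerrainPatchMap
{
public:
    explicit TerrainPatchMap(float patchSize) : m_patchSize(patchSize) {}

    // Rebuild the set of live patches around the new camera position.
    // Grid cells entering the window are constructed, cells leaving it
    // are destroyed, and cells that remain visible are reused as-is.
    void Recenter(float cameraX, float cameraY)
    {
        const int cx = int(std::floor(cameraX / m_patchSize));
        const int cy = int(std::floor(cameraY / m_patchSize));

        std::unordered_map<PatchKey, TerrainPatch, PatchKeyHash> kept;
        for (int y = cy - 4; y < cy + 4; ++y)
            for (int x = cx - 4; x < cx + 4; ++x)
            {
                const PatchKey key{x, y};
                auto it = m_patches.find(key);
                if (it != m_patches.end())
                    kept.emplace(key, std::move(it->second)); // still in view: reuse
                else
                    kept.emplace(key, BuildPatch(key));        // new row/column: build
            }
        m_patches.swap(kept); // patches not carried over are destroyed here
    }

private:
    // Stub for this sketch; the real code would generate the vertex and
    // index buffers from the source data for this grid cell.
    TerrainPatch BuildPatch(PatchKey) { return {}; }

    float m_patchSize;
    std::unordered_map<PatchKey, TerrainPatch, PatchKeyHash> m_patches;
};
```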
/Linus
Twitter: @BlombergLinus
Indeed, the game that inspired me at this time was Paul Woakes' Mercenary a year later, in which you crash-land on a planet and then have a number of adventures within that city as you seek a way to escape. This type of open world gameplay was new to me back then and may well have invented the genre - certainly more so than Grand Theft Auto. The later sequel Damocles had you flying from planet to planet within a star system, giving a sense of enormous empowerment. This predates Frontier: Elite II by several years, a game which struck me as too big, dry and purposeless. Flavien Brebion's Infinity (The Quest for Earth) is technically impressive - but ultimately as cold and detached as Space Engine or Universe Sandbox. At times EVE Online veers too close to being as boring as a spreadsheet with a fancy sit-back-and-watch screensaver attached. The launch of Dust 514 for the PS3 just highlights the lack of cohesion in the game - you really ought to be able to land on a planet in your own ship rather than delegate the fun to some mercs and pay for the privilege in the process. For a long time space games were out of vogue, and it is interesting to see so many get funded through recent Kickstarter appeals, including Elite: Dangerous - which, in later versions, promises to let you land on planets.
However, as someone keen on this genre and finding nothing in the market since the release of Damocles, my yearnings for an adventure game set as much on the surfaces of astral bodies and inside spacecraft as between the stars drove me to work on the tools I thought I would need if I were to attempt to write such a big game by myself. Suffice to say I have put decades of research into boosting my productivity. C++ horrifies me: such an awful mish-mash of poorly conceived features wrapped in an error-prone syntax. Substantial use will need to be made of both kitbashing (assembling complex models by phasing together the geometry of "prefabs") and the procedural generation of those prefabs. Miguel Cepero's work on Procedural World looks a lot better than I need my game 'Universe' to look. Indeed, I would happily accept the rudimentary charm of Damocles if that was all I could manage; after all, there are a lot of other aspects of the game that I think are more important than its presentation... I don't like this modern trend of squandering computational resources on special effects that have no impact on gameplay, whilst physics and AI get short shrift again.
So, finally I wend my way around to my question. How would I apply a terrain solution like yours, which depends on square tiles, to a spherical planet? I don't think latitude and longitude will work, as that will stretch things too much and result in awkward triangles at the poles. An icosahedron sphere is nice, but relies on triangles. Is a (bloated) spherified cube the correct approach? What happens to the horizon? Will it appear to curve more with altitude? Should I trust my instincts and just use polar coordinates and a heightmap centred on the core of the planet? Any advice or pointers to research papers would be appreciated. I'd like to read up on this subject now whilst I'm implementing my programming language.