A Real-Time Procedural Universe, Part One: Generating Planetary Bodies

March 2, 2001

Introduction

There have been quite a few articles, web sites, and books published in the past few years that talk about generating game worlds procedurally. As game worlds have become larger and more complex, it has become less practical for game designers to generate every detail by hand. Mathematical procedures of all sorts have been around for quite a while to generate different types of texture maps, terrain, and 3D objects. If you want an excellent example of procedural modeling and texturing in action, take a look at Terragen. For a good example of procedural textures being generated real-time in a game engine, check out the manual for the animated textures used in Unreal.

If you've been interested in procedural texturing and modeling for even half the amount of time I have, you've probably read many of the same articles and web sites I've read. You may have even bought a book or two and written some code. If you're like me, you've also probably been frustrated by how much time it takes to sift through all the information out there to figure out what really works. Although this article won't solve all your problems or answer all your questions, hopefully it will get you much further along the path than you are now, or at least give you plenty of ideas.

This article is the first part in a series that will explain how to procedurally generate an entire universe and render it real-time at any level of detail in a game engine. Here is what's planned for the first few articles in the series:

  • Part 1: Generating Planetary Bodies. Part 1 will explain how to procedurally generate full-size planetary bodies at any level of detail. It will discuss the pros and cons of different types of procedural algorithms, then explain the method I chose to use in more detail.
  • Part 2: Rendering Planetary Bodies. Part 2 will explain how to render full-size planetary bodies at any level of detail. It uses a spherical version of the ROAM algorithm I came up with, and it is tailored specifically to the procedural algorithm explained in part 1.
  • Part 3: Matters of Scale. Part 3 will explain how to expand the concepts discussed in Parts 1 and 2 to generate a star system filled with planetary bodies, a galaxy of star systems, and ultimately an entire universe of galaxies. Aside from explaining the concepts, it will discuss problems with scale and frame of reference that you run into with really large game worlds.
  • Part 4: Planetary Texture Maps. Part 4 will explain the procedural creation of texture maps. This includes both how the demo generates the current simple texture map it uses for the planet and how that can be improved. For such a large model, creating multiple layers of textures may be the best way to deal with the drastically different levels of detail. This article will also discuss some optimizations for generating procedural textures.

A working C++ demo with source will be provided that generates and renders an Earth-sized planet with an Earth-sized moon revolving around it. The rendering is done using OpenGL, but it can be easily ported to Direct3D or another graphics library of your choice. I am publishing the source under the GNU Lesser General Public License, and it is being used in glElite, an open-source OpenGL version of a popular 1980s game called Elite, which is being written by Timo Suoranta.

Planetary Bodies: Static vs. Dynamic Algorithms

Unfortunately, most procedural routines I've found for generating terrain deal only with flat terrain. Few game designers work with a world large enough to need a sphere. If you see a flat algorithm you really like, you might need to use some creativity to map it to a sphere without causing any distortion. A good example of this can be found in Hugo Elias's article on Spherical Landscapes. The original algorithm generated random "fault lines" on a flat map, randomly raising or lowering the land on either side. The spherical version generates random planes that cut through the sphere, then randomly raises or lowers the land on either side. It's an interesting technique, and the pictures of it look pretty good.
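To make that mapping concrete, here is a minimal sketch of a single fault-plane pass over a sphere, assuming the mesh vertices are stored as unit-sphere positions with a parallel array of height values. The names (Vector3, ApplyFaultPlane) are illustrative, and for simplicity every plane passes through the center; Elias's version cuts the sphere with arbitrary planes.

#include <cmath>
#include <cstdlib>
#include <vector>

struct Vector3 { float x, y, z; };

static float Dot(const Vector3 &a, const Vector3 &b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Pick a random unit-length plane normal.
static Vector3 RandomUnitVector()
{
    Vector3 v;
    float fLength;
    do {
        v.x = 2.0f*rand()/RAND_MAX - 1.0f;
        v.y = 2.0f*rand()/RAND_MAX - 1.0f;
        v.z = 2.0f*rand()/RAND_MAX - 1.0f;
        fLength = std::sqrt(Dot(v, v));
    } while (fLength < 0.0001f || fLength > 1.0f);
    v.x /= fLength; v.y /= fLength; v.z /= fLength;
    return v;
}

// One pass of the fault-plane algorithm on a sphere: choose a random plane
// through the center, then raise every vertex on one side of it and lower
// every vertex on the other side. Repeating this many times with a gradually
// shrinking fDelta builds up the terrain.
void ApplyFaultPlane(const std::vector<Vector3> &vertices,
                     std::vector<float> &heights, float fDelta)
{
    Vector3 normal = RandomUnitVector();
    for (size_t i = 0; i < vertices.size(); ++i)
        heights[i] += (Dot(vertices[i], normal) > 0.0f) ? fDelta : -fDelta;
}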

The fault line (or plane) algorithm is an example of what I call a "static" algorithm. A static algorithm is one that generates a static mesh of vertices or height values. The most common form generates a 2D bitmap of height values, then uses the bitmap to build a triangle mesh for a flat landscape. While some static algorithms are very good, they all suffer from two crippling problems. The first is that you need a ridiculously large mesh to get any detail for a full-size planetary body. The second is that the speed of the algorithm depends on the size of the mesh.

To give you an idea of the size we're talking about with planetary bodies, an order-10 sphere based on an icosahedron (20 × 4^10 triangles) has almost 21 million triangles in it. For a planet the size of Earth, the triangle edges would be 300 km long. The edges for an order-20 sphere (22 trillion triangles) would be 0.3 km long. It would be ridiculously slow, take up an enormous amount of storage space, and still wouldn't be able to show you details on individual hills or mountains. In that light, it seems that the only feasible way to generate a planet is to come up with something more dynamic.
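For reference, those triangle counts fall straight out of the subdivision rule (each level splits every triangle of the icosahedron into four); a quick sketch:

#include <cstdio>

// Triangle count of an icosahedron-based sphere subdivided 'order' times:
// each subdivision level splits every triangle into four.
unsigned long long TriangleCount(int order)
{
    unsigned long long count = 20;
    for (int i = 0; i < order; ++i)
        count *= 4;
    return count;
}

int main()
{
    printf("order 10: %llu triangles\n", TriangleCount(10)); // ~21 million
    printf("order 20: %llu triangles\n", TriangleCount(20)); // ~22 trillion
    return 0;
}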

My definition of a "dynamic" algorithm is one that takes some sort of vertex coordinates as parameters and returns the proper height value at those coordinates. These coordinates could be longitude and latitude, polar coordinates, 3D coordinates, or anything else you can think of that consistently and uniquely identifies a vertex. Paired with an adaptive mesh algorithm such as the one that will be explained in Part 2 of this series, such a function means you would not need to store anything in memory or on disk except the mesh you need to draw for the current frame. The ideal algorithm would be initialized with a random seed and a number of defining parameters, allowing you to create a range of interesting planetary bodies from one algorithm.
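In code, the interface for a dynamic algorithm can be as small as a single function. Here is a minimal sketch; the structure and member names are my own, not the demo's:

// A minimal sketch of the "dynamic" interface described above: the planet is
// nothing but a deterministic function from a vertex position to a height.
struct PlanetaryFunction
{
    unsigned int m_nSeed;   // random seed that defines this particular body
    float m_fRadius;        // base radius of the body
    float m_fRoughness;     // one example of a shaping parameter

    // Given a point that uniquely identifies a vertex (here a 3D position),
    // return the height at that point. The same input must always return the
    // same output, so no mesh data needs to be stored between frames.
    float GetHeight(float x, float y, float z) const
    {
        // Placeholder: a real implementation would evaluate a noise function
        // seeded with m_nSeed, as discussed later in this article.
        return m_fRadius;
    }
};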

Unfortunately, coming up with a dynamic algorithm like this that works well is difficult. Coming up with one that's fast is even more difficult. You could, for instance, get creative and turn the fault-plane algorithm into a dynamic algorithm: instead of pregenerating a mesh, you could store the planes that cut through the sphere and use them to determine the proper height for any requested vertex. You will run into two problems doing this. The first is that you need a lot of planes to get detail on a full-size planet; what looks good from a distance would leave noticeable lines close up. That leads to the second problem, which is performance. If you need 100,000 fault planes to get good detail, you have to run every requested vertex through a loop of 100,000 iterations.
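Here is roughly what that dynamic version would look like, reusing the Vector3 and Dot helpers from the fault-plane sketch above; the loop over every stored plane is exactly where the performance problem shows up.

#include <vector>

struct FaultPlane { Vector3 normal; float delta; };

// Dynamic form of the fault-plane idea: the planes are stored instead of a
// mesh, and every height query walks the entire list. With tens of thousands
// of planes this cost per vertex quickly becomes prohibitive.
float GetFaultHeight(const Vector3 &v, const std::vector<FaultPlane> &planes)
{
    float height = 0.0f;
    for (size_t i = 0; i < planes.size(); ++i)          // O(number of planes)
        height += (Dot(v, planes[i].normal) > 0.0f) ? planes[i].delta
                                                    : -planes[i].delta;
    return height;
}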

One more thing to note before I move on is that it's also possible to combine static and dynamic algorithms any way you want. A dynamic algorithm can be seeded with height values created by hand or by a static algorithm. Your requirements will help determine how much of one or the other to use, but keep in mind the size limitations of predefined meshes and static algorithms. If you need to be able to generate an entire planet down to a very detailed level, a dynamic algorithm needs to take over somewhere to fill in the gaps.

My First Attempt: The Plasma Algorithm

If you've ever read anything about procedural terrains, you've probably read about the plasma algorithm. If you've done any coding, you probably started with this algorithm. The basic concept is to start with a single square and subdivide it recursively into smaller squares by adding new vertices to the mesh. The height of each new vertex is offset up or down by a random amount, and the maximum offset allowed is reduced with each level of recursion. This algorithm produces some fairly interesting mountainous terrain, but that's all it gives you. There is no flat land anywhere at all.
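Here is a simplified sketch of the idea on a flat (2^n + 1) x (2^n + 1) grid, using the common diamond-square form of midpoint displacement. The names and the exact averaging are mine, not the demo's; the four corner heights are assumed to be seeded before the call.

#include <cstdlib>
#include <vector>

static float RandomOffset(float range)
{
    return range * (2.0f * rand() / RAND_MAX - 1.0f);
}

// h is a size*size grid of heights indexed as h[y*size + x], where size is
// (2^n + 1) and the four corner values have already been set by the caller.
// Each pass fills in the midpoints of the current squares with the average of
// their neighbors plus a random offset, and the maximum offset shrinks every
// pass (roughness < 1), which is why the result is mountainous everywhere.
void Plasma(std::vector<float> &h, int size, float roughness)
{
    float range = 1.0f;
    for (int step = size - 1; step > 1; step /= 2, range *= roughness)
    {
        int half = step / 2;

        // Centers of each square: average of the four corners plus an offset.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                h[y*size + x] = 0.25f * (h[(y-half)*size + (x-half)] +
                                         h[(y-half)*size + (x+half)] +
                                         h[(y+half)*size + (x-half)] +
                                         h[(y+half)*size + (x+half)])
                                + RandomOffset(range);

        // Edge midpoints: average of the surrounding values plus an offset.
        for (int y = 0; y < size; y += half)
            for (int x = (y / half) % 2 ? 0 : half; x < size; x += step)
            {
                float sum = 0.0f; int n = 0;
                if (x >= half)       { sum += h[y*size + (x-half)]; ++n; }
                if (x + half < size) { sum += h[y*size + (x+half)]; ++n; }
                if (y >= half)       { sum += h[(y-half)*size + x]; ++n; }
                if (y + half < size) { sum += h[(y+half)*size + x]; ++n; }
                h[y*size + x] = sum / n + RandomOffset(range);
            }
    }
}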

Though the plasma algorithm is usually implemented as a static algorithm, my first dynamic planet-generating algorithm was based on it. Instead of passing any kind of spatial coordinates to a truly dynamic function, I gave each triangle a random seed to identify it. Using the same random seed every time I split a specific triangle guaranteed that the same vertex got the same height value every time, so I used a triangle's seed to generate the seeds for its children. As long as I maintained the seed for each triangle properly, the algorithm produced consistent results. Once I coded it and got it all working, my first dynamic Earth-sized planet came to life on the screen.
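The seeding trick is easier to see in code. Here is a rough sketch with illustrative names, not the demo's actual routines:

#include <cstdlib>

struct TriangleNode
{
    unsigned int nSeed;   // identifies this triangle
    // ... vertex indices, child pointers, and other mesh data
};

// Splitting a triangle always re-seeds the random generator with the parent's
// own seed, so the midpoint offsets - and the child seeds - come out the same
// no matter when or how often the triangle is split.
void SplitTriangle(const TriangleNode &parent, TriangleNode children[4],
                   float fMaxOffset)
{
    srand(parent.nSeed);                              // deterministic per triangle

    for (int i = 0; i < 4; ++i)
        children[i].nSeed = (unsigned int)rand();     // reproducible child seeds

    for (int i = 0; i < 3; ++i)
    {
        // Offset each new midpoint vertex up or down by a random amount that
        // is always the same for this particular split.
        float fOffset = fMaxOffset * (2.0f * rand() / RAND_MAX - 1.0f);
        (void)fOffset;   // the real code would displace midpoint vertex i here
    }
}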

I started the camera close to the ground, and I got very excited as I flew over shaded mountains and valleys. I moved the camera all over the planet to see how it looked. Then I set up a simple 1D texture to get a better feel for the height values I was seeing. Half of it was ocean blue to indicate the water level, and the other half went from yellow to green to white with increasing height; the texture coordinate of each vertex was determined by its height.
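If you want to try the same trick, a 1D color ramp like that can be built with a few OpenGL 1.x calls. The colors, resolution, and height mapping below are my own guesses, not the demo's exact values:

#include <GL/gl.h>

void BuildHeightTexture(GLuint nTextureID)
{
    const int nSize = 256;
    unsigned char pBuffer[nSize * 3];

    // Lower half ocean blue, upper half stepping from yellow to green to white.
    for (int i = 0; i < nSize; ++i)
    {
        unsigned char r, g, b;
        if (i < nSize / 2)          { r = 0;   g = 0;   b = 255; }  // ocean
        else if (i < nSize * 5 / 8) { r = 255; g = 255; b = 0;   }  // lowlands
        else if (i < nSize * 7 / 8) { r = 0;   g = 160; b = 0;   }  // hills
        else                        { r = 255; g = 255; b = 255; }  // peaks
        pBuffer[i*3+0] = r; pBuffer[i*3+1] = g; pBuffer[i*3+2] = b;
    }

    glBindTexture(GL_TEXTURE_1D, nTextureID);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, nSize, 0, GL_RGB,
                 GL_UNSIGNED_BYTE, pBuffer);

    // Each vertex then gets a texture coordinate based on its height, e.g.
    // s = 0.5f + 0.5f * (fHeight - fRadius) / fMaxTerrainHeight;
}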

My bubble burst as soon as I saw the planet in color. It was hard to tell whether it looked more like thousands of little mountainous islands or thousands of little lakes sandwiched between all the mountains. Needless to say, it looked nothing at all like natural terrain. I tried a number of things to salvage the plasma algorithm. It is a very fast algorithm, and I feared that more complex algorithms would run too slowly to be real-time. However, no matter what I tried with the plasma algorithm, my planet still looked terrible.

A True Dynamic Algorithm: Perlin Noise

Next I learned about Perlin noise. Perlin noise is a very popular noise function written by Ken Perlin. To put it simply, it generates smoothed random noise using a lattice of random numbers. The lattice is an n-dimensional array of floats, and the noise function takes an n-dimensional vector as input. Think of the lattice as an n-dimensional space and the input as a point in that space. Because arrays must be accessed using integer indexes and the point in space is defined using floats, the noise value is determined by interpolating between the lattice values located at the integer points around it. The number of integer points it has to interpolate between increases exponentially with each dimension added, so it's a good idea to minimize the number of dimensions you use. The range of noise values returned is between -1.0 and 1.0.
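Here is a stripped-down 2D sketch of the lattice-interpolation idea just described. Perlin's actual noise function stores gradient vectors at the lattice points and uses a smoother interpolation curve; this simplified version interpolates raw lattice values bilinearly just to show the structure and where the cost of each extra dimension comes from (2^n lattice corners per sample). All names here are mine.

#include <cmath>
#include <cstdlib>

const int LATTICE_SIZE = 256;
static float g_fLattice[LATTICE_SIZE][LATTICE_SIZE];   // random values in [-1, 1]

void InitLattice(unsigned int nSeed)
{
    srand(nSeed);
    for (int y = 0; y < LATTICE_SIZE; ++y)
        for (int x = 0; x < LATTICE_SIZE; ++x)
            g_fLattice[y][x] = 2.0f * rand() / RAND_MAX - 1.0f;
}

float Noise2D(float x, float y)
{
    int nX = (int)std::floor(x), nY = (int)std::floor(y);
    float fX = x - nX, fY = y - nY;                 // fractional position in the cell

    // Wrap the integer coordinates into the lattice (making the noise tile).
    int nX0 = nX & (LATTICE_SIZE-1), nX1 = (nX0+1) & (LATTICE_SIZE-1);
    int nY0 = nY & (LATTICE_SIZE-1), nY1 = (nY0+1) & (LATTICE_SIZE-1);

    // Bilinear interpolation between the four surrounding lattice values.
    float fTop    = g_fLattice[nY0][nX0] + fX * (g_fLattice[nY0][nX1] - g_fLattice[nY0][nX0]);
    float fBottom = g_fLattice[nY1][nX0] + fX * (g_fLattice[nY1][nX1] - g_fLattice[nY1][nX0]);
    return fTop + fY * (fBottom - fTop);            // result stays in [-1, 1]
}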


FIGURE 1. Samples of 2D Perlin noise with different ranges of X and Y values: 0.0-2.0 (left), 0.0-4.0 (center), and 0.0-8.0 (right).

Depending on the range of the numbers you pass it in each dimension, 2D Perlin noise output can look like anything from white noise on a TV screen to a very smooth random pattern. Because the same coordinates always generate the same output, decreasing the range of numbers you pass in is like zooming in on part of the noisy image, while increasing the range zooms out. If you look at the images in Figure 1, you can see how increasing the range zooms out (look at the top-left corner of each image). Note how the leftmost image looks like an elevation map.
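Given a noise function like the Noise2D sketch above, producing images like those in Figure 1 is only a matter of scaling the input coordinates; a rough example:

float Noise2D(float x, float y);   // from the sketch above

// Fill an nWidth x nHeight buffer by sampling the noise over [0, fRange) in
// both X and Y. A small fRange "zooms in" on a smooth patch; a large fRange
// "zooms out" toward white noise, as in Figure 1.
void SampleNoise(float fRange, int nWidth, int nHeight, float *pOutput)
{
    for (int y = 0; y < nHeight; ++y)
        for (int x = 0; x < nWidth; ++x)
            pOutput[y*nWidth + x] = Noise2D(fRange * x / nWidth,
                                            fRange * y / nHeight);
}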

Though Perlin noise is considered very fast for such a powerful noise function, it is still too slow to use in a game engine for most purposes. As mentioned above, each extra dimension added slows it down exponentially, but in many cases using extra dimensions can be very useful. Adding another spatial dimension to the image on the left in Figure 1 would give you something that resembled volumetric fog. You can also add a time dimension, achieving interesting animation effects by setting one of the coordinates to a particular time value.
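As a small illustration of the time-dimension idea, assuming a hypothetical 3D version of the noise function (Noise3D is a placeholder name, not an existing routine):

float Noise3D(float x, float y, float z);   // assumed 3D counterpart of Noise2D

// Keep x and y fixed to a point on the texture and let the third coordinate
// track time, so the pattern animates smoothly from frame to frame.
float AnimatedNoise(float x, float y, float fTimeSeconds)
{
    const float fSpeed = 0.25f;               // how quickly the pattern evolves
    return Noise3D(x, y, fTimeSeconds * fSpeed);
}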

Simple Perlin noise is occasionally used to generate bump maps, reflection maps, and textures. However, because of its somewhat smooth and uniform appearance, Perlin noise is more often used as a base for more complex functions. More information on Perlin noise and its uses can be found at http://www.noisemachine.com, which has a set of online course notes written by Ken Perlin.

