Gamasutra: The Art & Business of Making Games
Volumetric Rendering in Realtime

October 3, 2001 | Page 3 of 4

Camera in the Fog

There is one more neat trick to perform: allowing the camera to enter the fog. If the fog clipping plane and the geometry near clipping plane are aligned, the trivial case already works. At some point, parts of the fog volume will be culled against the near clipping plane. Since the front buffer is by default cleared with 0s (indicating that those pixels are at zero depth from the camera), by the time clipping of the front of the volume begins to occur, the pixels that would have been rendered by those polygons are already 0 anyway.

There is one more problem that crops up. To accommodate an object moving through the fog, two steps were added, one of which acts as the front side of the fog. But if the camera is inside the fog volume, a key assumption has been broken: not all of the fog volume is actually rendered, since part of it is clipped away. This means that step 4 in the algorithm now becomes a major problem, as it becomes the effective front side of the fog. The polygons of the fog volume can no longer replace the pixels set by the scene, since the fog volume polygons have been (at least partially) culled away.

The solution to this is simple. Step 4 was added specifically to allow objects that are only partially obscured by fog to render correctly, since any pixel rendered in step 4 is replaced in step 5 if it lies in the fog. If the camera is inside the fog, then every part of an object is at least partially obscured by fog, so step 4 should be disabled completely. The following is a complete and general algorithm for rendering uniform-density, convex fog hulls:

  • Clear the buffer(s).
  • Render the scene into an off-screen buffer A, encoding each pixel's w depth as its alpha value, with Z buffering enabled.
  • Render the backside of the fog volume into off-screen buffer A, encoding each pixel's w depth.
  • If the camera is not inside the fog, render the scene into an off-screen buffer B (or copy it from buffer A before step 3 takes place), using the same w-depth alpha encoding. Otherwise, skip this step.
  • Render the front side of the fog volume into off-screen buffer B with the same w alpha encoding. If step 4 was executed, the fog volume will be in front of those parts of the scene that are obscured by fog, and will replace them at those pixels. If step 4 was not executed, some of these polygons will have been culled away.
  • Subtract the two buffers in screen space, using the alpha difference to blend on a fog mask.
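As a sanity check, the per-pixel result of these steps can be sketched on the CPU. The helper below (a hypothetical name, not from the article) collapses the z-tested renders into `min()` operations and the final screen-space pass into a subtraction; all depths are w values for a single pixel covered by the fog volume:

```python
def fog_thickness(scene_w, front_w, back_w, camera_in_fog):
    """CPU sketch of the two-buffer fog algorithm for one pixel.

    scene_w  - w depth of the scene at this pixel
    front_w  - w depth of the fog volume's front face
    back_w   - w depth of the fog volume's back face
    """
    # Buffer A: scene depth, then the fog backside with z-test
    # (the nearer of the two survives).
    a = min(scene_w, back_w)
    # Buffer B: scene depth (step 4), then the fog front side
    # with z-test. If the camera is in the fog, step 4 is skipped
    # and the clipped front faces leave the cleared value of 0.
    b = 0.0 if camera_in_fog else min(scene_w, front_w)
    # Final pass: the buffer difference is the fogged distance.
    return max(a - b, 0.0)
```

Note how skipping step 4 leaves buffer B at its cleared value of 0, so the subtraction yields the full camera-to-back distance, which is exactly what an in-fog camera should see.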

Further Optimizations and Improvements

Clearly, this is a simple foundation for fog; there are numerous improvements and enhancements that can be made. Perhaps highest on the list is a precision issue. Most hardware allows only 8-bit alpha formats, and because so much depends on the w depth, 8 bits can be a real constraint. Imagine a typical application of volumetric fog: a large sheet of fog along the ground. No matter what function is used to map depth into fog, there remains a virtual far and near clipping plane for the fog. Expanding these planes means either less dense or less precise fog, while keeping them contracted means adjusting the fog clipping planes for each fog volume rendered.
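To see the constraint concretely, the world-space distance covered by one depth step is just the fog depth range divided by the number of representable levels. A small sketch (hypothetical helper name):

```python
def fog_depth_resolution(near, far, bits=8):
    # Depth is quantized into 2**bits levels between the virtual
    # fog near and far planes; each step covers this much world
    # distance, and thinner fog features than this are lost.
    return (far - near) / (2 ** bits - 1)
```

Stretching the fog planes from a 255-unit range to a 2550-unit range coarsens each step from 1 to 10 world units, which is the dense-versus-precise trade-off described above.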

On new and upcoming hardware, however, there is a trick with the pixel shaders. Why not keep some more bits of precision in one of the color channels, and use the pixel shader to perform a carry operation? At first glance it appears that 16-bit math can easily be accomplished on parts designed to operate at only 8. However, there is one nasty limiting factor: on a per-triangle basis, the color interpolators work at only 8 bits. Texture coordinates, on the other hand, typically operate at much higher precision, usually at least 16 bits. Although texture coordinates can be loaded into color registers, the lower bits of precision are lost. An alternative is to create a 1D texture filled with a step function, with each texel representing a higher-precision value embedded in the alpha and color channels. Unfortunately, the precision here is usually limited by the size of the texture.
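The carry trick is ordinary multi-word arithmetic: split the 16-bit depth across two 8-bit channels, and propagate a borrow when the low channels underflow during the subtraction. A CPU sketch of the idea (hypothetical helper names; a real shader would express this with per-channel blend or arithmetic ops):

```python
def encode16(w):
    # Split a 16-bit depth into (high, low) 8-bit channels,
    # e.g. alpha and one color channel.
    return (w >> 8) & 0xFF, w & 0xFF

def subtract16(back_w, front_w):
    # Subtract two split depths channel by channel, borrowing
    # from the high channel when the low channel underflows.
    bh, bl = encode16(back_w)
    fh, fl = encode16(front_w)
    lo = bl - fl
    hi = bh - fh
    if lo < 0:          # low channel underflowed:
        lo += 256       # wrap it around...
        hi -= 1         # ...and borrow from the high channel
    return hi * 256 + lo
```

The result matches a plain 16-bit subtraction, which is the whole point: two 8-bit channels plus one conditional borrow behave like a single 16-bit value.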

Once the issue of higher precision is addressed, it is possible to render concave volumes even on limited 8-bit hardware. This must be accomplished either by rendering a concave fog volume as a collection of convex parts, or by summing the multiple entry points of the fog and subtracting away the multiple exit points. Unfortunately, the high-precision trick will not work for the latter approach, since there is no way to both read and write the render target in the pixel shader. Although a system of swapping between multiple buffers carefully segmented to avoid overlap might work, the latter approach will probably not be feasible until hardware allows rendering into 16-bit formats (i.e., a 16-bit alpha format).
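The first approach, treating a concave volume as a set of convex pieces, just sums the thickness contributed by each piece along the view ray, with each entry/exit interval clipped against the scene depth. A sketch (hypothetical helper; intervals are per-pixel (front_w, back_w) pairs, one per convex piece):

```python
def concave_fog_thickness(intervals, scene_w):
    # Sum the fogged distance over every convex piece the view
    # ray passes through, clipping each piece's entry and exit
    # depths against the scene geometry at this pixel.
    total = 0.0
    for front_w, back_w in intervals:
        total += max(0.0, min(back_w, scene_w) - min(front_w, scene_w))
    return total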

Finally, there are many artistic enhancements that can be made to this kind of volumetric effect. To make volumetric light, for instance, the alpha blend mode can be changed from blend to additive, thereby adding light to the scene. Decay constants can also be modeled this way, to accomplish some surface variations of fog density.
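The difference between the two blend modes can be shown on a single color channel (hypothetical helpers; `a` is the fog-mask value): an alpha blend lets the fog occlude the scene, while an additive blend only ever brightens it.

```python
def alpha_blend(src, dst, a):
    # Standard blend: the fog colour occludes the scene colour
    # in proportion to the fog mask.
    return src * a + dst * (1.0 - a)

def additive_blend(src, dst, a):
    # Additive mode: the volume contributes light on top of the
    # scene, clamped to the maximum channel value.
    return min(src * a + dst, 1.0)
```

With a full mask (a = 1.0), alpha blending replaces the scene with the fog colour, while additive blending still keeps the scene's contribution underneath, which is why it reads as light rather than as fog.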

Additionally, fog volumes can be fitted with textures that operate much like bump maps, varying the height of the fog at a point without changing the actual geometry. To create an animated set of ripples in fog, for instance, one can take a ripple depth texture, move it along the surface of the fog volume, and add it to the w depth. Other texture tricks are possible as well: noise environment maps can be coupled to fog volumes to allow primitive dust effects.
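As a rough sketch of the ripple idea, a scrolled procedural function can stand in for the ripple depth texture; its sampled value is simply added to the stored back-face w depth. All names and the sine pattern here are illustrative assumptions, not the article's implementation:

```python
import math

def rippled_back_depth(base_back_w, u, v, t, amp=0.25):
    # "Sample" an animated ripple texture at surface coordinates
    # (u, v), scrolled by time t, and perturb the fog volume's
    # back-face w depth by the sampled height.
    ripple = amp * math.sin(10.0 * (u + t)) * math.sin(10.0 * (v + t))
    return base_back_w + ripple
```

Because the perturbation enters only through the depth encoding, the fog surface appears to ripple even though the hull geometry never moves.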

And of course, it can be quite fun to draw the fog mask without actually drawing the object, creating an invisible object moving through the scene.
