
# Volumetric Rendering in Realtime

October 3, 2001 | Page 2 of 4

## A Simple Case

The first case to consider is rendering a convex volumetric fog with no objects inside it and with the camera outside the volume. The algorithm can easily be expanded to handle objects (or parts of the scene) inside the fog, the camera inside the fog, and concave volumes.

Computing this term on a per-pixel basis involves several passes. For any view there are two distances of concern: the distance at which the ray enters the fog, and the distance at which it exits.
The point where a ray enters the fog volume is found by rendering the fog volume and reading the w value. Finding the point where the ray exits the fog volume is not difficult either. Polygons not facing the camera are normally culled away - but since any surface not facing the camera must be the back side of the fog volume, reversing the culling order and drawing the fog again renders the inside of the fog volume. With convex volumes, there will never be a case where the ray passes in and out of the fog volume twice.

To get the total amount of fog in the scene, the buffer containing the front-side w values of the fog volume is subtracted from the buffer containing the back-side w values. But first, how can per-pixel operations be performed on w values? And how can the result be used for anything? Using a vertex shader, the w value is encoded into the alpha channel, thereby loading the w depth of every pixel into the alpha channel of the render target. After the subtraction, the remaining alpha value represents the amount of fog at that pixel.

*Front side, back side, and the difference (with contrast and brightness increased)*

So the algorithm for this simple case is:

• Render the back side of the fog volume into an off-screen buffer, encoding each pixel's w depth as its alpha value.
• Render the front side of the fog volume with a similar encoding, subtracting this new alpha from the alpha currently in the off-screen buffer.
• Use the alpha values in this buffer to blend on a fog mask.
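The arithmetic behind these steps can be sketched numerically. The following Python snippet is a toy per-pixel model of the two passes and the final blend; the function names and the `FOG_SCALE` density constant are illustrative assumptions, not values from the article.

```python
# Toy per-pixel model of the simple convex-fog case. The two render
# passes leave (back-side w) - (front-side w) in the alpha channel;
# here that subtraction and the final blend are done directly.
# FOG_SCALE is an assumed density constant, not from the article.

FOG_SCALE = 0.25

def fog_alpha(w_front, w_back, scale=FOG_SCALE):
    """Fog thickness along the ray, mapped to a [0, 1] blend factor."""
    thickness = w_back - w_front  # the buffer subtraction
    return max(0.0, min(1.0, thickness * scale))

def blend(scene_rgb, fog_rgb, alpha):
    """Blend the fog mask over the scene using the computed alpha."""
    return tuple(s * (1.0 - alpha) + f * alpha
                 for s, f in zip(scene_rgb, fog_rgb))

# A ray that enters the fog at w = 2.0 and exits at w = 5.0
# passes through 3.0 units of fog.
alpha = fog_alpha(2.0, 5.0)
pixel = blend((0.1, 0.2, 0.3), (0.8, 0.8, 0.8), alpha)
```

In the real algorithm both values live in the alpha channels of render targets and the subtraction happens per pixel on the GPU; the model above only illustrates the math at one pixel.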

## Adding another variable: Objects in the fog
Rendering fog with no objects inside it is not that interesting, so the above algorithm needs to be expanded to allow objects to pass in and out of the fog. This turns out to be rather straightforward. If the above fog algorithm were applied without taking the objects in the middle into consideration, the fog would be incorrect.

*Incorrectly blended fog on the left, correct fog on the right.*

The reason this is incorrect is obvious: the actual volume of fog between the object and the camera has been computed incorrectly. Because there is an object inside the fog, the back side of the fog is no longer the polygonal hull that was modeled, but the front side of the object. The fog distance needs to be computed using the front side of the object as the back end of the fog.

This is accomplished by rendering the scene (that is, any objects in the fog) using the same w-alpha encoding. If a pixel of an object lies in front of the fog's back end, it replaces the fog's back end with its own w value, thereby becoming the virtual back part of the fog.

The algorithm changes to:

• Clear the buffer(s).
• Render the scene (or rather, any object which might be in the fog) into an off-screen buffer, encoding each pixel's w depth as its alpha value. Z buffering needs to be enabled.
• Render the back side of the polygonal hull into the same buffer, keeping Z buffering enabled. Thus, if a pixel of an object is in front of the back side of the fog, it will be used as the back end of the fog instead.
• Render the front side of the fog, subtracting this new w alpha from the alpha currently in the off-screen buffer.
• Use the alpha values in this buffer to blend on a fog mask.
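The z-buffered replacement in the steps above can be sketched in the same toy model. Here `object_w` is `None` for pixels no object covers; the names are illustrative.

```python
# Toy model of how z buffering picks the fog's virtual back end:
# the nearest surface wins the depth test, so an object pixel in
# front of the fog's back side replaces it. object_w is None where
# no object covers the pixel. Names are illustrative.

def back_end_w(object_w, fog_back_w):
    """w depth that ends the fog at this pixel after both passes."""
    if object_w is None:
        return fog_back_w
    return min(object_w, fog_back_w)  # z test keeps the nearest w

def fog_thickness(fog_front_w, object_w, fog_back_w):
    """Fog along the ray once the object is accounted for."""
    return max(0.0, back_end_w(object_w, fog_back_w) - fog_front_w)

# Fog volume spans w = 2.0 .. 5.0. An object at w = 4.0 inside it
# leaves only 2.0 units of fog in front of the object; an object
# behind the fog (w = 6.0) fails the z test and changes nothing.
```

Note that an object behind the fog volume loses the z test against the fog's back side, so the original fog depth is preserved.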

Unfortunately, the above approach has one drawback. If an object is partially obscured by fog, the part that is not in the fog is still rendered into the off-screen buffer, effectively becoming the back side of the fog. The distance from those pixels to the camera would then be counted as fog depth - even though there is no fog in front of them.

Although this could be corrected with the stencil buffer, another approach is to redraw (or copy) the scene in the front-side pass as well - using the scene as the fog's front as well as its back. Objects partially obscured by fog then render correctly: the parts not in fog produce a fog depth of zero. The new approach looks like this:

1. Clear the buffer(s).
2. Render the scene into an off-screen buffer A, encoding each pixel's w depth as its alpha value, with Z buffering enabled.
3. Render the back side of the fog into off-screen buffer A, encoding each pixel's w depth.
4. Render the scene into an off-screen buffer B (or copy it from buffer A before step 3 takes place), using the same w depth alpha encoding.
5. Render the front side of the fog volume into off-screen buffer B with w alpha encoding. Since the fog volume is in front of the parts of the objects that are obscured by fog, it will replace them at those pixels.
6. Subtract the two buffers in screen space, using the resulting alpha value to blend on a fog mask.
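Under the same toy model, the two-buffer variant reduces to a pair of z tests and a subtraction. The `nearest` helper stands in for the hardware z test; all names are illustrative.

```python
# Toy model of the two-buffer variant. Buffer A ends up holding the
# nearest of (object, fog back side); buffer B the nearest of
# (object, fog front side). Their difference is the fog depth.
# For a pixel where the object sits entirely in front of the fog,
# both buffers hold the object's w and the difference is zero.

def nearest(*ws):
    """Z test over the surfaces rendered so far (None = not drawn)."""
    return min(w for w in ws if w is not None)

def fog_amount(object_w, fog_front_w, fog_back_w):
    buffer_a = nearest(object_w, fog_back_w)   # steps 2-3: scene, then fog back
    buffer_b = nearest(object_w, fog_front_w)  # steps 4-5: scene, then fog front
    return max(0.0, buffer_a - buffer_b)       # step 6: screen-space subtract

# Object in front of the fog: both buffers hold the object's w, so
# no fog is counted. Object inside the fog: fog reaches only as far
# as the object. No object: full front-to-back fog depth.
```

This makes the fix for partially obscured objects visible: any pixel where the object is nearer than the fog's front side yields identical values in both buffers and therefore zero fog.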

