Volumetric Rendering in Realtime
October 3, 2001
Most current implementations of fog in games use layered alpha images. This technique, however, bears little resemblance to how fog actually composites in real life, since the density of fog between the viewer and the scene is not modeled in any way.
A Simple Model of Fog
The first and most important assumption, common to many real-time fog implementations, is that the incoming light at each particle is constant. That is, one particle of fog located at one end of a fog volume and another particle at the other end receive the same amount of incoming light.
The next, related assumption is that each particle of fog emits the same amount of light, in all directions. This, of course, implies that the fog's density remains fixed. Together, these two assumptions mean that, given a spherical volume of fog, equal light is emitted in all directions.
Using these assumptions, a model of fog can be defined. If a ray is cast back from a pinhole camera through the scene, the amount of fog that contributes to the color of that ray is the sum of all the light emitted along the ray's path. In other words, the amount of contributing light is proportional to the area of fog between the camera and the point in the scene. The light of the incoming ray, however, is partially absorbed by the fog itself, reducing its intensity.
So, the proposed model of fog is (done for each color channel):
Intensity of Pixel = (1 - L_s * A_s) * I_r + L_e * A_e * I_f

L_s = Amount of light absorbed by fog
L_e = Amount of light emitted by fog
A_s = Area of fog absorbing light
A_e = Area of fog emitting light
I_r = Intensity of the light coming from the scene
I_f = Intensity of the light coming from the fog
Since the area of fog emitting light is the same as the area of fog absorbing light, and the assumption is made that the amount of light emitted is the same percentage as the amount absorbed, this equation simplifies to:
Intensity of Pixel = (1 - L * A) * I_r + L * A * I_f

L = Amount of light absorbed/emitted by fog (fog density)
A = Area of fog
I_r = Intensity of the light coming from the scene
I_f = Intensity of the light coming from the fog
If this is a per-pixel operation, then the incoming light is already computed by rendering the scene as it would normally appear. An intuitive way of thinking about this problem is: the amount a pixel shifts toward the fog color is proportional to the amount of fog between the camera and the pixel. This, of course, is the same model that is used in distance fog. Thus, the problem is reduced to determining the amount of fog between the camera and the pixel being rendered.
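To make this concrete, here is a minimal C++ sketch of the blend, assuming the fog amount L * A has already been computed and clamped to [0, 1] for the pixel; the Color type and BlendFog name are illustrative, not from any particular API.

    struct Color { float r, g, b; };

    // Blend the scene color toward the fog color.
    //   scene     - I_r, the light arriving from the scene
    //   fog       - I_f, the light emitted by the fog
    //   fogAmount - L * A, fog density times fog area, clamped to [0, 1]
    Color BlendFog(const Color& scene, const Color& fog, float fogAmount)
    {
        // Intensity of Pixel = (1 - L*A) * I_r + L*A * I_f, per channel.
        Color out;
        out.r = (1.0f - fogAmount) * scene.r + fogAmount * fog.r;
        out.g = (1.0f - fogAmount) * scene.g + fogAmount * fog.g;
        out.b = (1.0f - fogAmount) * scene.b + fogAmount * fog.b;
        return out;
    }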
Determining Fog Depth at Each Pixel Location
Standard depth fog uses the Z (or w) value of each pixel as the amount of fog. This works well, but limits the model to omnipresent fog. That is, the camera is always in fog, and there is (save for a definable sphere around the camera) an even amount of fog at all points in the scene.
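For comparison, a minimal sketch of this standard linear depth fog, assuming tunable fogStart and fogEnd distances (the names are illustrative):

    // Standard linear depth fog: the fog amount grows with the pixel's
    // w depth, reaching full fog at fogEnd. The clear region near the
    // camera corresponds to depths below fogStart.
    float DepthFogAmount(float w, float fogStart, float fogEnd)
    {
        float amount = (w - fogStart) / (fogEnd - fogStart);
        if (amount < 0.0f) amount = 0.0f;   // no fog before fogStart
        if (amount > 1.0f) amount = 1.0f;   // fully fogged past fogEnd
        return amount;
    }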
Of course, this does not work well (or at all) for effects such as ground fog, and the technique cannot be used for interesting volumetric lighting.
An alternative way to create fog is to model a polygonal hull that represents the fog, and to compute the area of fog for each pixel rendered in the scene. At first glance, this seems impossibly complex: computing the volume of fog along each ray typically involves complex integration.
However, the shaft of fog along a ray can be closely approximated by subtracting the w depth at the point where the ray enters the fog volume from the w depth at the point where it leaves the volume, and multiplying by some constant. (Mathematically, this is a simple application of a form of Stokes' theorem, where all but two of the terms cancel because the flux is constant in the interior.)
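A minimal sketch of this approximation, assuming the w depths where the ray enters and leaves a convex fog hull are already known for the pixel (the names are illustrative):

    // Approximate the fog along a ray through a convex fog hull as
    // (exit depth - entry depth) * density constant.
    float VolumeFogAmount(float wEnter, float wExit, float densityScale)
    {
        float amount = (wExit - wEnter) * densityScale;
        if (amount < 0.0f) amount = 0.0f;   // ray missed or grazed the hull
        if (amount > 1.0f) amount = 1.0f;   // saturate to fully fogged
        return amount;
    }

In practice, the entry and exit depths can be obtained by rendering the fog hull's front and back faces and differencing the interpolated w values per pixel.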
Diagram 1: The amount of fog along a pixel's ray, computed as the difference between the point where the ray enters the volume and the point where it exits.