Most current implementations of fog in games use layered alpha images. This technique, however, bears little resemblance to how fog actually composites in real life, since the density of the fog between the viewer and the scene is not modeled in any way.
A Simple Model of Fog
So, the proposed model of fog is (done for each color channel):
Intensity of Pixel = (1 - L_{s}*A_{s})*(I_{r}) + L_{e}*A_{e}*I_{f}.
L_{s} = Amount of Light absorbed by fog.
L_{e} = Amount of Light emitted by fog.
A_{s} = Area of fog absorbing light
A_{e} = Area of fog emitting light
I_{r} = Intensity of the light coming from the scene.
I_{f} = Intensity of the light coming from the fog.
Since the area of fog emitting light is the same as the area of fog absorbing light, and the assumption is made that the fraction of light emitted equals the fraction absorbed, this equation simplifies to:
Intensity of Pixel = (1 - A*L)*(I_{r}) + L*A*I_{f}.
L = Amount of light absorbed/emitted by fog (fog density)
A = Area of fog.
I_{r} = Intensity of the light coming from the scene.
I_{f} = Intensity of the light coming from the fog.
If this is a per pixel operation, then the incoming light is already computed by rendering the scene as it would normally appear. An analytical way of thinking of this problem is: The amount a pixel changes to the fog color is proportional to the amount of fog between the camera and the pixel. This, of course, is the same model that is used in distance fog. Thus, the problem is reduced to determining the amount of fog between the camera and the pixel being rendered.
Determining Fog Depth at each pixel location
Standard depth fog uses the Z (or w) value as the density of fog. This
works well, but limits the model to omnipresent fog. That is, the camera
is always in fog, and there is (save for a definable sphere around the
camera) an even amount of fog at all points in the scene.
Of course, this does not work well (or at all) for effects such as ground
fog, and this technique cannot be used for interesting volumetric lighting.
An alternative way to create fog is to model a polygonal hull that represents
the fog, and to compute the amount of fog for each pixel rendered in the
scene. At first glance, this seems impossibly complex: computing the volume
of fog along each ray typically involves complex integration.
However, the shaft of fog along a ray can be closely approximated by
subtracting the w depth where the ray enters the fog volume from the w
depth of the point where it leaves the volume, and multiplying by some
constant. (Mathematically, this is a simple application of a form of
Stokes' theorem, where all but two of the terms cancel, since the flux
is constant in the interior.)
Diagram 1: The amount of fog along a pixel is the difference between the point where a ray enters the volume and the point where it exits.