Volumetric Rendering in Realtime


October 3, 2001

Most current implementations of fog in games use layered alpha images. This technique, however, bears little resemblance to how fog actually composites in real life, since the density of the fog between the viewer and the scene is not modeled in any way.

A Simple Model of Fog

In order to create fog effects in a game, it is first necessary to create an analytical model that bears some resemblance to the mechanics of real fog. Fog is a cloud of water vapor consisting of millions of tiny particles floating in space. Incoming light is scattered by these particles and emitted back into the scene. This model is too complex to render in real time, so a few assumptions and restrictions must be made. The following model is similar to the one used in depth fog.

The first and most important assumption, common to many real-time fog implementations, is that the incoming light at each particle is constant. That is, a particle of fog located at one end of a fog volume and a particle at the other end receive the same amount of incoming light.
The next, related assumption is that each particle of fog emits the same amount of light, in all directions. This, of course, implies that the fog's density remains fixed. Together, these two assumptions mean that, given a spherical volume of fog, equal light is emitted in all directions.

Using these assumptions, a model of fog can be defined. If a ray is cast back from a pinhole camera through the scene, the amount of light the fog contributes to that ray is the sum of all the light emitted along the ray's path. In other words, the contributing light is proportional to the amount of fog between the camera and the point in the scene. The light of the incoming ray, however, is partially absorbed by the fog itself, reducing its intensity.

So, the proposed model of fog is (done for each color channel):

 

Intensity of Pixel = (1 - Ls*As)*Ir + Le*Ae*If

Ls = Amount of light absorbed by the fog.
Le = Amount of light emitted by the fog.
As = Area of fog absorbing light.
Ae = Area of fog emitting light.
Ir = Intensity of the light coming from the scene.
If = Intensity of the light coming from the fog.

Since the area of fog emitting light is the same as the area of fog absorbing light, and the assumption is made that the amount of light emitted is the same percentage as the amount absorbed, this equation simplifies to:

 

Intensity of Pixel = (1 - L*A)*Ir + L*A*If

L = Amount of light absorbed/emitted by the fog (fog density).
A = Area of fog.
Ir = Intensity of the light coming from the scene.
If = Intensity of the light coming from the fog.

If this is a per-pixel operation, then the incoming light has already been computed by rendering the scene as it would normally appear. Another way of thinking about the problem is: the amount a pixel shifts toward the fog color is proportional to the amount of fog between the camera and the pixel. This, of course, is the same model that is used in distance fog. Thus, the problem is reduced to determining the amount of fog between the camera and the pixel being rendered.
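
In code, this per-pixel blend is just a linear interpolation between the scene color and the fog color. The C++ sketch below shows one way it might look; the names (FogPixel, scene, fogColor, fogAmount) are illustrative, and fogAmount stands for the L*A product in the equation above:

// Blend one color channel toward the fog color.
// fogAmount corresponds to L*A above, clamped to [0, 1].
float FogPixel(float scene, float fogColor, float fogAmount)
{
    if (fogAmount < 0.0f) fogAmount = 0.0f;
    if (fogAmount > 1.0f) fogAmount = 1.0f;
    return (1.0f - fogAmount) * scene + fogAmount * fogColor;
}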


Determining Fog Depth at Each Pixel Location

Standard depth fog uses the z (or w) value as the fog density. This works well, but limits the model to omnipresent fog. That is, the camera is always in fog, and there is (save for a definable sphere around the camera) an even amount of fog at all points in the scene.
Of course, this does not work well (or at all) for effects such as ground fog, and the technique cannot be used for interesting volumetric lighting.
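
For comparison, standard depth fog typically reduces to a factor like the one sketched below. The linear falloff and the parameter names (fogStart, fogEnd) are assumptions for illustration, not taken from the article:

// Classic linear depth fog: fog density grows with distance from the camera.
// z is the pixel's depth; fogStart/fogEnd define where fog begins and saturates.
float DepthFogAmount(float z, float fogStart, float fogEnd)
{
    float f = (z - fogStart) / (fogEnd - fogStart);
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return f; // feed into the per-pixel blend above
}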

An alternative way to create fog is to model a polygonal hull that represents the fog, and to compute the area of fog for each pixel rendered in the scene. At first glance, this seems impossibly complex: computing the volume along each ray typically involves complex integration.

However, the shaft of fog along a ray can be closely approximated by subtracting the w depth at which the ray enters the fog volume from the w depth of the point where it leaves the volume, and multiplying by some constant. (Mathematically, this is a simple application of a form of Stokes' theorem, where all but two of the terms cancel because the flux is constant in the interior.)


Diagram 1: The amount of fog along a ray, measured as the difference between the point where the ray enters the volume and the point where it exits.
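
A minimal sketch of this depth-difference idea is shown below, assuming the entry and exit w depths have already been obtained (for example, by rasterizing the front and back faces of the fog hull); entryW, exitW, and density are illustrative names:

// Approximate the fog along a ray through a convex fog hull:
// thickness is the exit depth minus the entry depth,
// scaled by a constant fog density and clamped to [0, 1].
float FogVolumeAmount(float entryW, float exitW, float density)
{
    float thickness = exitW - entryW;            // w-depth difference
    if (thickness < 0.0f) thickness = 0.0f;      // ray missed or grazed the volume
    float amount = thickness * density;
    return (amount > 1.0f) ? 1.0f : amount;      // feed into the per-pixel blend
}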



