

By Lasse Staff Jensen and Robert Golias
September 26, 2001







Deep-Water Animation and Rendering


Most of the visual effects of water are due to reflections and refractions (a more detailed description can be found, for example, in [2] and [16]). When a ray hits the water surface, part of it reflects back into the atmosphere (potentially hitting some object and causing reflective caustics, hitting the water at another place, or reaching the camera), and part of it is transmitted into the water volume, where it scatters (causing god rays), hits objects (causing caustics) or passes back into the atmosphere. Completely correct lighting would thus require sophisticated global illumination equations and wouldn't even be close to realtime. We simplify this by taking only first-order rays into account.

Reflection
The equation for reflection is well known. For an eye vector E (i.e. the ray from the given point to the eye) and the surface normal N, the reflected ray is:

R = 2 (N·E) N - E
This ray is then used for a lookup in a cube-map containing the environment (for the ocean, typically only the sky).
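As a minimal sketch (Python/NumPy, with our own function names), the reflection lookup vector can be computed like this:

```python
import numpy as np

def reflect(E, N):
    """Reflect the eye vector about the surface normal.

    E: unit vector from the surface point toward the eye.
    N: unit surface normal.
    Returns R = 2(N.E)N - E, the direction used for the cube-map lookup.
    """
    return 2.0 * np.dot(N, E) * N - E

# Looking straight down at the surface, the reflection points back at the eye.
N = np.array([0.0, 1.0, 0.0])
E = np.array([0.0, 1.0, 0.0])
print(reflect(E, N))  # -> [0. 1. 0.]
```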

While the cube-map is ideal for reflecting the environment in the distance, it's not very suitable for local reflections (for example a boat floating on the water). For these we use a modification of the basic algorithm for reflections on flat surfaces (described for example in [14]). We set up the view matrix so that it shows the scene as it would be reflected from a flat plane placed at height zero, and render the whole scene into a texture. If we simply used projective textures now, we could render the water surface roughly reflecting the scene above it. To improve the effect, we assume that the whole scene lies on a plane positioned slightly above the water surface. We intersect the reflected ray with this plane, then trace the ray from this intersection point to the reflected camera; the resulting point is fed into the projective texture computations.
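A sketch of the plane trick for local reflections (Python/NumPy; the plane height h is a tunable assumption, not a value from the article):

```python
import numpy as np

def local_reflection_point(P, R, h=0.5):
    """Intersect the reflected ray (origin P, direction R) with the
    assumed scene plane y = h placed slightly above the water.
    The returned point is what gets fed into the projective
    texture computations."""
    t = (h - P[1]) / R[1]   # parametric distance along R to the plane
    return P + t * R

# A ray reflected straight up from the origin hits the plane at height h.
print(local_reflection_point(np.array([0.0, 0.0, 0.0]),
                             np.array([0.0, 1.0, 0.0])))
```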

Note that when rendering to the texture, we set the camera's FOV (field of view) slightly higher than for the normal camera, because the water surface can reflect more of the scene than a flat plane would.

Refraction
We will use Snell's Law to calculate the refracted ray that we need both for the refracted texture lookup and for the caustics calculations. Snell's Law is simply:

na sin(θi) = nb sin(θr)   (Equation 3-1)
Where θi is the angle of incidence (i.e. the angle between the view vector and the surface normal), θr is the refracted angle (i.e. the angle between the refracted ray and the negated normal), and na and nb are the indices of refraction of the two materials in question. Setting the indices of refraction for air and water to 1 and 1.333 respectively, we can write Equation 3-1 as:

sin(θr) = sin(θi) / 1.333   (Equation 3-2)
While this works perfectly in 2D, using this equation directly in 3D would be too cumbersome. When using vectors, it can be shown that the refracted ray is described by:

T = -(na/nb) E + ( (na/nb)(N·E) ∓ sqrt( 1 - (na/nb)^2 (1 - (N·E)^2) ) ) N

Here the upper (minus) sign is used when 0 < N·E. For the derivation of this formula, see [15]. With this vector, we are now ready to render the refraction visible on the water surface. For the global underwater environment we again use a cube-map. For local refractions we use an algorithm very similar to that used for reflections, with only two differences: the scene is rendered into the texture normally, and the plane we're using for perturbing the texture coordinates is placed below the water surface.
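A hedged sketch of the vector refraction (Python/NumPy; the sign conventions are ours and assume the viewer is above the surface, i.e. 0 < N·E):

```python
import numpy as np

def refract(E, N, na=1.0, nb=1.333):
    """Refracted ray for the eye vector E (surface point -> eye) and
    unit normal N, via Snell's law in vector form."""
    eta = na / nb                         # ratio of refraction indices
    c = np.dot(N, E)                      # cosine of the incidence angle
    k = 1.0 - eta * eta * (1.0 - c * c)
    if k < 0.0:                           # total internal reflection
        return None
    return -eta * E + (eta * c - np.sqrt(k)) * N

# Looking straight down (E = N), the ray continues straight down.
N = np.array([0.0, 1.0, 0.0])
T = refract(np.array([0.0, 1.0, 0.0]), N)  # T is the unit vector (0, -1, 0)
```

The refracted vector stays unit-length, and its horizontal component equals sin(θi)/1.333, exactly as Equation 3-2 requires.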

Approximating the Fresnel Term
One of the most important aspects of rendering water realistically is the Fresnel term, which defines the weight for blending between reflection and refraction according to the incoming light's angle and the indices of refraction of the materials involved. Without it, one typically gets a very "plastic" look. From [1] we have:

F = 1/2 · ( (g - c)^2 / (g + c)^2 ) · ( 1 + ( c(g + c) - 1 )^2 / ( c(g - c) + 1 )^2 ), where c = cos(a) and g^2 = (nb/na)^2 + c^2 - 1   (Equation 3-3)
Here a is the angle between the incoming light and the surface normal, and na and nb are the coefficients from Snell's law (Equation 3-1). Since we use an index of refraction of 1.333, the result depends only on a, so it is possible to precalculate it and store it in a one-dimensional texture [4]. Another possibility is to approximate Equation 3-3 with a simpler function that can be evaluated directly on the CPU, or on the GPU using vertex- and pixel-shaders. The implementation of [5] approximates it simply with a linear function, which we didn't find adequate. Instead, by experimentation, we found that the reciprocal of various powers gives a very good approximation.

In Figure 3-1 we can see the error-plot of a few different powers, and in Figure 3-2 we see our chosen power compared against Equation 3-3.
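The exact term and the reciprocal-power idea can be compared directly (Python; the exponent 8 below is an illustrative choice, not necessarily the power chosen in Figure 3-2):

```python
import math

def fresnel_exact(a, na=1.0, nb=1.333):
    """Unpolarized Fresnel reflectance for incidence angle a (radians),
    in the form with c = cos(a) and g^2 = (nb/na)^2 + c^2 - 1."""
    c = math.cos(a)
    g = math.sqrt((nb / na) ** 2 + c * c - 1.0)
    t1 = ((g - c) / (g + c)) ** 2
    t2 = 1.0 + ((c * (g + c) - 1.0) / (c * (g - c) + 1.0)) ** 2
    return 0.5 * t1 * t2

def fresnel_approx(a, power=8):
    """Reciprocal-power approximation 1 / (1 + cos a)^power."""
    return 1.0 / (1.0 + math.cos(a)) ** power

print(round(fresnel_exact(0.0), 4))                 # -> 0.0204 (about 2% at normal incidence)
print(round(fresnel_exact(math.pi / 2 - 1e-6), 2))  # -> 1.0 (full reflection at grazing angles)
```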


Color of Water
In chapter 3.1.2 we described how to render refractions on the water surface. It should however be noted that for deep water only local refractions should be rendered, since one cannot see the sea bottom or any other deeply placed objects (and even the local refractions should be rendered with some kind of fogging). The water itself, however, has a colour that depends on the incident ray direction, the viewing direction and the properties of the water itself. To account for this effect we take the equations presented in [16], which describe light scattering and absorption in water, and modify them as follows. If we don't take any waves into account (i.e. we treat the water surface as a flat plane) and ignore effects like god rays, we obtain closed formulas for the water colour depending only on the viewing angle. This colour is precalculated for all directions and stored in a cube-map, which is used in exactly the same way as the cube-map for the refracted environment.
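The closed formulas from [16] are not reproduced here; purely as an illustration of the idea — a colour that depends only on the view direction, built from extinction and in-scattering — a sketch with invented constants could look like this:

```python
import math

# Illustrative constants only: red is absorbed fastest, blue slowest,
# and the water in-scatters a greenish-blue colour.
EXTINCTION = (0.45, 0.09, 0.06)   # per-channel extinction coefficients
SCATTER = (0.0, 0.3, 0.4)         # in-scattered colour

def water_colour(view_y):
    """Colour for a view direction whose vertical component is view_y in
    [-1, 0): -1 looks straight down, values near 0 graze the surface."""
    path = 1.0 / max(-view_y, 1e-3)   # relative underwater path length
    return tuple(s * (1.0 - math.exp(-e * path))
                 for s, e in zip(SCATTER, EXTINCTION))
```

Longer underwater paths (grazing angles) in-scatter more light, which reproduces the darker-when-looking-down, brighter-at-the-waves behaviour of the precalculated cube-map.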

Thanks to this we get darker blue water when looking into the depths and a brighter greenish colour when looking at the waves, as shown in Figure 3-3.

Using Bump-Mapping to Reduce Geometry
In addition to using traditional Level-Of-Detail (LOD) methods for reducing our dense mesh, we can place the highest frequencies from the FFT directly into a bump-map. With the per-pixel bump-mapping capabilities of new hardware, one can render with an extremely coarse grid size and still maintain a high image quality, as shown in Figure 3-4, with its wireframe shown in Figure 3-5.

Figure 3-4. Shallow water rendered with a real-time updated bump-map. Due to the refraction one can see contours of the mountain below.


Figure 3-5. Wireframe of the mesh used to render the image in Figure 3-4. Please note that the crossing lines are due to the degenerate triangle strips.

Caustics

Caustics are beautiful, sinuous patterns of shifting light caused by sunlight refracted at the specular water surface. Caustics are a typical indirect lighting effect and are generally very hard to do in realtime. Luckily we can simplify the problem by considering only first-order rays (i.e. only one specular-diffuse transmission) and by assuming the receiving diffuse surface lies at a constant depth. Given these visually acceptable constraints, we use the light beam-tracing scheme described by Watt & Watt [1]. For each specular triangle (i.e. our water surface) we create a light beam by calculating refracted rays for each vertex using Snell's law (Equation 3-1), with the vertex's normal (Nv) and the light vector (i.e. the vector from the sun to the vertex) (L) as arguments. These light beams are then intersected with the xz-plane (our sea bottom) at a given constant y-depth. See Figure 3-6 for an illustration of this method. Each of these beams will then diverge or converge on the plane, so we need to describe their intensity. In [1] the following is used:

I = (N · -L) · (a_s / a_c)
Figure 3-6. Four sample triangles for caustics computation.

Where N is the normal of the triangle, L is as defined earlier, a_s is the area of the specular surface (i.e. the triangle at the water surface) and a_c is the area of the caustic surface (i.e. the triangle after intersection with the xz-plane). Since the entire water surface is refracted as light beams, we can simply create one huge degenerate triangle-strip for the caustic mesh, and update the positions and intensities of this mesh's vertices as described.
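One beam of this scheme can be sketched as follows (Python/NumPy; the vector refraction and plane intersection details are ours, but follow the construction in the text, and total internal reflection is assumed not to occur):

```python
import numpy as np

def triangle_area(p0, p1, p2):
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def caustic_triangle(verts, normals, L, depth, eta=1.0 / 1.333):
    """Beam-trace one specular water triangle to a flat bottom.

    verts, normals: the three surface vertices and their normals (Nv).
    L: unit vector from the sun toward the surface.
    depth: constant y-depth of the receiving plane (positive).
    Returns the projected vertices and the intensity (N.-L) * a_s / a_c.
    """
    hits = []
    for p, n in zip(verts, normals):
        c = np.dot(n, -L)                          # cosine of incidence
        k = 1.0 - eta * eta * (1.0 - c * c)
        T = eta * L + (eta * c - np.sqrt(k)) * n   # refracted direction
        t = (-depth - p[1]) / T[1]                 # march down to y = -depth
        hits.append(p + t * T)
    N = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    N /= np.linalg.norm(N)
    a_s = triangle_area(*verts)                    # specular triangle area
    a_c = triangle_area(*hits)                     # caustic triangle area
    return hits, np.dot(N, -L) * a_s / a_c
```

For a flat triangle lit straight from above, the beam neither converges nor diverges, so the intensity comes out as exactly a_s / a_c = 1.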

Unfortunately, although the FFT water surface tiles, the resulting caustics pattern does not, because we use only one tile of the surface in the computations. Since calculating the caustics takes considerable time, we can't afford to calculate it for the entire ocean, so we need a way to make the pattern tileable. We solve this by blitting parts of the resulting caustic texture nine times, once for each direction, from a large caustic texture. Each part is added to the middle "cut out", which we use as the final caustics texture. This process is illustrated in Figure 3-7, with the result shown in Figure 3-8. A nice side effect of this process is that we can use the multi-texturing capabilities of today's hardware to do anti-aliasing at the same time: we simply set up four passes of the same texture and perturb the coordinates of each pass slightly to simulate 2x2 super-sampling. In our opinion this is needed, since the caustics patterns have a lot of detail that quickly aliases if the specular surface isn't dense enough to represent the pattern properly. Alternatively, we could of course use the extra passes to reduce the number of blits.
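In array form, the nine-blit folding amounts to summing the nine s×s blocks of a larger render into one tile (a simplified sketch; the symmetric 3s×3s layout here is our assumption, while the article works with a 1024x1024 texture and a 256x256 cut-out):

```python
import numpy as np

def make_tileable(big, s):
    """Fold a (3s x 3s) caustics render into a tileable (s x s) texture.

    Beams from one water tile can land anywhere in the 3s x 3s area;
    adding every s x s block onto a single tile wraps that spill-over
    around, which is exactly what makes the result tileable."""
    assert big.shape == (3 * s, 3 * s)
    tile = np.zeros((s, s), dtype=big.dtype)
    for by in range(3):
        for bx in range(3):
            tile += big[by * s:(by + 1) * s, bx * s:(bx + 1) * s]
    return tile
```

No energy is lost in the fold: the tile's total intensity equals that of the full render.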

Figure 3-7. The left part of the 1024x1024 caustics texture is added to the right half of the inner 256x256 part of the image. A similar process is done for the eight other pieces.

Since the caustics patterns change rapidly with depth, as seen in Figure 3-9, we use the camera's bounding box and the previous depth to decide on an average depth to use.

For applying this texture to objects underwater, we need a way to calculate texture coordinates into the caustics texture. Given the sun's ray direction and the position of a triangle, we compute its UV coordinates by projecting the texture from the height of the water in the direction of the ray (note that because this is a parallel projection, we don't even have to use projective textures here). In addition, we compute the dot product between the surface's normal and the inverted ray direction to obtain the intensity of the applied texture (which we then use as alpha). The same algorithm can be used to create reflective caustics on objects above the water.
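The parallel projection can be sketched like this (Python/NumPy; the tile scale is an assumed parameter mapping world units into the tileable texture):

```python
import numpy as np

def caustic_uv_and_alpha(P, n, ray, water_h, tile=10.0):
    """Caustics texture coordinates and blend alpha for an underwater point.

    P: world position of the shaded point; n: its unit normal.
    ray: unit sun-ray direction (pointing down into the water).
    water_h: height of the water surface; tile: world-space size of one
    caustics tile.
    """
    t = (water_h - P[1]) / ray[1]   # back up along the ray to the surface
    hit = P + t * ray               # where the ray pierced the surface
    u = (hit[0] / tile) % 1.0       # wrap into the tileable texture
    v = (hit[2] / tile) % 1.0
    alpha = max(np.dot(n, -ray), 0.0)   # intensity from N . -ray
    return (u, v), alpha
```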




Copyright © 2002 CMP Media LLC. All rights reserved.