Deep-Water Animation and Rendering
Reflection

The equation for reflection is well known. For an eye vector E (i.e. the ray from the given point to the eye) and the surface normal N, the reflected ray is:

R = 2(N · E)N - E
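The reflection formula R = 2(N · E)N - E can be sketched as follows; a minimal illustration assuming E and N are unit-length 3-vectors represented as tuples:

```python
# Reflection of the eye vector about the surface normal.
# E points from the surface point toward the eye, N is the unit normal.

def reflect(e, n):
    """Return R = 2(N . E)N - E, the reflected eye ray."""
    d = 2.0 * sum(ei * ni for ei, ni in zip(e, n))
    return tuple(d * ni - ei for ei, ni in zip(e, n))
```

Looking straight down onto a flat surface (E = N = (0, 1, 0)) reflects the ray straight back up; the resulting R is then used as the cube-map lookup direction.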
This ray is then used to look up the environment in a cube map (for the ocean, typically only the sky).
While the cube map is ideal for reflecting the distant environment, it is not very suitable for local reflections (for example, a boat floating on the water). For these we use a modification of the basic algorithm for reflections on flat surfaces. We set up the view matrix so that it shows the scene as it would be reflected by a flat plane placed at height zero, and render the whole scene into a texture. If we now simply used projective texturing, we could render the water surface roughly reflecting the scene above it. To improve the effect, we assume that the whole scene lies on a plane positioned slightly above the water surface. We intersect the reflected ray with this plane, and the resulting intersection point, seen from the reflected camera, is fed into the projective-texture computations.
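The reflected-camera setup can be sketched as follows; an illustrative world-space mirroring about the plane y = 0 (the axis convention and the specific eye/target values are assumptions, not from the text):

```python
# Mirroring the camera below the reflection plane y = 0: the scene rendered
# from this mirrored camera is the reflection texture described above.

def mirror_about_y0(p):
    """Mirror a world-space point about the plane y = 0."""
    return (p[0], -p[1], p[2])

# The reflected camera looks at the mirrored target from the mirrored eye.
eye, target = (0.0, 5.0, -10.0), (0.0, 0.0, 0.0)
reflected_eye = mirror_about_y0(eye)
reflected_target = mirror_about_y0(target)
```

In practice the same effect is usually achieved by concatenating a reflection matrix with the view matrix before rendering to the texture.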
Note that when rendering to the texture, we set the camera's FOV (field of view) slightly higher than that of the normal camera, because the water surface can reflect more of the scene than a flat plane would.
Refraction

We will use Snell's Law to calculate the refracted ray, which we need both for the refracted texture lookup and for the caustics calculations. Snell's Law is simply:

sin(θ_i) · n_a = sin(θ_r) · n_b    (Equation 3-1)
Here θ_i is the angle of incidence (i.e. the angle between the view vector and the surface normal), θ_r is the refracted angle (i.e. the angle between the refracted ray and the negated normal), and n_a and n_b are the indices of refraction of the two materials in question. Setting the indices of refraction of air and water to 1 and 1.333 respectively, we can write Equation 3-1 as:

sin(θ_i) = 1.333 · sin(θ_r)
While this works perfectly in 2D, using the equation directly in 3D would be too cumbersome. Working with vectors, it can be shown that the refracted ray is described by:

T = η(N · E)N - ηE ± N·sqrt(1 - η²(1 - (N · E)²)),  where η = n_a/n_b    (Equation 3-2)

Here the + sign is used when N · E < 0. For a derivation of this formula, see the references. With this vector we are now ready to render the refraction visible on the water surface. For the global underwater environment we again use a cube map. For local refractions we use an algorithm very similar to the one used for reflections, with only two differences: the scene is rendered into the texture normally, and the plane we use for perturbing the texture coordinates is placed below the water surface.
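The refraction computation can be sketched as follows, assuming E and N are unit vectors and η = n_a/n_b ≈ 1/1.333 for an eye above the water:

```python
import math

# 3D refraction via Snell's law. E points from the surface point toward
# the eye, N is the unit normal pointing out of the water.

def refract(e, n, eta=1.0 / 1.333):
    c = sum(ei * ni for ei, ni in zip(e, n))   # cos(theta_i) = N . E
    k = 1.0 - eta * eta * (1.0 - c * c)        # 1 - eta^2 * sin^2(theta_i)
    if k < 0.0:
        return None                            # total internal reflection
    s = math.sqrt(k)                           # cos(theta_r)
    sign = -1.0 if c > 0.0 else 1.0            # root on the far side of the surface
    return tuple(eta * c * ni - eta * ei + sign * s * ni
                 for ei, ni in zip(e, n))
```

Looking straight down (E = N) yields the ray (0, -1, 0) continuing straight into the water, and the returned vector is always unit length.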
Approximating the Fresnel Term
The exact Fresnel term can be computed as:

F = 1/2 · (g - c)²/(g + c)² · [1 + (c(g + c) - 1)²/(c(g - c) + 1)²],  where c = cos(a) and g = sqrt((n_b/n_a)² + c² - 1)    (Equation 3-3)

Here a is the angle between the incoming light and the surface normal, and n_a and n_b are the coefficients from Snell's law (Equation 3-1). Since we use an index of refraction of 1.333, g depends only on the angle a, so it is possible to precalculate the term and store it in a one-dimensional texture. Another possibility is to approximate Equation 3-3 with a simpler function that we can calculate directly on the CPU, or on the GPU using vertex and pixel shaders. One implementation approximates it simply with a linear function, which we did not find adequate. Instead, through experimentation we found that reciprocals of different powers give a very good approximation.
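The comparison can be sketched as follows, with n_a = 1 for air and n_b = 1.333 for water; the reciprocal-power form 1/(1 + cos a)^p and the exponent p = 8 are illustrative assumptions, not necessarily the exact function chosen in the text:

```python
import math

# Exact Fresnel term (Equation 3-3) versus a reciprocal-power approximation.

def fresnel_exact(cos_a, n=1.333):
    c = cos_a
    g = math.sqrt(n * n + c * c - 1.0)
    a = ((g - c) / (g + c)) ** 2
    b = 1.0 + ((c * (g + c) - 1.0) / (c * (g - c) + 1.0)) ** 2
    return 0.5 * a * b

def fresnel_approx(cos_a, p=8):
    # Cheap stand-in suitable for per-vertex evaluation.
    return 1.0 / (1.0 + cos_a) ** p
```

Both functions reach 1.0 at grazing angles (cos a = 0) and stay small near normal incidence, which is the behaviour the water shading relies on.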
In Figure 3-1 we can see the error plots for a few different powers, and in Figure 3-2 our chosen power is compared against Equation 3-3.
As a result we get darker blue water when looking down into the depths and a brighter, greenish colour when looking at the waves, as shown in Figure 3-3.
Bump-Mapping to Reduce Geometry
intensity = (N · L) · a_s / a_c

Where N is the normal of the triangle, L is the light direction as defined earlier, a_s is the area of the specular surface (i.e. the triangle at the water surface) and a_c is the area of the caustic surface (i.e. the triangle after intersecting with the xz-plane). Since we know that the entire water surface is refracted as light beams, we can simply create one huge degenerate triangle strip for the caustic mesh, and update the positions and intensities of this mesh's vertices as described.
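The per-vertex intensity computation can be sketched as follows; a minimal illustration in which triangles are tuples of three 3D points (the representation is an assumption):

```python
import math

# The light hitting a specular triangle of area a_s is concentrated into
# the caustic triangle of area a_c, so brightness scales with
# (N . L) * a_s / a_c.

def triangle_area(p0, p1, p2):
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def caustic_intensity(n, l, surf_tri, caustic_tri):
    n_dot_l = max(0.0, sum(n[i] * l[i] for i in range(3)))
    a_s = triangle_area(*surf_tri)
    a_c = triangle_area(*caustic_tri)
    return n_dot_l * a_s / a_c if a_c > 0.0 else 0.0
```

A caustic triangle smaller than its source triangle yields an intensity above 1, matching the bright focal lines seen in real caustics.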
Unfortunately, although the FFT water surface tiles, the resulting caustics pattern does not, because we use only one tile of the surface in the computations. Since calculating the caustics takes considerable time, we can't afford to calculate it for the entire ocean, so we need a way to make the pattern tileable. We solve this by blitting parts of the resulting caustic texture nine times, once for each direction, from a large caustic texture. Each part is added to the middle "cut-out", which we use as the final caustics texture. This process is illustrated in Figure 3-7, with the result shown in Figure 3-8. A nice side effect of this process is that we can use the multi-texturing capabilities of today's hardware to do anti-aliasing at the same time. We simply set up four passes of the same texture and perturb the coordinates of each pass slightly to simulate the effect of 2x2 super-sampling. This is, in our opinion, needed, since the caustics pattern has a lot of detail that quickly aliases if the specular surface isn't dense enough to represent the pattern properly. Alternatively, we could of course use the extra passes to reduce the number of blits.
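The tiling step can be sketched as follows, assuming the caustics were rendered into a large grayscale texture three tiles wide and tall (here a (3n) x (3n) grid of numbers; the grid-of-lists representation is an illustration):

```python
# Adding the nine shifted n x n regions into the middle cut-out produces a
# tile whose pattern wraps seamlessly at its borders.

def make_tileable(big, n):
    return [[sum(big[y + i * n][x + j * n] for i in range(3) for j in range(3))
             for x in range(n)]
            for y in range(n)]
```

Because every texel of the result sums contributions from all nine neighbouring regions, shifting the tile by n in any direction reproduces the same values, which is exactly the wrap-around property required.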
Since the caustics pattern changes rapidly with depth, as seen in Figure 3-9, we use the camera's bounding box and the previously used depth to choose an average depth for the computation.
To apply this texture to objects underwater, we need a way to calculate their texture coordinates into the caustics texture. Given the sun's ray direction and the position of a triangle, we compute its UV coordinates by projecting the texture from the height of the water along the ray direction (note that because this is a parallel projection, we don't even have to use projective textures here). In addition, we compute the dot product between the surface normal and the inverted ray direction to obtain the intensity of the applied texture, which we then use as alpha. The same algorithm can be used to create reflective caustics on objects above the water.
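The projection above can be sketched as follows; the water height and the tile size used to scale world units into UV space are illustrative assumptions:

```python
# Parallel projection of the caustics texture along the sun's ray direction.
# p is a point on the underwater object, ray_dir points downward (y < 0).

def caustic_uv(p, ray_dir, water_height=0.0, tile_size=10.0):
    # Distance along the ray from the water plane down to the point.
    t = (p[1] - water_height) / ray_dir[1]
    hit_x = p[0] - t * ray_dir[0]
    hit_z = p[2] - t * ray_dir[2]
    # Wrap into [0, 1) since the caustics texture tiles.
    return (hit_x / tile_size) % 1.0, (hit_z / tile_size) % 1.0

def caustic_alpha(normal, ray_dir):
    # Intensity from the dot product of the normal and the inverted ray.
    d = -sum(ni * ri for ni, ri in zip(normal, ray_dir))
    return max(0.0, d)
```

With the sun directly overhead, the UVs are simply the point's x/z coordinates scaled by the tile size, and an upward-facing surface receives full intensity.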