Deep-Water Animation and Rendering
In this paper we introduce a new real-time level-of-detail deep-water animation scheme, which combines several different proven water models. In addition we show how to utilize today's latest graphics hardware for realistic rendering of oceans.
In this statistical model of the sea, the wave height is a random variable of horizontal position and time, h(X, t). It decomposes the wave heightfield into a set of sinusoidal waves with different amplitudes and phases. While the model itself provides us with a tool to generate these amplitudes and phases, we use the inverse Fast Fourier Transform (FFT) as a means to quickly evaluate the sum.
The FFT is a fast version of the discrete Fourier transform, i.e. a Fourier transform that samples the input at regularly spaced points. A description of both the regular FT and the FFT, together with their interesting properties and working algorithms, can be found in , which is also available online.
The FFT allows us to quickly evaluate the following sum:

h(X, t) = Σ_K h̃(K, t) exp(iK·X)
Here X is the horizontal position of the point whose height we are evaluating. The wave vector K points in the direction of travel of the given wave, with a magnitude k dependent on the length of the wave (l):
k = 2π / l
And the value of h̃(K, t) is a complex number representing both the amplitude and phase of wave K at time t. Because we are using the discrete Fourier transform, only a finite number of waves and positions enter our equations. If s is the dimension of the heightfield we animate, and r is the resolution of the grid, then we can write:
K = (k_x, k_z) = (2πn/s, 2πm/s)
where n and m are integers in the bounds −r/2 ≤ n, m < r/2. Note that for the FFT, r must be a power of two. For rendering the heightfield we need to calculate the gradient of the field to obtain normals. The traditional approach of computing finite differences between nearby grid points may be used, but it can be a poor approximation of the slope of waves with small wavelengths. Therefore, if we can afford it (in terms of computational power) it is better to use another FFT, this time evaluating the following sum:

∇h(X, t) = Σ_K iK h̃(K, t) exp(iK·X)
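As an illustration, the two inverse transforms can be sketched in Python with NumPy. This is our own sketch, not the paper's implementation; the function name and the scaling convention (undoing NumPy's 1/r² normalization to recover the plain sum) are assumptions.

```python
import numpy as np

def heightfield_and_slope(htilde, size):
    """Evaluate h(X, t) and its gradient from complex amplitudes via inverse FFT.

    htilde : (r, r) complex array of amplitudes h~(K, t) in standard FFT order
    size   : patch dimension s of the heightfield
    """
    r = htilde.shape[0]
    # Wave-vector components K = (2*pi*n/s, 2*pi*m/s), FFT-ordered integers n, m.
    n = np.fft.fftfreq(r, d=1.0 / r)
    kx = 2.0 * np.pi * n / size
    kz = 2.0 * np.pi * n / size
    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    # Heights: inverse FFT of the amplitudes. The imaginary part is ~0 when
    # the conjugation symmetry of the amplitudes holds, so we keep the real part.
    h = np.fft.ifft2(htilde).real * r * r   # undo NumPy's 1/(r*r) scaling
    # Slopes: a second inverse FFT of i*K*h~(K, t), avoiding finite differences.
    sx = np.fft.ifft2(1j * KX * htilde).real * r * r
    sz = np.fft.ifft2(1j * KZ * htilde).real * r * r
    return h, sx, sz
```

The slope arrays can then be turned into normals per grid point, e.g. normalize (−sx, 1, −sz).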
Now that we know how to convert a field of complex numbers representing wave amplitudes and phases into a heightfield, we need a way to create the amplitudes and phases themselves. Tessendorf suggests using the Phillips spectrum for wind-driven waves. It is defined by the following equation:

P_h(K) = a · exp(−1/(kl)²) / k⁴ · |K̂·Ŵ|²
In this equation, l = v²/g is the size of the largest possible wave arising from a continuous wind with speed v, g is the gravitational constant, Ŵ is the direction of the wind and K̂ is the direction of the wave (i.e. normalized K). a is a numeric constant globally affecting the heights of the waves. The last term in the equation (|K̂·Ŵ|²) eliminates waves moving perpendicular to the wind direction. In this form, the resulting animation contains waves that adhere to the wind direction, but move both with and against it, resulting in a lot of meeting waves (and opportunities for splashes and foam creation). If you prefer waves moving in one direction, you can modify this term to eliminate waves that move opposite to the wind (i.e. whose dot product with the wind direction is negative).
Also, to improve the convergence properties of the spectrum, we can eliminate waves with a very small length (w ≪ l) by multiplying the equation by the following term:

exp(−k²w²)
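A minimal sketch of the Phillips spectrum with this small-wave cutoff, again in Python/NumPy (function name, default values for the global constant a and the cutoff length w are our assumptions, not values from the paper):

```python
import numpy as np

GRAVITY = 9.81

def phillips(kx, kz, wind_speed, wind_dir, amplitude=0.001, small_waves=0.1):
    """Phillips spectrum P_h(K) for wind-driven waves (a sketch).

    kx, kz      : wave-vector components (arrays or scalars)
    wind_dir    : unit 2-vector W^ giving the wind direction
    amplitude   : the global height constant `a`
    small_waves : cutoff length w suppressing waves with very small length
    """
    k = np.hypot(kx, kz)
    k = np.where(k == 0.0, 1e-8, k)        # avoid division by zero at K = 0
    l = wind_speed ** 2 / GRAVITY          # largest wave from a wind of this speed
    # |K^ . W^|^2 eliminates waves moving perpendicular to the wind.
    dot = (kx * wind_dir[0] + kz * wind_dir[1]) / k
    p = amplitude * np.exp(-1.0 / (k * l) ** 2) / k ** 4 * dot ** 2
    # Multiply by exp(-k^2 w^2) to suppress very short waves.
    return p * np.exp(-(k ** 2) * small_waves ** 2)
```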
There are now two steps we have to take in order to prepare the data for the FFT: create the amplitudes and phases at time zero, and then animate the field. The first part can be accomplished using the equation:

h̃₀(K) = (1/√2)·(ξ_r + iξ_i)·√(P_h(K))
where ξ_r and ξ_i are two independent draws from a Gaussian random number generator with mean 0 and standard deviation 1.
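This step is a one-liner per grid point; a sketch (our naming) that takes a precomputed grid of Phillips-spectrum values:

```python
import numpy as np

def initial_amplitudes(spectrum, rng=None):
    """Create h~_0(K) = (1/sqrt(2)) (xi_r + i xi_i) sqrt(P_h(K)).

    spectrum : grid of Phillips-spectrum values P_h(K)
    xi_r, xi_i are independent N(0, 1) draws per grid point.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi_r = rng.normal(0.0, 1.0, spectrum.shape)
    xi_i = rng.normal(0.0, 1.0, spectrum.shape)
    return (xi_r + 1j * xi_i) * np.sqrt(spectrum) / np.sqrt(2.0)
```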
Now, given a time t, we can create a field of the frequency amplitudes (independently of any previous time, which can be valuable):

h̃(K, t) = h̃₀(K)·exp(iω(k)t) + h̃₀*(−K)·exp(−iω(k)t)
where ω is the angular frequency of wave K, representing the speed at which the wave travels across the surface. You may wonder what the source of the right-hand term in this equation is. It is there because our resulting heights (the result of the inverse FFT) are real numbers only (i.e. their imaginary part is equal to zero). It can be shown that for such a function, the following must hold for the amplitudes:

h̃(K, t) = h̃*(−K, t)
where * is the complex conjugate operator.
As you may notice, there is one last piece missing from the jigsaw, and that is the value of ω for a given wave. Since we are animating a deep-water sea, there is a simple relation between ω and the corresponding wave vector K:
ω²(K) = gk
Here g is again the gravitational constant and k is the magnitude of vector K.
There are several modifications to this equation; perhaps the most useful for our purpose is taking the depth d into account:

ω²(K) = gk·tanh(kd)
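Both dispersion relations fit in one small helper (a sketch; the function name and the choice of signalling deep water with `depth=None` are ours):

```python
import math

GRAVITY = 9.81

def angular_frequency(k, depth=None):
    """Dispersion relation for water waves.

    Deep water:    w^2 = g k
    Finite depth:  w^2 = g k tanh(k d)
    """
    if depth is None:
        return math.sqrt(GRAVITY * k)
    return math.sqrt(GRAVITY * k * math.tanh(k * depth))
```

Note that tanh(kd) approaches 1 for large kd, so the finite-depth formula converges to the deep-water one.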
Instead of modifying the heightfield directly, we will horizontally displace the positions of the grid points using the equation:
X = X + λD(X, t)
where λ is a constant controlling the amount of displacement, and D is the displacement vector computed with the FFT:

D(X, t) = Σ_K −i·(K/k)·h̃(K, t)·exp(iK·X)
The value of λ must be carefully chosen: if it is too big, the waves start to intersect themselves, and that certainly breaks the realism. However, detecting this situation seems to be a good way of spawning foam and spray; more on this in chapter 3.7. The difference between the normal waves and the choppy waves modification can be seen in Figure 2-1 and Figure 2-2 respectively.
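The displacement sum is just two more inverse FFTs over the same amplitude grid; a sketch under the same layout assumptions as before:

```python
import numpy as np

def choppy_displacement(htilde, size, lam=1.0):
    """Horizontal displacement D(X, t) = sum_K -i (K/k) h~(K, t) e^{i K.X},
    scaled by the choppiness constant lam (lambda in the text)."""
    r = htilde.shape[0]
    n = np.fft.fftfreq(r, d=1.0 / r)
    KX, KZ = np.meshgrid(2 * np.pi * n / size, 2 * np.pi * n / size, indexing="ij")
    k = np.hypot(KX, KZ)
    k = np.where(k == 0.0, 1.0, k)   # K = 0 contributes no displacement
    dx = np.fft.ifft2(-1j * KX / k * htilde).real * r * r
    dz = np.fft.ifft2(-1j * KZ / k * htilde).real * r * r
    return lam * dx, lam * dz
```

Each grid point (x, z) is then moved to (x + dx, z + dz) before rendering; too large a lam makes neighbouring displaced points cross, which is the self-intersection mentioned above.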
The pressure forces are defined as the negative of the gradient of the pressure field of the water elements, i.e.:

F_p = −∇p
Given the fact that water is a Newtonian fluid, i.e. a fluid in which the stress is linearly proportional to the strain, the net viscous force (F_v) per unit volume is defined as:

F_v = ρν∇²V
where ρ is the density, V is the velocity and ν is the kinematic viscosity.
Now that we have all the forces acting in the fluid, we will use Newton's second law (F = ma) to describe the motion:

ρ·DV/Dt = −∇p + ρν∇²V + ρg
Now, assuming uniform density, we can write the equation as:

∂V/∂t = −(V·∇)V − ∇p/ρ + ν∇²V + g    (Equation 2-9)
This equation conserves the momentum. In addition we need the mass to be conserved:

∇·V = 0    (Equation 2-10)
These two equations together are referred to as the Navier–Stokes Equations (NSE). Unfortunately the NSE are a set of highly non-linear Partial Differential Equations (PDEs) that are not easily solved. In the literature there are many methods for discretizing the PDEs both in time (explicit/implicit) and space (Finite Difference, Finite Volume, Finite Element). Going into detail on how to solve the NSE would require a document of its own, so instead we will just briefly describe what we implemented and how we used the result.
We started by implementing an explicit finite difference scheme on a uniform grid known as the Marker-And-Cell (MAC) method, since this is widely used in earlier works. In short, we divide the solution space into finite cells that hold the velocity and pressure. We then solve Equation 2-9 by finite differences on this grid, and then enforce Equation 2-10 by an iterative process called Successive Over-Relaxation (or one can form a linear system and solve it with, for example, a Preconditioned Conjugate Gradient method). While solving this is rather simple in closed form, adding boundary conditions, and in particular free-surface conditions, is complicated and not well described in the given references. Another problem inherent to finite differences is stability. Although what is known as the Courant–Friedrichs–Lewy (CFL) condition for stability can be somewhat enforced by calculating the local viscosity and adjusting the time step according to the velocity and the cell size, it gave us an unbelievable amount of pain!

We therefore took the time to also implement Jos Stam's stable solver. Once again it turns out that we can use the FFT to solve the closed form, which we will use for the surface details. Stam has recently also released source code for this solver, so one should be able to get up and running with this effect quite fast!

Once we have a field solved with the NSE, we populate it with particles that are moved according to the bilinearly interpolated velocity of the nearest grid elements. These particles will quickly form streamlines (see Figure 2-3) in the field, showing all the turbulent vorticity we expect to see on a tension surface. We then take the finite differences of these particles' velocities and treat them as tangents for normal calculation. All these normals are then fed into a bump map, which we apply as real-time surface detail as shown in Figure 2-4.
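The particle-advection step can be sketched as follows. This is our own illustration of moving particles by the bilinearly interpolated grid velocity; the periodic domain and grid-unit positions are assumptions, not details from the text:

```python
import numpy as np

def advect_particles(pos, u, v, dt):
    """Move particles by the bilinearly interpolated grid velocity.

    pos  : (n, 2) particle positions in grid units
    u, v : (r, r) velocity components stored at grid nodes
    dt   : time step
    """
    r = u.shape[0]
    x, y = pos[:, 0] % r, pos[:, 1] % r          # assume a periodic domain
    i0, j0 = np.floor(x).astype(int), np.floor(y).astype(int)
    i1, j1 = (i0 + 1) % r, (j0 + 1) % r
    fx, fy = x - i0, y - j0                      # fractional offsets in the cell

    def lerp2(f):
        # Standard bilinear interpolation of field f at the particle positions.
        return ((1 - fx) * (1 - fy) * f[i0, j0] + fx * (1 - fy) * f[i1, j0]
                + (1 - fx) * fy * f[i0, j1] + fx * fy * f[i1, j1])

    vel = np.stack([lerp2(u), lerp2(v)], axis=1)
    return pos + dt * vel
```

Calling this repeatedly with a small dt traces out the streamlines mentioned above; finite differences of successive particle velocities then give the tangents used for the bump-map normals.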