There are various techniques that can serve as the core of water surface rendering. Some of the most important are Perlin noise, Fast Fourier synthesis, and the Navier-Stokes equations. Different water representations require very different amounts of computational power and provide different levels of realism. It is also possible to combine approaches to reach the best compromise between them. In this chapter, several solutions are introduced as possible approaches for our purposes. They are based on the previously discussed mathematical background, and range from simple to extremely complex computations.

The following parts of water rendering will be discussed:

  • Water representations - we can use several methods to describe our water surface, but in the end, everything needs to be described by vertices to be able to render the result. Grids and particle systems are the most popular ways to do this.
  • Water simulation approaches - which describe the water waves and can get everything in motion. The different solutions can be useful under different conditions, and complex systems can be built by combining them.
  • Reflection rendering techniques.
  • Fresnel term approximations.
  • Rendering various water phenomena - effects such as splashes, caustics and the Kelvin wedge are discussed in this part.

Water representations

3D Grids

Representing water by three-dimensional grids makes the simulation of various realistic water behaviors possible. The main idea is simple: we determine the physical forces and compute all their effects on the elements of the grid. Although these are easy to describe, the computations can be expensive. Physical simulations must be precise, but for rendering water surfaces we do not need to be so accurate. If the extreme computational expense is acceptable, this representation can be used for rendering small areas of water; in that case, for example, the Navier-Stokes equations can be applied nicely. The details of physical simulations are beyond the scope of this paper; I discuss here the solutions for real-time rendering, where, for better performance, we generally cannot afford such a complex representation.

Although 3D grids can represent only small amounts of water at real-time performance, pre-rendering calculations can still be computed with them for higher realism. Rendering underwater textures, caustics formation, and splashes are just some of the possible effects that can have pre-rendering phases to achieve higher performance during real-time animation. Several papers write about these possibilities (such as [1] and [2]), but they are outside the scope of this paper. For our intended use, simpler water representations are needed.

2D Grids - Heightmaps

3D grids are accurate approximations of water volumes, but if we accept some limitations, we can use a simpler solution. To render the water surface, we only need to know its shape, in our case, how high the water is at a given (x,y) coordinate. This means the volume can be simplified to a height field: a function of two variables that returns the height for a given point in two-dimensional space. This representation has restrictions compared to 3D grids. As a function can return only one value, a height field can store only one height value for a given (x,y) coordinate. This means overlapping waves cannot be described this way, as shown on the next figure:


As the height field stores only one height value per coordinate, like on the right image, overlapping parts of the surface, as on the left one, cannot be described.

The main advantage of 2D grids over 3D grids is that they are easier to use and a much simpler data structure suffices to store them. If the height field is stored in a texture, it is usually called a height map. The corresponding rendering process is called displacement mapping, as the original geometry is displaced by the amount stored in the height map.
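As a minimal sketch of this representation (the wave function and the grid layout are illustrative assumptions of mine, not taken from any cited source), a height map can be sampled from a continuous function and then applied to a flat grid by displacement mapping:

```python
import numpy as np

def make_height_map(size, t=0.0):
    """Sample a continuous height function z = f(x, y) into a 2D array (the height map)."""
    xs = np.linspace(0.0, 2.0 * np.pi, size)
    x, y = np.meshgrid(xs, xs)
    # Illustrative wave function; a real renderer would use noise, FFT synthesis, etc.
    return 0.3 * np.sin(x + t) + 0.2 * np.cos(y + 0.5 * t)

def displace_grid(height_map, cell_size=1.0):
    """Displacement mapping: lift each vertex of a flat grid by the stored height."""
    n = height_map.shape[0]
    verts = np.zeros((n, n, 3))
    coords = np.arange(n) * cell_size
    verts[..., 0], verts[..., 1] = np.meshgrid(coords, coords)
    verts[..., 2] = height_map  # exactly one height per (x, y): overlaps cannot be stored
    return verts

height_map = make_height_map(64)
grid = displace_grid(height_map)
```

Note that the last line of `displace_grid` is exactly the restriction discussed above: one height per (x,y), so overlapping waves cannot be represented.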

Different optimization techniques are used to achieve better real-time performance. For example, if the height map is defined by a continuous function, there is no need to calculate and render the entire water surface; rendering the visible part is enough. In other scenarios, for instance when the Navier-Stokes equations are used, every cell of the height map needs to be updated, even the ones not visible to the camera. Some optimization methods are discussed in the following paragraphs.

Performance optimization

As graphics hardware processes triangles, there is no way to avoid surface tessellation. Real water surfaces are continuous, but interactive computer graphics need a polygonal representation, so the surface must be discretized. More triangles describe more detail in the virtual world, but more triangles also mean more vertices to process. Every graphics programmer has to find the balance between complexity and performance, that is, between realism and speed.

Classical LOD algorithms

According to the Level of Detail (LOD) concept, a complex object can be simplified to different levels of detail. The smaller the object is on the screen, the less detail is drawn; this reduces small, distant, or unimportant geometry and improves performance through it. The following figure demonstrates possible levels of detail:


If the bunny is small or distant, we cannot perceive the visual difference between the more and less detailed versions. To gain performance, whenever the difference is not visually distinguishable, only the bunny with fewer triangles is rendered:


Another kind of LOD technique is continuous LOD. Instead of creating the different levels beforehand, it is possible to simplify a detailed object to the desired level at run-time. We store only the most detailed bunny, and the application removes unnecessary polygons to gain performance. This way the LOD granularity can be much finer, since we are not limited to the previously generated levels, although the application becomes more complex. For more details on general LOD techniques, see [7].

LOD algorithms on water surfaces

LOD techniques can be applied to water rendering as well. If the water surface is made up of many triangles, for example a triangle strip, this strip can be optimized to produce a more realistic surface than simple equal-sized triangles would. The main question to answer is how the triangles should be arranged.
The following method is discussed in [16]. A 30x30 grid (triangle strip) simulates the visible part of the water surface, which means it is always transformed to lie in front of the camera. The height coordinate can be calculated with continuous functions of the other coordinates (z = f(x,y)), so the positions of the vertices can be chosen without any limitation to get the most realistic result. If we use equal-sized triangles in the triangle strip, the distant ones have only a very small visible size.
Optimization can be done by repositioning the vertices. The distant triangles are too small, so they should be transformed to appear approximately equal in size from the camera. The near and far clipping planes and the width of the triangle strip need to be taken into account, so that the water surface spreads over the entire area visible to the camera. The more problematic part is choosing the height of the rows in the triangle strip so that their visible sizes are equal. The following hyperbolic function gives the horizontal position depending on the row number:


Here k is the row index and kmax is the number of rows. The first row of triangles (k = 0) gets the horizontal distance 0, while the last row (k = kmax - 1) converges to infinity.
As we do not render objects at infinity, the coordinates given by the previous equation need to be scaled. Before rendering, we surely have a near and a far clipping plane, which means only triangles between these two planes are rendered. If Dmax is the far clipping distance and Dmin the near distance, the next equation replaces and scales the coordinates of the triangle points:


For example, if objects between the distances 0 and 1000 need to be rendered, the water surface can be scaled to the same area by setting Dmax to 1000 and Dmin to 0.
The far grid rows must be much broader to appear equal-sized from the camera. This is visualized on the next figure:


In [16], triangle lines are scaled to match the two sides of the viewing angle using the following formula:


Where i is the current column number, imax is the number of columns, d is the distance of the row (calculated earlier), and r is the aspect ratio of the view (for example, 4:3 or 16:9).
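Since the exact formulas of [16] are not reproduced here, the following sketch assumes one plausible hyperbolic form with the stated behavior (distance 0 for the first row, unbounded growth toward the last row) and rescales it into the [Dmin, Dmax] range; treat it as an illustration of the row-spacing idea, not as the equations of [16]:

```python
import numpy as np

def row_distances(k_max, d_min, d_max):
    """Hyperbolically spaced grid rows. Assumed form: d(k) = k / (k_max - k),
    which is 0 at k = 0 and grows without bound as k approaches k_max,
    then remapped so the rows span exactly [d_min, d_max]."""
    k = np.arange(k_max)
    d = k / (k_max - k)            # hyperbolic, diverges as k -> k_max
    d = d / (1.0 + d)              # squash the unbounded range into [0, 1)
    return d_min + (d_max - d_min) * d / d[-1]   # last row lands on d_max

def row_width(d, aspect_ratio):
    """Far rows must be broader; width grows linearly with row distance."""
    return 2.0 * d * aspect_ratio

# 30 rows spanning near plane 0 to far plane 1000, as in the example above.
dist = row_distances(30, 0.0, 1000.0)
widths = row_width(dist, 16 / 9)
```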
Useful links:
Terrain LOD: Runtime Regular-Grid Algorithms

Using projected grid

LOD algorithms determine where vertices should be denser and where they can be skipped so that there is enough detail to render a nice-looking picture in real time. But if the camera moves, the screen regions where vertices are important can change. Projected grid algorithms ([6]) try to place vertices smoothly in camera space through the following steps:

  1. Create a regular grid in camera space that is orthogonal to the camera.
  2. Project the grid onto the desired plane.
  3. Transform the grid back to world space.
  4. Apply displacement, waves etc.
  5. Render the grid, which results in a mostly evenly spaced grid in camera space.
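The steps above can be sketched as follows (a simplified ray-casting variant of my own: one ray per screen-grid cell is intersected with the water plane, instead of the matrix-based projector of [6]):

```python
import numpy as np

def projected_grid(cam_pos, cam_dir, up, fov_deg, aspect, rows, cols, plane_y=0.0):
    """Cast one ray per cell of a regular screen-space grid and intersect it
    with the water plane y = plane_y. The returned world-space vertices are
    roughly evenly spaced in camera space (steps 1-3 of the algorithm)."""
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    right = np.cross(cam_dir, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, cam_dir)
    h = np.tan(np.radians(fov_deg) / 2.0)
    verts = []
    for j in range(rows):
        for i in range(cols):
            # Regular grid in camera space, orthogonal to the view direction.
            u = (2.0 * i / (cols - 1) - 1.0) * h * aspect
            v = (2.0 * j / (rows - 1) - 1.0) * h
            ray = cam_dir + u * right + v * true_up
            if ray[1] >= -1e-6:
                continue               # this ray never reaches the plane below
            t = (plane_y - cam_pos[1]) / ray[1]
            verts.append(cam_pos + t * ray)
    return np.array(verts)

# Camera 10 units above the water, tilted downward.
pts = projected_grid(np.array([0.0, 10.0, 0.0]), np.array([0.0, -0.5, 1.0]),
                     np.array([0.0, 1.0, 0.0]), 60.0, 16 / 9, 8, 8)
```

Rays that point above the horizon are simply skipped here; a production implementation must handle that case more carefully, as [6] discusses.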

A real-world analogy: if you put a paper with a dotted grid on it in front of a spotlight, the grid is projected onto the surface:


This grid looks regular and smooth from the position of the spotlight, and that is our goal with the vertices of the water surface as well. For more details or for a fancy application which demonstrates this technique, see [6].

In [26], an adaptive water mesh is used. The motion of the camera induces a shift of the mesh over the ocean surface so that the vertices have approximately the same projected area on screen. The authors draw attention to a new problem introduced by this method: because of the coarse surface approximation, the surface normals can no longer be estimated using finite-difference techniques. To determine the normals, analytical methods need to be used. This is visualized on the next figure:


As shown on the figure, depending on how detailed the vertex grid is, analytical methods give much more accurate surface normals than finite-difference approaches.

For all the reasons mentioned above, projected grids are an efficient technique to optimize water rendering, but they are complex, and applying them needs careful consideration.

Water simulation approaches

Coherent noise generation - the Perlin noise

Water waves can be described analytically, or we can use random-based techniques, as water waves resemble other random natural phenomena. Random noise can be the basis for realistic rendering.
Ken Perlin published a method that generates continuous noise much more similar to the random noise found in nature than simple random values. The difference is visualized on the next figures:


The source of the images is [12].

The 2D random noise on the top is generated by a simple random generator. The Perlin noise on the bottom is much closer to random phenomena in nature.

Basic Perlin noise does not look very interesting in itself, but by layering multiple noise functions at different frequencies and amplitudes, a more interesting fractal noise can be created:


Their sum results in:


The frequency of each layer is double that of the previous one, which is why the layers are usually referred to as octaves. By making the noise three-dimensional, animated two-dimensional textures can be generated as well. More good explanations and illustrations can be found at [13].
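The octave-layering idea can be sketched as follows (value noise stands in here for true Perlin gradient noise, which is longer to implement; the frequency-doubling and amplitude-halving scheme is the point of the sketch):

```python
import numpy as np

def value_noise(size, freq, rng):
    """Smooth lattice noise: random values on a coarse lattice, bilinearly
    interpolated with a smoothstep fade curve. (A stand-in for Perlin
    gradient noise; the octave summation below works the same way.)"""
    lattice = rng.random((freq + 1, freq + 1))
    t = np.linspace(0.0, freq, size, endpoint=False)
    i = t.astype(int)
    f = t - i
    f = f * f * (3.0 - 2.0 * f)                  # smoothstep fade curve
    fx, fy = f[None, :], f[:, None]
    v00 = lattice[np.ix_(i, i)]
    v10 = lattice[np.ix_(i, i + 1)]
    v01 = lattice[np.ix_(i + 1, i)]
    v11 = lattice[np.ix_(i + 1, i + 1)]
    top = v00 * (1 - fx) + v10 * fx
    bottom = v01 * (1 - fx) + v11 * fx
    return top * (1 - fy) + bottom * fy

def fractal_noise(size, octaves, rng):
    """Layer octaves: each layer doubles the frequency and halves the amplitude."""
    total, amplitude, freq, norm = np.zeros((size, size)), 1.0, 4, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(size, freq, rng)
        norm += amplitude
        amplitude *= 0.5
        freq *= 2
    return total / norm                          # normalized back to [0, 1]

noise = fractal_noise(128, 4, np.random.default_rng(1))
```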

A detailed and easy to understand explanation of Perlin noise generation can be found in [12]. For complex details, see [14].

Using Perlin noise as the core for water surfaces needs much less computational power than the techniques discussed in the following paragraphs. The main problem with Perlin noise is that it cannot be controlled accurately; only the wave amplitudes and frequencies are easily changeable. Interaction with external objects is also hard to describe.

Fast Fourier Transformations

While physical simulations can be very resource-consuming, random-noise-based solutions are not accurate enough for every purpose. As a compromise, observation-based statistical models can be used as the core of the water surface animation. In this model, the wave height is a function of position and time, where position means the horizontal coordinates (X and Y) without height (Z). The height can be determined through a sum of sine waves with different amplitudes and phases. To quickly evaluate this sum, the inverse Fast Fourier Transform (FFT) can be used. For a detailed example, check [28]. The resulting surface can be very smooth, with rounded wave tops. This is not always desirable; various methods exist to add sharpness to the waves, making them look choppier. For more details about this, see the "Creating Choppy Waves" chapter.
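A minimal sketch of the idea follows. The frequency-domain amplitudes below are an illustrative placeholder of mine, not a calibrated oceanographic spectrum (such as the Phillips spectrum used in practice); the point is that one inverse FFT sums all the sine waves at once, and the phases advance with the deep-water dispersion relation:

```python
import numpy as np

def fft_height_field(n, t, rng, g=9.81, patch=100.0):
    """Sum many sine waves at once with an inverse FFT. Each frequency-domain
    cell holds a complex amplitude h(K); phases rotate over time with the
    deep-water dispersion relation w = sqrt(g * |K|)."""
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=patch / n)   # wave numbers per axis
    kx, ky = np.meshgrid(k, k)
    k_len = np.hypot(kx, ky)
    # Illustrative amplitude falloff with |K|; zero out the DC component.
    amp = np.where(k_len > 0, 1.0 / (1.0 + k_len ** 2), 0.0)
    phase0 = rng.uniform(0.0, 2.0 * np.pi, (n, n))     # random initial phases
    w = np.sqrt(g * k_len)
    spectrum = amp * np.exp(1j * (phase0 + w * t))
    # Taking the real part keeps the synthesized surface real-valued.
    return np.real(np.fft.ifft2(spectrum)) * n * n

h = fft_height_field(64, 0.0, np.random.default_rng(0))
```

Re-evaluating with increasing t (and the same random generator seed) animates the surface.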

The Navier-Stokes Equations

Navier-Stokes Equations (NSE), as mentioned in the Water mathematics chapter, describe the motion of incompressible viscous fluids. The acting forces are gravity, pressure forces and viscous forces. The actual equations are very hard and time-consuming to solve, so we need to simplify and discretize them for real-time calculations. An efficient approach is to simulate solid volumes of water as a height field, modeling the flow between adjacent columns of fluid. With this method, waves and other surface artifacts do not need to be explicitly specified, because they arise naturally from the physical conditions occurring within the system.

[10] describes a technique simulating volume transitions through virtual pipes that connect adjacent columns:


The vertical columns are connected to their eight neighbors through a set of directional horizontal pipes. These pipes allow the water pressure to distribute over the entire system. The control points of the grid can be sampled to separate adjacent columns. To interact with external objects, the surface can be simulated as a separate subsystem that propagates external pressure to the volume grid (itself another subsystem):


The bottom arrows indicate the direction of the flow between columns, the vertical arrows show the upward velocity and the thin arrows can indicate the velocity vectors for particle ejection. These different subsystems can interact to form a complex fluid system, as shown on the next figure:


Although they are extremely realistic, the NSEs are resource-consuming to calculate for every time step. The grid size must be strictly limited for real-time simulations, even on the latest graphics cards. Offline calculations, for example for the movie Titanic, were performed on a 2048 x 2048 grid, but this size cannot be handled in real time yet. The NSE can be used for simulating smaller water surfaces like pools or fountains, although implementations combining multiple rendering methods also exist. For instance, a simple vertex displacement technique for distant areas can be combined with the NSE for closer interaction with external objects. I have to mention that the NSE requires a world-space grid for the calculations, while other solutions need a different grid space, like the previously described projected grids.

NSEs are much easier to solve over 2D grids than over 3D grids. Generally, 2D versions are enough, but they have their drawbacks. As only vertical forces can be inserted into the system, all external forces must be approximated vertically. This can influence the result, for example in the case of wind forces, which are generally horizontal.
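A minimal sketch of the column-and-pipe idea follows. This is a simplified four-neighbor variant with illustrative constants ([10] connects each column to all eight neighbors and uses a more careful integration scheme): pressure differences accelerate the flow in each pipe, and the flows then move volume between columns, conserving the total volume:

```python
import numpy as np

def pipe_step(height, flow_x, flow_y, dt=0.1, c=1.0):
    """One update of a simplified virtual-pipe model on a 2D column grid."""
    # Height (pressure) differences accelerate the flow in each pipe.
    flow_x += dt * c * (height[:, :-1] - height[:, 1:])
    flow_y += dt * c * (height[:-1, :] - height[1:, :])
    # The flows move volume along the pipes; what leaves one column
    # enters its neighbor, so total volume is conserved exactly.
    height[:, :-1] -= dt * flow_x
    height[:, 1:] += dt * flow_x
    height[:-1, :] -= dt * flow_y
    height[1:, :] += dt * flow_y
    return height, flow_x, flow_y

h = np.zeros((16, 16)); h[8, 8] = 1.0          # a column of excess water
fx = np.zeros((16, 15)); fy = np.zeros((15, 16))
total = h.sum()
for _ in range(50):                            # waves spread out from the spike
    h, fx, fy = pipe_step(h, fx, fy)
```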

The source of the images is [10].

Particle systems

Physics-based approaches have become very popular recently. Improving hardware performance makes real-time particle systems possible as well. Depending on the problem, both vertex-based and pixel-based solutions can be appropriate for making huge numbers of independent particles seem alive. Particle system techniques can be combined with other water animation approaches to get a more realistic result.

Particle system approaches need to answer two questions: how do the particles move, and what are the particles as objects? The whole system can have a velocity vector, but this vector does not need to be constant across the entire flow. The next figure visualizes this:


The answer to the second question is that our particles can be negligible in size and in mass as well. But they can carry further information to make other kinds of interaction possible, for example color, temperature and pressure, depending on the expected result.

The particles move according to physical laws; their motion can be calculated in time steps with the help of the previously discussed velocity-vector map. To be able to make these calculations on graphics hardware, the particle positions are sampled into a texture. These textures are called particle maps:


To get the positions of the particles in the next time step, we trace them as if they moved along the velocity-vector map. This approach is called forward mapping. It is illustrated on the next figure:


The described technique suffers from some problems. First, if the velocity is too small, some particles can stay in the same grid cell forever: they are assumed to start from the center of the cell in each iteration, but they cannot leave the cell in one time step, so they are relocated to the center again, which causes stationary particles. Second, for the same reason, some cells may always stay empty, which causes gaps.

To overcome these issues, backward mapping can be used instead of forward mapping. For each grid cell, we calculate which cell its particle could have originated from. Then we determine the color of the cell using the color of that originating cell. If interpolation is used, the surrounding colors can also be taken into account, and we can avoid both stationary particles and empty gaps:


Based on the previous considerations, the graphics-hardware-based method for texture advection is as follows. The velocity map and the particle map are stored in separate two-component textures. A standard 2D map can be represented this way; the third dimension is added by approximations to gain performance. Offset textures are part of hardware-supported pixel operations, so the movement along the velocity field can be implemented with them. Inflow and outflow (particle generation and removal) are outside the scope of this paper. More detailed explanations and source code can be found in [14]. The source of these illustrative figures is [14].
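The backward-mapping step can be sketched on the CPU as follows (NumPy arrays stand in for the offset-texture hardware path; the field layout is an assumption of mine): each cell traces backwards along the velocity field and bilinearly interpolates the old texture at the source position, which avoids the holes and stuck particles of forward mapping:

```python
import numpy as np

def advect_backward(field, vel, dt=1.0):
    """Backward mapping: every cell looks up where its content came from."""
    n, m = field.shape
    ys, xs = np.mgrid[0:n, 0:m].astype(float)
    # Trace backwards along the velocity field (clamped at the borders).
    src_x = np.clip(xs - dt * vel[..., 0], 0, m - 1)
    src_y = np.clip(ys - dt * vel[..., 1], 0, n - 1)
    x0 = np.floor(src_x).astype(int); y0 = np.floor(src_y).astype(int)
    x1 = np.minimum(x0 + 1, m - 1);   y1 = np.minimum(y0 + 1, n - 1)
    fx = src_x - x0; fy = src_y - y0
    # Bilinear interpolation of the four surrounding source cells.
    top = field[y0, x0] * (1 - fx) + field[y0, x1] * fx
    bot = field[y1, x0] * (1 - fx) + field[y1, x1] * fx
    return top * (1 - fy) + bot * fy

tex = np.zeros((8, 8)); tex[4, 2] = 1.0          # one marked "particle" cell
vel = np.zeros((8, 8, 2)); vel[..., 0] = 1.0     # uniform flow in +x
moved = advect_backward(tex, vel)                # the mark shifts one cell right
```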

Rendering Reflections

Static Cube-map Reflections

If the water does not need to reflect everything, it is possible to use a pre-generated cube map to calculate reflected colors. Cube maps are a kind of hardware-accelerated texture map (other approaches are, for example, sphere mapping and dual paraboloid mapping). Just imagine a normal cube with six images on its sides. These images are taken as photos from the center point of the cube, and they show what is visible of the surrounding terrain through the points of the sides. An example is shown on the next figure:


As shown on the following figure, the six sides of the cube are named after the three axes of the coordinate system: x, y and z, in positive and negative directions:


So we have a cube map and the reflecting surface of the water. For each point of the water, we can calculate the vector that points in the direction of the reflected object. Using this three-dimensional vector (the red one on the last figure), the points of the cube texture can be addressed from the center of the cube. This vector aims at exactly one point of the cube, which has the same color as the reflected object in the original environment. These calculations are hardware-accelerated and efficient enough to meet real-time requirements, while calculating global illumination for every reflecting point would need much more time. Using cube maps has one more advantage: the cube has sides that represent parts of the environment not visible to the camera, so even points behind the camera can be reflected. On the other hand, cube maps need to be pre-rendered, so it is impossible to reflect a changing environment (for instance, one with moving objects) if we want to meet real-time conditions. With this technique, the sky can easily be reflected on the water surface, but a moving boat needs to be handled another way. Additionally, artifacts can appear at the edges of the cube, which are very hard to avoid.
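The lookup itself can be sketched as follows (the example vectors are illustrative; the face selection by the dominant axis mirrors how cube-map addressing works in graphics hardware):

```python
import numpy as np

def reflect(incident, normal):
    """Mirror the view vector on the (unit) surface normal: R = I - 2(N.I)N."""
    return incident - 2.0 * np.dot(incident, normal) * normal

def cube_face(v):
    """Cube-map addressing: the dominant axis of the lookup vector selects
    one of the six faces (+x, -x, +y, -y, +z, -z); the remaining two
    components would then address the texel within that face."""
    axis = int(np.argmax(np.abs(v)))
    sign = '+' if v[axis] >= 0 else '-'
    return sign + 'xyz'[axis]

view = np.array([0.0, -2.0, 1.0])       # looking down and forward at the water
normal = np.array([0.0, 1.0, 0.0])      # flat water surface pointing up
r = reflect(view, normal)               # reflected ray points up: (0, 2, 1)
face = cube_face(r)                     # so the +y face of the cube is sampled
```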

Sources of the images are: and

Dynamic Cube-maps

To be able to reflect a changing environment, the cube map needs to be updated. Because cube maps are essentially a collection of six textures on the sides, building a cube map dynamically requires filling those textures one by one. We need to render the scene six times, once for each face of the cube, setting up the camera so that it matches the point of view of that particular cube-map face. Positioning the camera this way is not too complicated, but the field of view (FOV) needs to be adjusted to get equal-sized, square-shaped pictures that each see the same portion of the scene (90 degrees each, covering the full environment together). Because the size of a water surface is relatively big compared to the environment, different objects need to be reflected in the same direction from different points of the water. This means that a single cube map is not enough to simulate real reflections over the whole water surface. Creating several cube maps for every frame is extremely expensive, so dynamic cube maps are generally not a real alternative for water reflection effects on today's graphics cards.

Although they are extremely complex, there are some very realistic solutions, for example in the game Half-Life 2. It uses several dynamic cube maps generated from various points of the water surface, and reflections are obtained from the stored values through weighted interpolation. To maintain real-time performance, the cube maps are regenerated only a few times per second.

Reflection Rendering to Texture

In the chapter describing water mathematics, I discussed a method to determine the reflected color for every point of the water surface. One of the most precise solutions is creating a virtual view on the other side of the water plane and rendering the same scene into a texture, which can be used as a reflection map later. This means that before rendering the final image, a pre-rendering phase should be added. During this phase, the position of the camera and the view vector are mirrored onto the water plane, and every object of the virtual world that can be reflected by the water in the final image is rendered from this virtual view into a texture. Let me show the figure again:


To get the expected result, the position of point B must be calculated. For this, we have to determine how far the original camera position is from the water plane, that is, distance k. If the water is horizontal, this distance has to be subtracted from the height of the water plane to find the height coordinate; the other coordinates of points A and B are the same. To avoid artifacts, the underwater objects can be removed from the world before rendering into the texture. When the final image is created, this texture can be used as a reflection map. The reflective color can be sampled with the help of the vector between the camera and the points of the water surface, and the shape of the waves can be taken into account as well. Smaller adjustments may be needed for better results; for instance, small modifications to the height of the clipping plane or of point B can improve realism by producing fewer artifacts.
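For a horizontal water plane, the mirroring itself takes only a few lines (the axis convention, with y as the height axis, is my assumption): point A above the plane maps to point B the same distance k below it, and the vertical component of the view vector flips:

```python
import numpy as np

def mirror_camera(cam_pos, view_dir, water_height):
    """Reflect the camera across the horizontal plane y = water_height."""
    k = cam_pos[1] - water_height            # distance from camera to the plane
    mirrored_pos = np.array([cam_pos[0], water_height - k, cam_pos[2]])
    mirrored_dir = np.array([view_dir[0], -view_dir[1], view_dir[2]])
    return mirrored_pos, mirrored_dir

# Camera 5 units above a water plane at height 2, looking slightly downward.
pos, direction = mirror_camera(np.array([3.0, 7.0, 1.0]),
                               np.array([0.0, -0.6, 0.8]), 2.0)
```

The scene rendered from `pos` along `direction` (with underwater objects clipped away) becomes the reflection map for the final pass.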

Calculating the Fresnel Term

Accurate Approximation of the Fresnel Term

The operations needed to determine the exact Fresnel value for each pixel of the water are very costly. If the water covers a significant part of the display, calculating the accurate value is unsuitable for real-time conditions, so approximations need to be used. Approximation by linear functions is inadequate here due to its inaccuracy. In [3], reciprocals of different powers are used, which are surprisingly accurate approximations. Some of these are visible on the next figure:


The red solid line shows the power of 8, the blue dashed line the power of 7, and the green dashed line the power of 6. The difference between the analytical calculation and the approximation by the power of 8 is visualized on the next figure:


The dashed blue line is the approximation by the function 1/(1+x)^8 and the red line is the analytically calculated accurate value. The values on the X axis show the cosine of the angle between the normal and the eye vector.
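The comparison can be reproduced numerically. Below, the exact curve is the standard unpolarized Fresnel reflectance for the air-water indices, and the approximation is the reciprocal power from [3]; the grid of sample angles is my choice:

```python
import numpy as np

def fresnel_exact(cos_i, n1=1.000293, n2=1.333333):
    """Unpolarized Fresnel reflectance from Snell's law (the costly exact path)."""
    sin_t = n1 / n2 * np.sqrt(np.maximum(0.0, 1.0 - cos_i ** 2))
    cos_t = np.sqrt(np.maximum(0.0, 1.0 - sin_t ** 2))
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)          # average of the two polarizations

def fresnel_pow8(cos_i):
    """The reciprocal-power approximation from [3]: 1 / (1 + cos)^8."""
    return 1.0 / (1.0 + cos_i) ** 8

cos_vals = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(fresnel_exact(cos_vals) - fresnel_pow8(cos_vals)))
```

Both curves reach 1 at grazing incidence (cos = 0); the approximation deviates most in the mid-range but stays close over the whole interval.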

Simpler Solution

As the angle between the view vector and the normal vector grows, the amount of reflection gets higher. Based on this, [Riemer] used a simple approximation obtained by projecting the eye vector onto the normal vector of the water plane, as shown on the following image:


The amount of refraction (the refraction coefficient) can be easily calculated as the dot product of the eye and normal vectors, and the sum of the two coefficients is always 1.

A Realistic Compromise

The cheap calculations introduced in the previous "Simpler Solution" paragraph do not take the indices of refraction into account and diverge more strongly from the natural effect. This divergence results in an unnaturally strong reflection. [15] suggests a better approximation:

R(α) = R(0) + (1 - R(0)) * (1 - cos(α))^5,    where R(0) = ((n1 - n2) / (n1 + n2))^2
Where n1 and n2 are the indices of refraction of the involved materials, and α is the angle between the eye vector and the normal vector of the surface. For the air-water boundary, n1 = 1.000293 and n2 = 1.333333; this means that R(0) = 0.02037 and 1 - R(0) = 0.97963. [15] also visualizes the difference between this approximation and a simpler solution (1 - cos(α)):
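Assuming the approximation has the Schlick-like fifth-power form (an assumption on my part, but one that is consistent with the R(0) = 0.02037 value quoted in the text), it reduces to a couple of multiplications per pixel:

```python
def fresnel_schlick(cos_a, n1=1.000293, n2=1.333333):
    """Schlick-style Fresnel approximation: a fifth-power blend anchored at
    the normal-incidence reflectance R(0)."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2        # ~0.02037 for the air-water boundary
    return r0 + (1.0 - r0) * (1.0 - cos_a) ** 5

reflectance = fresnel_schlick(1.0)           # looking straight down: just R(0)
grazing = fresnel_schlick(0.0)               # grazing angle: full reflection
```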


Using Texture Lookup

To combine speed and accuracy, it is possible to precalculate the values of the Fresnel term for different angles and store them in a one-dimensional texture as a look-up table. During rendering, after calculating the dot product between the normal and the eye vector, we find the matching Fresnel value in the look-up table. This way the Fresnel term can be determined in a very fast and relatively accurate way. This approach is used in [6].
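A sketch of the lookup-table idea follows (the table size and the use of a Schlick-style function to fill it are my assumptions; any exact evaluation could fill the table instead, since the cost is paid only once):

```python
import numpy as np

def build_fresnel_lut(size=256, n1=1.000293, n2=1.333333):
    """Precompute Fresnel values into a 1D table indexed by the dot product
    between the normal and the eye vector (0 = grazing, 1 = straight down)."""
    cos_a = np.linspace(0.0, 1.0, size)
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_a) ** 5

def lookup_fresnel(lut, dot_nv):
    """At render time a clamp, a multiply and an index replace the full formula."""
    idx = int(np.clip(dot_nv, 0.0, 1.0) * (len(lut) - 1))
    return lut[idx]

lut = build_fresnel_lut()
f = lookup_fresnel(lut, 0.5)    # Fresnel term for a 60-degree viewing angle
```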

Rendering various water phenomena

Generating sprays using particle systems

Particle systems can be good solutions for real-time interaction between external objects and the water surface. They can efficiently animate a moving surface as well, but they are usually applied together with other techniques. Flowing water, water drops, spray and waterfalls are just some of the water-related effects that can be implemented through particle systems.

Sprays are modeled as a separate subsystem in [10], as mentioned earlier in The Navier-Stokes Equations chapter. When an area of the surface has high upward velocity, particles are distributed over that area. Particles don't interact with each other, they only fall back to the water surface because of the gravity, and then they are removed from the system.

[3] uses a similar particle model to simulate water spray. Simple Newtonian dynamics are taken into account: the water surface's velocity at the spawning position, plus some turbulence, determines the particles' initial velocity. It is then updated according to gravity, wind and other possible global forces. Rendering is done with a mixture of alpha-transparency and additive-alpha sprites. For more details and screenshots, see [3]. These techniques can be visually very convincing for spray simulation.

Creating Choppy Waves

The general methods discussed in these pages use randomly generated or sinusoidal wave formations. They can be entirely sufficient for water scenes under normal conditions, but there are cases when choppy waves are needed, for example stormy weather, or shallow water where the so-called "plunging breaker" waves are formed. In the following paragraphs I briefly introduce some approaches to get choppier waves.

Analytical Deformation Model

[25] describes an efficient method that disturbs the displaced vertex positions analytically in the vertex shader. Explosions are important in computer games; to create an explosion effect, they use the following formula:


where t is the time, r is the distance from the explosion center in the water plane and b is a decimation constant. The values of I0, w, and k are chosen according to a given explosion and its parameters.

For rendering, they displace the vertex positions according to the previous formula, which results in convincing explosion effects.
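Since the exact formula is not reproduced here, the following sketch assumes a damped traveling ring wave built from the parameters named in the text (I0 amplitude, w frequency, k wave number, b decimation constant); treat it as an illustration of the idea rather than as the formula of [25]:

```python
import numpy as np

def explosion_offset(r, t, i0=1.0, w=6.0, k=2.0, b=0.8):
    """Assumed analytical deformation: an outward-travelling cosine ring,
    exponentially damped with the distance r from the explosion center.
    i0 = amplitude, w = frequency, k = wave number, b = decimation constant."""
    return i0 * np.exp(-b * r) * np.cos(k * r - w * t)

# Displace each vertex of a height-field grid by its planar distance
# from the blast center (here the grid origin).
xs = np.linspace(-5.0, 5.0, 32)
gx, gy = np.meshgrid(xs, xs)
r = np.hypot(gx, gy)                    # distance in the water plane
height_offset = explosion_offset(r, t=0.5)
```

In a real implementation this evaluation happens per vertex in the vertex shader, added on top of the base wave height.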

Dynamic Displacement Mapping

[25] introduces another approach as well. The necessary vertex displacement can be rendered in a separate pass and later combined with the water height field. This way, some calculations can be done before running the application to gain performance. Depending on the basis of the water rendering, the displacements can be computed by the above-mentioned analytical model or, for example, by the Navier-Stokes equations.

Although these techniques can produce realistic water formations, they need huge textures to describe the details. The available texture memory and shader performance can limit the application of these approaches.

Direct displacement

In [3], the displacement vectors are computed with the FFT. Instead of modifying the height field directly, the vertices are horizontally displaced using the following equation:

X = X + λD(X,t)

where λ is a constant controlling the amount of displacement, and D is the displacement vector. D is computed with the following sum:
D(X,t) = Σ_K -i (K/k) h(K,t) exp(i K·X)

where K is the wave direction, t is the time, k is the magnitude of the vector K, and h(K,t) is a complex number representing both the amplitude and the phase of the wave.

The difference between the original and the displaced waves is visualized on the following figure. The displaced waves on the right are much sharper than the original ones:


The source of the image is [3].

Choppy Waves Using Gerstner Waves

If the rendered water surface is defined by the Gerstner equations, our task is easier, since Gerstner waves are able to describe choppy wave forms. The amplitudes need to be limited in size, otherwise the breaking waves can look unrealistic. A fine solution for creating choppy waves is the summation of Gerstner waves with different amplitudes and phases, carried out through the following sum:
x = x0 - Σi (Ki/ki) Ai sin(Ki·x0 - ωi t + φi),    y = Σi Ai cos(Ki·x0 - ωi t + φi)

where Ki is the set of wave vectors, ki the set of their magnitudes, Ai the set of amplitudes, ωi the set of frequencies, φi the set of phases and N is the number of sine waves.
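The summation can be sketched directly (the wave parameters below are illustrative; each wave pushes the surface point horizontally toward the crests and lifts it vertically, which is what sharpens the wave tops):

```python
import numpy as np

def gerstner(x0, t, waves):
    """Sum of Gerstner waves at surface point x0 and time t.
    `waves` is a list of (K, A, omega, phi) tuples:
    wave vector, amplitude, frequency, phase."""
    x = np.array(x0, dtype=float)
    y = 0.0
    for K, A, omega, phi in waves:
        K = np.asarray(K, dtype=float)
        k = np.linalg.norm(K)                # magnitude of the wave vector
        arg = K @ x0 - omega * t + phi
        x -= (K / k) * A * np.sin(arg)       # horizontal displacement
        y += A * np.cos(arg)                 # vertical displacement
    return x, y

# Three illustrative waves with different directions, amplitudes and phases.
waves = [(np.array([0.6, 0.0]), 0.5, 1.2, 0.0),
         (np.array([0.0, 0.9]), 0.3, 1.7, 1.0),
         (np.array([0.4, 0.4]), 0.2, 2.1, 2.0)]
pos, height = gerstner(np.array([1.0, 2.0]), 0.0, waves)
```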

Sum of 3 Gerstner waves is visualized on the following figure:


The source of the image is [27].

Rendering caustics

Some caustics rendering techniques use environment mapping. Although it is supported by graphics hardware, it is only a good approximation when the reflecting/refracting object is small compared to its distance from the environment. This means environment mapping can be used only when the objects are close to the water surface. Objects under dynamic water surfaces need a frequently updated environment map, so the usability of environment maps for caustics rendering is limited.

Several approaches render accurate caustics through ray tracing, but these are generally too time-consuming for real-time applications (see [23]). Other techniques approximate textures of underwater caustics on a plane using wave theory. Although these moving textures can be rendered onto arbitrary receivers at interactive frame rates, the repeating texture patterns are usually disturbing.

Graphics hardware has made significant progress in performance recently, and many hardware-based approaches have been developed for rendering caustics. Exact caustics calculation needs intersection tests between the objects and the viewing rays reflected at the water surface. Generally, the illumination distribution of the object surfaces has to be computed, which is difficult and time-consuming. Backward ray tracing, adaptive radiosity textures and curved reflectors are published methods for creating realistic images of caustics, but they cannot run in real time because of the huge computational cost. For more details about these approaches, see [18], [19] and [20].

[17] describes a technique for rendering caustics fast. The method takes three optical effects into account: reflective caustics, refractive caustics, and reflection/refraction on the water surface. It calculates the illumination distribution on the object surfaces efficiently using the GPU. In their texture-based volume rendering technique, objects are sliced and stored in two- or three-dimensional textures. The final image is created by rendering the slices in back-to-front order, and the intensities of the caustics are approximated on the slices only, not on the entire object. The method is visualized on the next figure:


The source of the image is: [17].

Although this reduces the computation time, it does not enable real-time caustics rendering: the caustics map cannot be refreshed for every frame with this method.

Caustics maps store the intensities of the caustics. They are generated by projecting the triangles of the water surface onto the objects in the water; the intersecting triangles influence the amount of light reaching the object. The intensity of a caustic triangle is proportional to the area of the water-surface triangle divided by the area of the caustic triangle. The more projected triangles intersect each other at a given point, and the higher their intensities, the brighter that point is. In the end, the caustics map and the original illumination map are merged, as on the next figure:
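The area-ratio rule above can be sketched as follows; the helper names are illustrative:

```python
def triangle_area(p0, p1, p2):
    """Area of a 2D triangle from the cross product of two edges."""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p0[0], p2[1] - p0[1]
    return abs(ax * by - ay * bx) * 0.5

def caustic_intensity(surface_tri, caustic_tri, eps=1e-8):
    """Intensity of one projected triangle: the area of the water-surface
    triangle divided by the area of its projection, so triangles that the
    surface focuses into a small spot become proportionally brighter."""
    return triangle_area(*surface_tri) / max(triangle_area(*caustic_tri), eps)
```

Accumulating this value over all projected triangles that cover a point, for example with additive blending, gives the brightness of that point in the caustics map.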


The source of the image is: [17].

[21] introduces a faster approach for rendering caustics. The method emits particles from the light source and gathers their contributions as viewed from the eye. To gain efficiency, photons are emitted in a regular pattern instead of along random paths; the pattern is defined by the image pixels of a rendering from the viewpoint of the light. Put another way: counting how many times the light source sees a particular region is equivalent to counting how many particles hit that region. For multiple light sources, multiple rendering passes are required. Several steps are approximated to reduce the required resources, for example interpolation among neighboring pixels, skipping volumetric scattering effects, or the restriction to point lights.

In [22], a more accurate method is described. In the first pass, the positions of the receivers are rendered to a texture. In the second pass, a bounding volume is drawn for each caustic volume; for the points inside it, the caustic intensity is computed and accumulated in the frame buffer. They also take warped caustic volumes into account, which the other caustics-rendering techniques skip. The method achieves real-time performance for general caustics computation, but it is not fast enough for large water areas: for a fully dynamic water surface with dynamic lighting, it rendered the following image at 1280 x 500 pixels with 0.2 fps:


For more details, see [22].

In [3], the approach is optimized to real-time performance. Only first-order rays are considered, and the receiving surface is assumed to lie at a constant depth. Incoming light beams are refracted at the surface, and the refracted rays are intersected against the given plane. The next figure illustrates the method; it shows the projection of four water-surface triangles:


To reduce the necessary calculations, only a small part of the caustics map is computed, and the paper shows how to tile it seamlessly over the entire image. Finally, the sun's ray direction and the positions of the triangles are used to calculate the texture coordinates by projection. For further discussion of this method, see [3].
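The core of this projection, refracting a light ray at the surface and intersecting it with a constant-depth plane, can be sketched as follows; this is a simplified single-ray version, not the full tiling scheme of [3]:

```python
import math

WATER_ETA = 1.0 / 1.33  # air-to-water ratio of refractive indices

def refract(d, n, eta):
    """Refract the unit direction d at the unit normal n (Snell's law).
    Returns None on total internal reflection, which cannot occur when
    entering water from air (eta < 1)."""
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    s = eta * cos_i - math.sqrt(k)
    return (eta * d[0] + s * n[0], eta * d[1] + s * n[1], eta * d[2] + s * n[2])

def hit_bottom(p, d, depth):
    """Intersect the ray p + t*d with the receiver plane y = -depth."""
    t = (-depth - p[1]) / d[1]
    return (p[0] + t * d[0], -depth, p[2] + t * d[2])
```

Projecting the three vertices of a water-surface triangle this way yields the caustic triangle whose intensity is accumulated into the map.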

The main ideas of caustics rendering were briefly introduced. The accurate methods use ray tracing, which cannot reach real-time performance without strong approximations. The most often used approaches therefore rely on pre-generated caustics textures and try to hide the visible repetition.

Foam generation

To get the most realistic foamy waves, particle systems are the best approach. Although they can simulate every property of foam, they are efficient enough only for small water surfaces, so other methods have to be considered for large scenes.

The main idea for foam generation in the water-surface rendering literature is the application of a precalculated foam texture. The choppiness of the waves is evaluated, and where it exceeds a specific level, the foam texture is blended into the final color. In [Bibliography item UVDfRWR not found.], the following formula is used to calculate the transparency of the foam texture:


where Hmax is the height at which the foam is at its maximum, H0 is the base height, and H is the current height.
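Given these definitions, one plausible reading of the formula is a linear ramp between H0 and Hmax, clamped to [0, 1]; the exact published form may differ:

```python
def foam_alpha(H, H0, Hmax):
    """Foam-texture transparency: zero at the base height H0, full
    strength at Hmax, linear in between (clamped to [0, 1]).
    A sketch of the blend described in the text, not the exact formula."""
    if Hmax == H0:
        return 0.0
    return min(max((H - H0) / (Hmax - H0), 0.0), 1.0)
```

The returned alpha is then used to blend the foam texture over the water color at that vertex.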

If the foam texture is animated, it can also show the formation and dissipation of the foam. In [3] the texture is not animated, but its transparency is recalculated continuously: the alpha value decreases steadily, and when the choppiness is high enough, it is increased over a few frames to get a good visual result.

The limitations of this technique are the texture repetition and the lack of motion. The repeating patterns can be noticed because they are the same everywhere, and the foam does not move along the water surface according to its slope.

The Kelvin wedge

Producing this phenomenon is easier if the base of the water-rendering system can receive external forces, as for example FFT- and Navier-Stokes-based systems do. In [24], a different approach is used as the core of the wave simulation: the motion vector between two frames determines how the water height field is altered for the following frame, and an additive contribution is computed for each swimming object. The result is very realistic, as the next figure shows:


This idea can also be implemented in FFT-based systems. The waves behind any moving object can be described, and these patterns can be added to the system when necessary. The most important parameters are the speed of the boat and the type and depth of the water.

If a water-rendering system uses the Navier-Stokes equations, the Kelvin wedge can be produced by adding external forces to the system; only experimentation is needed to get realistic results in the various cases.
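In deep water, the Kelvin wedge has the classical half-angle arcsin(1/3) ≈ 19.5°, independent of the boat's speed. A simple containment test for deciding where wake contributions should be added to the height field might look like this (all names are illustrative):

```python
import math

KELVIN_HALF_ANGLE = math.asin(1.0 / 3.0)  # ~19.47 degrees in deep water

def in_kelvin_wedge(boat_pos, boat_dir, point):
    """True if the 2D point lies inside the wedge trailing the boat.
    boat_dir is the unit direction of travel; the wedge opens backwards."""
    vx, vy = point[0] - boat_pos[0], point[1] - boat_pos[1]
    dist = math.hypot(vx, vy)
    if dist == 0.0:
        return True
    # Angle between the backward direction and the vector to the point.
    back = (-boat_dir[0], -boat_dir[1])
    cos_a = (vx * back[0] + vy * back[1]) / dist
    return cos_a >= math.cos(KELVIN_HALF_ANGLE)
```

Wave patterns would then be added only at grid points for which this test succeeds.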

1. Kei Iwasaki, Yoshinori Dobashi, Tomoyuki Nishita - An Efficient Method for Rendering Underwater Optical Effects Using Graphics Hardware - Computer Graphics Forum, 2002.
2. Geoffrey Irving, Eran Guendelman, Frank Losasso, Ronald Fedkiw - Efficient Simulation of Large Bodies of Water by Coupling Two and Three Dimensional Techniques - ACM SIGGRAPH 2006.
3. Lasse Staff Jensen, Robert Goliáš - Deep-Water Animation and Rendering.
4. Meshuggah Demo and Effect browser.
7. Morgan Kaufmann: Level of Detail for 3D Graphics.
12. Matt Zucker - The Perlin noise math FAQ.
14. Wolfgang F. Engel - Direct3D ShaderX.
15. Wolfgang F. Engel - ShaderX2: Shader Programming Tips & Tricks with DirectX 9.
16. Game development laboratory material 2, BME, AUT.
17. Kei Iwasaki, Yoshinori Dobashi, Tomoyuki Nishita - A Fast Rendering Method for Refractive and Reflective Caustics Due to Water Surfaces.
18. J. Arvo - Backward Ray Tracing - SIGGRAPH.
19. P. S. Heckbert - Adaptive Radiosity Textures for Bidirectional Ray Tracing - Proc. SIGGRAPH.
20. D. Mitchell, P. Hanrahan - Illumination from Curved Reflections - Proc. SIGGRAPH.
21. Chris Wyman, Scott Davis - Interactive Image-Space Techniques for Approximating Caustics.
22. Manfred Ernst, Tomas Akenine-Möller, Henrik Wann Jensen - Interactive Rendering of Caustics Using Interpolated Warped Volumes.
23. Mark Watt - Light-Water Interaction Using Backward Beam Tracing.
24. J. Loviscach - A Convolution-Based Algorithm for Animated Water Waves.
26. Damien Hinsinger, Fabrice Neyret, Marie-Paule Cani - Interactive Animation of Ocean Waves.
28. Greg Snook - Real-Time 3D Terrain Engines Using C++ and DirectX 9 - Charles River Media.