Complex Ocean Shader with Example


To demonstrate the differences between DirectX SDK versions, I chose today's latest DirectX SDK instead of the XNA Development Kit. The two frameworks provide different functions and different classes, and development requires a different approach in each.

In XNA, instances of the Model class handle the objects to be rendered, while in the DirectX SDK the Mesh class serves the same purpose. The Model class is more complex and can manage several Mesh levels together, but the DirectX SDK offers numerous functions to create an object and set its properties, while in XNA we can only import models or set the model's attributes directly, one by one. Because of this, it is currently more comfortable to work with the DirectX SDK, but XNA will eventually replace the DirectX SDK's support for managed development.

In this demo application I demonstrate the steps to create a beach with waves. The main goals are:

  • Creating waves in the vertex shader that travel towards the beach
  • Applying additional compound sinusoidal waves to distort the regular wave shapes
  • Adjusting the wave speed, colors, time of day, reflectance etc. through user input
  • Adding a foam texture for a more realistic result

Different approaches are needed to handle spray, god rays, exact water color etc.; they are out of the scope of this demo application. For specific purposes, the techniques discussed in the chapter Water Mathematics can be added to get more realistic scenes, but they always need to be fine-tuned so as not to destroy the other effects.


Vertices of the water surface

I created the vertices of the water surface according to the chapter LOD algorithms on water surfaces. The grid is always placed in front of the camera to cover only the necessary part of the scene. The performance gain is enormous; I experimented with huge grid sizes, and in the final version I use 100 x 100 = 10000 vertices. This size can be handled in real time, though admittedly I don't render any unnecessary objects: the GPU works only on my water surface. The latest graphics cards are much faster than the one I use for development (an ATI Radeon 9600), but in general other elements are also calculated and rendered, not only the ones that are important for the water surface. The optimal size of the grid always depends on the target hardware and on the expected realism.

The approach discussed in [BMELAB2] determines the horizontal position of the vertices relative to the camera only once, at the beginning. This means that if we change the angle between the camera and the water surface, we can extend the visible area to places which are not covered by the grid. This can be avoided by applying projected grids, as the chapter "Using projected grids" describes. Since the demo application for that technique is also available on-line, I don't need to demonstrate the same approach in my application; I simply don't allow the angle between the camera and the grid to change.

The positions of the vertices are determined (based on the equations described in the chapter LOD algorithms on water surfaces) by the following lines of code, which take the window size, the aspect ratio and the size of the sky-dome into account:

float aspectRatio = Camera.AspectRatio * 1.55f;

// Distance of row i from the camera: rows get sparser towards the horizon
float d = 4000f / (nx - 1.0f) * (1.0f / (1.0f - (float)i / nx) - 1.0f);

// Spread column j across the view, scaled by the aspect ratio and the distance
Vector3 pos = new Vector3(d, aspectRatio * d * (float)(j - (ny - 1) / 2f) / ny, 0);

return (object)new CustomVertex.PositionOnly(pos);

The aspect ratio determines the distance between the vertices across the viewing direction, while the variable d determines the coordinates along the viewing direction so that the triangles cover equal areas in the final picture.
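The behaviour of d can be checked with a small numeric sketch (plain Python; the grid dimension nx is a stand-in for the real value, although the demo indeed uses a 100 x 100 grid): the row spacing grows monotonically with distance from the camera, which is exactly what keeps the on-screen triangle size roughly constant.

```python
# Hypothetical grid dimension standing in for the demo's value
nx = 100

def row_distance(i, nx):
    # d = 4000 / (nx - 1) * (1 / (1 - i/nx) - 1), as in the demo code
    return 4000.0 / (nx - 1.0) * (1.0 / (1.0 - float(i) / nx) - 1.0)

distances = [row_distance(i, nx) for i in range(nx)]
spacings = [b - a for a, b in zip(distances, distances[1:])]

# The first row starts at the camera...
assert distances[0] == 0.0
# ...and each successive row is farther from the previous one than the
# last pair was, so distant triangles cover more world space.
assert all(s2 > s1 for s1, s2 in zip(spacings, spacings[1:]))
```

The hyperbolic term 1 / (1 - i/nx) is what pushes the last rows out towards the horizon while keeping most of the vertex budget near the camera.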

Finally, to get the expected result, the water is always rendered in front of the camera. The camera directions form a vector which is used to generate an inverse of the camera matrix. This inverse transforms the water grid according to the current position of the camera:

Vector3 pos = new Vector3(Camera.Position.X, Camera.Position.Y, 0);

// Project the camera direction onto the water plane and rotate it by 90 degrees
// to obtain the "up" vector for the look-at matrix
Vector3 dir = Vector3.TransformCoordinate(
    new Vector3(Camera.Direction.X, Camera.Direction.Y, 0),
    Matrix.RotationZ((float)Math.PI / 2));

// The inverted look-at matrix moves the grid from camera space into world space
Game.Device.Transform.World = Matrix.Invert(Matrix.LookAtLH(pos, pos + new Vector3(0, 0, 1), dir));
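Why the inverted look-at matrix does the job can be sketched numerically (NumPy, row-vector convention as in Direct3D; the camera position and vectors below are arbitrary test values): LookAtLH builds the matrix that maps world space into camera space, so its inverse maps the grid, which is defined around its local origin, onto the camera's position and orientation.

```python
import numpy as np

def look_at_lh(eye, at, up):
    # Row-vector convention, matching D3DXMatrixLookAtLH
    zaxis = at - eye
    zaxis = zaxis / np.linalg.norm(zaxis)
    xaxis = np.cross(up, zaxis)
    xaxis = xaxis / np.linalg.norm(xaxis)
    yaxis = np.cross(zaxis, xaxis)
    m = np.identity(4)
    m[:3, 0] = xaxis
    m[:3, 1] = yaxis
    m[:3, 2] = zaxis
    m[3, :3] = [-xaxis @ eye, -yaxis @ eye, -zaxis @ eye]
    return m

eye = np.array([10.0, -3.0, 0.0])        # camera position with Z dropped, as in the demo
at = eye + np.array([0.0, 0.0, 1.0])     # looking along +Z, i.e. pos + (0, 0, 1)
up = np.array([0.0, 1.0, 0.0])           # stand-in for the rotated direction vector

world = np.linalg.inv(look_at_lh(eye, at, up))

# The grid's local origin lands exactly at the camera position
origin = np.array([0.0, 0.0, 0.0, 1.0]) @ world
assert np.allclose(origin[:3], eye)
```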

The result can be seen in the next figures:

Surface vertices (left); surface vertices in use (right)

The figure on the left shows the alignment of the triangles which are in use in the right figure.

Waves - Getting everything in motion

The vertical position of the grid elements is determined by the vertex shader. It does not change the horizontal positions; the waves are formed only by changing the Z coordinates. The tricky part is the displacement equations. Although there are complex frequency-filtering techniques which can generate waves that always travel towards the beach, they are out of the scope of this paper. I used an easier approach: my island is almost circle-shaped, and I generate dominant radial waves travelling towards the centre of the beach. This resembles natural waves, which generally travel towards the dry land.

The dominant radial waves travel towards a center point which changes with the size of the island. The phase of the waves depends on the distance from the center point, and on two other variables influencing the spatial and temporal frequency of the waves.

//radial waves
float distance = length(float2(pPos.x-64*xIslandScale,pPos.y-64*xIslandScale));
float Phase = distance * -rSpaceFreq + Time * 2 * -TimeFreq;
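A quick numeric check of the phase formula (plain Python; the frequency values and the island scale are made up): every point at the same distance from the island center gets the same phase, so the wavefronts are concentric circles travelling inward.

```python
import math

island_scale = 1.0
center = (64 * island_scale, 64 * island_scale)   # as in the shader: 64 * xIslandScale
space_freq, time_freq, t = 0.25, 1.0, 2.0          # arbitrary demo values

def phase(x, y):
    # distance * -rSpaceFreq + Time * 2 * -TimeFreq, as in the shader
    distance = math.hypot(x - center[0], y - center[1])
    return distance * -space_freq + t * 2 * -time_freq

# Two points at the same distance from the center share a phase,
# i.e. they lie on the same circular wavefront.
assert abs(phase(64 + 30, 64) - phase(64, 64 + 30)) < 1e-9
# The phase decreases with distance, so the waves propagate inward over time.
assert phase(64 + 40, 64) < phase(64 + 10, 64)
```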

The height of the vertices is calculated from a certain power of sinusoidal waves, which results in steep waves on both sides. To reduce the slope on the back side of the waves, the amplitudes are reduced depending on the wave phase (if the product of the sine and cosine of the phase is less than 0, we are on the back side of the wave). To avoid negative waves, the sine function is clamped to positive values. A correction term also modifies the final result to allow the user to adjust the water height:

//Calculating the wave height
float Cos, Sin;
sincos(Phase, Sin, Cos); // sine and cosine of the wave phase
int power = 7;
float temp2 = 1;
// Back side of the wave (sine and cosine have opposite signs): double and shift
if (Cos*Sin < 0) temp2 = 2;
float WaveHeight = clamp(pow(Sin,power)*rAmplitudes*temp2 - rAmplitudes*(temp2-1), 0, rAmplitudes)
    + (xIslandScale*128 - distance)/xWaterSlope/2 + xWaterLevel;
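The effect of the temp2 trick can be verified numerically (plain Python sketch with rAmplitudes set to 1, ignoring the island-slope and water-level correction terms): on the back side of the wave the seventh-power sine is doubled and shifted down by the amplitude, so the raised region shrinks and the back slope becomes steeper.

```python
import math

amplitude = 1.0
power = 7

def wave_height(phase):
    s, c = math.sin(phase), math.cos(phase)
    # Back side of the wave: double the profile and shift it down by the amplitude
    temp2 = 2.0 if c * s < 0 else 1.0
    return max(0.0, min(amplitude,
        (s ** power) * amplitude * temp2 - amplitude * (temp2 - 1)))

# Front side of the crest (sine and cosine both positive): the plain sin^7 profile
assert math.isclose(wave_height(math.pi / 4), math.sin(math.pi / 4) ** 7)
# Back side (sine > 0, cosine < 0): the doubled-and-shifted profile drops to
# zero sooner, producing a steeper back slope
assert wave_height(3 * math.pi / 4) < wave_height(math.pi / 4)
# The clamp removes negative heights in the troughs
assert wave_height(-math.pi / 2) == 0.0
```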

At this point we have absolutely regular waves travelling towards the centre point of the island, but we need to add some smaller waves going in different directions for a more realistic result. I add four waves with different amplitudes and phases, going in different directions, to simulate the random-like waves of natural water surfaces. The four waves can be handled together as a vector (float4): as we always perform the same operations on them, calculating four waves costs about the same as calculating one. We can calculate the height of the sum of four sinusoidal waves with the following lines (the wave directions and the frequencies are four-element vectors):

//calculating the correction
float4 correctionPhase = (WaveDirX * pPos.x + WaveDirY * pPos.y) * SpaceFreq + 10 * Time * TimeFreq;
float4 cCos, cSin;
sincos(correctionPhase, cSin, cCos); // component-wise sine and cosine of all four phases
float correctionHeight = dot(cSin, Amplitudes/2);
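The float4 trick simply evaluates four scalar waves in one vector operation. A plain-Python equivalent (with made-up directions, frequencies and amplitudes) shows that the dot product is the same as summing the four sine heights one by one:

```python
import math

# Four hypothetical correction waves: direction (x, y), spatial frequency,
# temporal frequency, amplitude. In the shader these live in float4 registers.
wave_dir_x = [1.0, -0.5, 0.3, -0.8]
wave_dir_y = [0.2, 0.9, -0.7, 0.4]
space_freq = [0.30, 0.45, 0.60, 0.25]
time_freq = [1.0, 1.3, 0.8, 1.1]
amplitudes = [0.40, 0.25, 0.15, 0.10]

def correction_height(x, y, t):
    heights = []
    for k in range(4):
        # (WaveDirX * x + WaveDirY * y) * SpaceFreq + 10 * Time * TimeFreq
        phase = (wave_dir_x[k] * x + wave_dir_y[k] * y) * space_freq[k] \
                + 10 * t * time_freq[k]
        heights.append(math.sin(phase) * amplitudes[k] / 2)
    # Equivalent to dot(cSin, Amplitudes / 2) on float4s in HLSL
    return sum(heights)

h = correction_height(12.0, 7.0, 0.5)
# The correction term is bounded by half the summed amplitudes
assert abs(h) <= sum(amplitudes) / 2
```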

We need to add this correction term to the wave height and calculate the normal vectors to allow the nice visual effects applied in the pixel shader. A screen-shot of the result is visualized in the next figure:

Radial waves

Note that the image shows the center point of the waves. The radial waves are the most significant; the other waves only influence the local view - the pattern of the surface seen from this distance.


Optical effects

Reflection is one of the most significant optical effects needed to render ocean surfaces. I tried two different methods: rendering reflection using cube-maps, and applying the same technique as in the lake-water shader. Like reflection maps, refraction maps are pre-generated in a separate rendering phase.

float2 RefractionSampleTexCoords;
RefractionSampleTexCoords.x = IN.RefractionMapSamplingPos.x/IN.RefractionMapSamplingPos.w/2.0f + 0.5f;
RefractionSampleTexCoords.y = -IN.RefractionMapSamplingPos.y/IN.RefractionMapSamplingPos.w/2.0f + 0.5f;

float4 refractiveColor = tex2D(RefractionTextureSampler, RefractionSampleTexCoords-newNormal*0.2f);
float4 reflectiveColor = tex2D(ReflectionTextureSampler, RefractionSampleTexCoords-newNormal*0.2f);
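The coordinate mapping above is ordinary projective texturing. A quick numeric check (plain Python, arbitrary clip-space values) shows how a post-projection position is turned into [0, 1] texture coordinates, with Y negated because texture space grows downwards:

```python
def refraction_tex_coords(clip_x, clip_y, clip_w):
    # Perspective divide brings x/w and y/w into [-1, 1] (NDC),
    # then the scale-and-bias maps them into [0, 1] texture space
    u = clip_x / clip_w / 2.0 + 0.5
    v = -clip_y / clip_w / 2.0 + 0.5
    return u, v

# The center of the screen samples the center of the map...
assert refraction_tex_coords(0.0, 0.0, 1.0) == (0.5, 0.5)
# ...and the top-left corner of NDC (-1, +1) maps to texel (0, 0)
assert refraction_tex_coords(-2.0, 2.0, 2.0) == (0.0, 0.0)
```

The shader then perturbs these coordinates with the distorted normal, which is what makes the reflection and refraction wobble with the waves.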
The normal vectors of the triangles are distorted by a bump-map to obtain more random surface normals, like those of real water surfaces:

float3 normal = tex2D(BumpMapSampler, IN.texCoord0.xy).xyz * 2.0 - 1.0 + float3(0,0,7);
float3 newNormal = normalize(IN.norm + normal*10);

Using this normal vector, ambient and specular light are also calculated:

//lighting factor computation
float3 LightDirection = normalize(float3(-13,-2,4));
float lightingFactor = 1;
if (xEnableLighting)
    lightingFactor = saturate(saturate(dot(IN.norm, LightDirection)) + xAmbient); // was newNormal

float3 lightDir = normalize(float3(3,12,4));
// The original line was truncated here; a plausible reconstruction with assumed
// variable names: the vector from the surface point towards the camera
float3 eyeVector = normalize(xCameraPos - IN.Position3D);
float3 halfVector = normalize(lightDir + eyeVector);
float temp = pow(saturate(dot(halfVector, newNormal)), 30);
float3 specColor = float3(0.98, 0.97, 0.7) * temp;
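The specular term above is the Blinn half-vector model. A plain-Python version (with made-up light and eye vectors) shows that the highlight peaks when the half vector aligns with the surface normal and falls off sharply with the exponent of 30:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def specular(light_dir, eye_vec, normal, shininess=30):
    # Blinn-Phong: highlight strength is (H . N)^shininess, where H is the
    # half vector between the light and eye directions (clamped for safety)
    half = normalize(tuple(l + e for l, e in zip(light_dir, eye_vec)))
    return max(0.0, dot(half, normal)) ** shininess

n = (0.0, 0.0, 1.0)
# Mirror-like configuration: light and eye symmetric about the normal
peak = specular(normalize((1.0, 0.0, 1.0)), normalize((-1.0, 0.0, 1.0)), n)
# Off-peak configuration: the half vector tilts away from the normal
off = specular(normalize((1.0, 0.0, 0.2)), normalize((-1.0, 0.0, 1.0)), n)
assert math.isclose(peak, 1.0)
assert off < peak
```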

A dull color, weighted by the wave phase, influences the final result to give a more realistic look:

float3 dullColor = float3(0.1,0.25,0.5);
//float dullFactor = 0.2f;
float dullFactor = xDullFactor;
dullFactor = saturate(dullFactor * (1+IN.phase*IN.phase*IN.phase*IN.phase));

finalColor = finalColor *(1 - dullFactor) + dullColor *dullFactor;
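The last line is a plain linear blend. A Python sketch (arbitrary colors, and an assumed phase input like the shader's IN.phase) shows how the fourth power of the phase pulls the color towards the dull blue-grey near the wave crests:

```python
def blend_dull(final_color, phase, dull_factor=0.2):
    dull_color = (0.1, 0.25, 0.5)
    # The fourth power of the phase strengthens the dulling where |phase| is large;
    # min() plays the role of saturate() in the shader
    f = min(1.0, dull_factor * (1 + phase ** 4))
    return tuple(c * (1 - f) + d * f for c, d in zip(final_color, dull_color))

water = (0.0, 0.4, 0.6)
calm = blend_dull(water, phase=0.0)
crest = blend_dull(water, phase=1.5)
# The larger the phase term, the closer the result moves to the dull color
assert abs(crest[0] - 0.1) < abs(calm[0] - 0.1)
```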

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License