The only thing you need to create an image of a virtual world is to assign a color to every pixel on your display. This sounds easy, but for every pixel you have to determine which object is visible (which of the overlapping objects is nearest to the camera), how much light that surface receives, and then calculate the final color of the point from many different kinds of data, for instance the viewing angle, the positions of the light sources, the normal vector of the surface, and so on. The mathematical background of these calculations is well established; you only have to implement it to get surprisingly realistic images. However, you run into difficulties if you want to use these so-called ray-tracing techniques for motion pictures instead of still images. The main problem is speed: repeating these calculations for every row (e.g. 900) and every column (e.g. 1440) of the image 30 times per second requires 900 × 1440 × 30 = 38,880,000 iterations through time-consuming mathematical operations every second, which is not feasible for continuous video. (Time is not a critical factor if you do not need continuous motion; for still images you can rely on almost any picture-producing method.) To meet this challenge you need quicker alternatives, simplifications and heuristics that trade some realism for faster calculation.
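To make the per-pixel workload concrete, here is a minimal sketch of the visibility step described above, assuming a hypothetical scene made of spheres (real ray tracers also compute lighting, shadows and reflections, which is what makes each iteration so expensive). The function and scene names are illustrative, not part of any real API:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance along the ray to the sphere, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is assumed to be normalized (a == 1)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height, spheres):
    """For every pixel, find which object is visible (the nearest hit)."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a viewing ray through a simple pinhole camera.
            dx = (x + 0.5) / width - 0.5
            dy = (y + 0.5) / height - 0.5
            norm = math.sqrt(dx * dx + dy * dy + 1.0)
            direction = (dx / norm, dy / norm, 1.0 / norm)
            # Test every object and keep the closest hit: this is the costly
            # step that must repeat for every pixel of every frame.
            nearest, hit = math.inf, None
            for center, radius, color in spheres:
                t = intersect_sphere((0.0, 0.0, 0.0), direction, center, radius)
                if t is not None and t < nearest:
                    nearest, hit = t, color
            row.append(hit)
        image.append(row)
    return image
```

Even this stripped-down visibility test runs the inner loop once per pixel per object per frame, which illustrates why a 1440 × 900 image at 30 frames per second is out of reach for a naive implementation.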

Tessellation of the virtual world

Virtual worlds comprise different surfaces, and every surface is formed by triangles. The reason is that most geometric operations change the fundamental shape of other polygons: they transform a circle into an ellipse, or a square into some other parallelogram. Polygons can lose their original defining properties this way, which makes them change during processing and makes them almost impossible to describe with simple mathematical elements. (For instance, a circle can be described by a center point and a radius. A projection transforms it into an ellipse, and an ellipse cannot be described by a single center point any more; it needs two focal points.) But if you carry out the same operations on triangles, the result is always a triangle, so the original form of the description is never lost. Hence, every geometric object is tessellated into triangles (this is also called triangulation), and all further processing is carried out on these triangles, whose corner points are called vertices. An example of polygon triangulation is shown in the following figure:


[Figure: an example of polygon triangulation; the image and its source attribution are not preserved.]
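The simplest case of the tessellation described above is a convex polygon, which can be split into triangles by "fanning" out from one vertex; the sketch below assumes convex input (concave polygons need a more general method such as ear clipping):

```python
def fan_triangulate(vertices):
    """Split a convex polygon (a list of (x, y) points in order)
    into triangles that all share the first vertex."""
    return [(vertices[0], vertices[i], vertices[i + 1])
            for i in range(1, len(vertices) - 1)]

# A square becomes two triangles sharing the corner (0, 0):
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangles = fan_triangulate(square)
```

An n-sided convex polygon always yields n − 2 triangles this way, and any transformation applied to the vertices afterwards leaves each piece a triangle.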


Before DirectX 8.0, graphics processors offered only a fixed-function pipeline, which limited the range of rendering techniques available. The newly introduced “programmable pipeline” put a new weapon into the hands of programmers. The essential innovation was the two shaders: the vertex shader and the pixel shader. The vertex shader operates on every vertex of the virtual world, while the pixel shader does the same for every pixel.

Vertex shaders run once for each vertex passed to the graphics processor. They transform the 3D coordinates of the vertices into the 2D coordinates of the screen. They can manipulate properties such as position, color and texture coordinates, but they cannot create new vertices. The output of the vertex shader goes to the next stage of the pipeline.
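The core job of a vertex shader, projecting a 3D point into 2D screen coordinates, can be sketched as follows; this is plain Python rather than actual shader code, and the matrix and viewport values are illustrative:

```python
def transform_vertex(vertex, mvp, width, height):
    """Apply a 4x4 model-view-projection matrix to a 3D vertex, then the
    perspective divide and viewport mapping performed later in the pipeline."""
    x, y, z = vertex
    # Multiply the homogeneous position (x, y, z, 1) by the 4x4 matrix.
    clip = [sum(mvp[r][c] * v for c, v in enumerate((x, y, z, 1.0)))
            for r in range(4)]
    ndc = [clip[i] / clip[3] for i in range(3)]  # perspective divide
    sx = (ndc[0] + 1.0) * 0.5 * width            # map [-1, 1] to pixels
    sy = (1.0 - ndc[1]) * 0.5 * height           # screen y grows downward
    return sx, sy

# With an identity "projection", the origin lands at the screen center:
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
transform_vertex((0.0, 0.0, 0.0), identity, 800, 600)  # → (400.0, 300.0)
```

In a real pipeline the shader outputs the clip-space position and the fixed-function stages perform the divide and viewport mapping; they are combined here only to show the whole path from 3D to pixels.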

Pixel shaders calculate the color of every pixel on the screen. The input of this stage comes from the rasterizer, which interpolates the values received from the vertex shader so that a value is available not only at the vertices but at every covered pixel as well. Pixel shaders are typically used for scene lighting and related effects such as bump mapping and color toning.
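The interpolation step the rasterizer performs between the two shader stages is usually based on barycentric coordinates; a minimal sketch, with illustrative triangle and color values, looks like this:

```python
def barycentric(p, a, b, c):
    """Barycentric weights of point p inside triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return w0, w1, 1.0 - w0 - w1

def interpolate_color(p, tri, colors):
    """Blend the three per-vertex colors at pixel position p."""
    w = barycentric(p, *tri)
    return tuple(sum(wi * col[ch] for wi, col in zip(w, colors))
                 for ch in range(3))

tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
colors = ((255, 0, 0), (0, 255, 0), (0, 0, 255))  # red, green, blue corners
interpolate_color((0.0, 0.0), tri, colors)  # at a vertex: that vertex's color
```

The same weights interpolate any per-vertex attribute (texture coordinates, normals), which is exactly what gives the pixel shader smoothly varying inputs across the triangle.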

The newly introduced geometry shaders can add and remove vertices of a mesh. (Objects in the virtual world are formed by meshes.) Geometry shaders can be used to generate geometry procedurally or to add volumetric detail to existing meshes that would be too expensive to compute on the CPU in real time. If a geometry shader is present in the pipeline, it forms a stage between the vertex shader and the pixel shader. Geometry shaders are only available in DirectX 10 and later.
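The kind of work a geometry shader can do, emitting new vertices to refine a mesh, can be illustrated by subdividing each triangle into four smaller ones along its edge midpoints. This is a plain-Python illustration of the idea, not actual DirectX 10 shader code:

```python
def midpoint(a, b):
    """Midpoint of two points of any dimension."""
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(triangle):
    """Replace one triangle with four: three corner triangles
    plus the central triangle formed by the edge midpoints."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tri = ((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
subdivide(tri)  # one input triangle, four output triangles
```

Repeating this a few times multiplies the triangle count fourfold per pass, which is why doing such refinement on the GPU, close to the rasterizer, is so much cheaper than generating and uploading the extra vertices from the CPU.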

DirectX Versions

On the following pages I try to introduce the basics of DirectX 9 and 10. If you are not familiar with the stages of the rendering pipeline, you should start with DirectX 9.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License