Three-dimensional computer graphics has enjoyed, and still enjoys, wide popularity, thanks to the video cards that accelerate the applications using it: video games, high-resolution multimedia titles, computer-aided design (CAD) and software for graphic data processing and medical science. Underlying all these applications is a 3D graphics engine, a piece of software that can display a three-dimensional scene in real time. The engine must display the scene as fluidly as possible; when no hardware acceleration is available we speak of software rendering. In this series of articles we will build a 3D engine step by step; in this first episode we look at the theoretical aspects that will serve as our framework. It requires typical VC/C++ programming skills and a bit of analytic geometry.
A 3D engine must display a three-dimensional scene in the shortest possible time. Each scene consists of objects, and each object is composed of polygons; usually triangles are used because they are the fastest to draw. A polygon is defined by its vertices, i.e. points in three-dimensional space, expressed in the classic right-handed x, y, z coordinate system we were taught in school. The engine receives the data of the three-dimensional scene and draws it, performing the so-called rendering. The scene is the result of processing the various objects through the so-called graphics pipeline, a series of steps that builds the final product. The first step is determining the visible surfaces; then come the geometric transformations, scene lighting, camera positioning and, finally, the drawing of the polygons.
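As a concrete reference for the rest of the series, here is a minimal sketch, in C++, of how such a scene could be represented in memory; the names (Vertex, Triangle, Mesh, Scene) are illustrative, not a fixed API:

#include <vector>

// A vertex is a point in three-dimensional space.
struct Vertex {
    float x, y, z;
};

// A triangle references three vertices by index into the mesh's vertex array.
struct Triangle {
    int v0, v1, v2;
};

// An object is a mesh: a list of vertices plus the triangles built on them.
struct Mesh {
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
};

// A scene is simply a collection of objects.
struct Scene {
    std::vector<Mesh> objects;
};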
Determining the visible part of the scene
A typical three-dimensional scene is formed by tens of objects and, in the simplest case, each of these objects is in turn formed by thousands of polygons. Pushing a scene composed of millions of polygons through the pipeline, especially in real time, would put a strain on any CPU and GPU, so only the visible portion of the scene is sent down the pipeline: the potentially visible set (PVS). The operation that identifies the PVS is called visible surface determination (VSD); some of the techniques used are frustum culling, binary space partitioning (BSP) trees, octrees and portal rendering.
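To give an idea of the simplest of these techniques, the sketch below shows frustum culling of an object bounded by a sphere. It assumes the frustum is given as six normalized, inward-facing planes; the names Plane, Sphere and isVisible are illustrative:

// Plane equation: a*x + b*y + c*z + d = 0, with (a, b, c) a unit normal
// pointing towards the inside of the frustum.
struct Plane  { float a, b, c, d; };

// Bounding sphere enclosing an object of the scene.
struct Sphere { float x, y, z, radius; };

// The object is potentially visible unless its bounding sphere lies
// entirely behind one of the six frustum planes.
bool isVisible(const Sphere& s, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i) {
        float dist = frustum[i].a * s.x + frustum[i].b * s.y
                   + frustum[i].c * s.z + frustum[i].d;
        if (dist < -s.radius)
            return false;   // completely outside this plane: cull the object
    }
    return true;            // inside or intersecting: send it down the pipeline
}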
Geometric transformations
After selecting the parts to be made visible, we must draw them. Objects are built using a local reference system; only afterwards do they undergo translation, rotation and scaling transformations that insert them into the scene, also called the virtual world.
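A minimal sketch of these transformations applied to a single vertex follows; a real engine would combine them into a single 4x4 matrix, and the rotation here is about the y axis only, for brevity:

#include <cmath>

struct Vec3 { float x, y, z; };

// Transforms a vertex from the object's local reference system into the
// virtual world: first scale, then rotate about the y axis, then translate.
Vec3 localToWorld(Vec3 v, Vec3 scale, float angleY, Vec3 translation)
{
    // 1. scale in local space
    v.x *= scale.x;  v.y *= scale.y;  v.z *= scale.z;

    // 2. rotate about the y axis by angleY radians
    float c = std::cos(angleY), s = std::sin(angleY);
    Vec3 r = { v.x * c + v.z * s, v.y, -v.x * s + v.z * c };

    // 3. translate into the world
    r.x += translation.x;  r.y += translation.y;  r.z += translation.z;
    return r;
}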
Lighting the scene
To make the virtual world more realistic we can simulate lighting; there are various techniques for illuminating objects with different types of light, topics already examined in this blog in the graphics category. In general, we take the normal at each vertex of a polygon, compute the light intensity at each vertex, and obtain the color of the polygon by blending the colors of its vertices.
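For example, with the simple diffuse (Lambertian) model the intensity at a vertex is the cosine of the angle between its normal and the direction towards the light. A minimal sketch, assuming both vectors are already normalized:

#include <algorithm>

struct Vec3 { float x, y, z; };   // as in the earlier sketch

// Diffuse intensity at a vertex: the dot product between the vertex
// normal and the direction towards the light, clamped to zero so that
// faces turned away from the light receive none.
float diffuseIntensity(const Vec3& normal, const Vec3& toLight)
{
    float nDotL = normal.x * toLight.x
                + normal.y * toLight.y
                + normal.z * toLight.z;
    return std::max(0.0f, nDotL);
}

// The vertex color is then scaled by this intensity, and the polygon's
// color is obtained by interpolating the vertex colors across it.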
Positioning the camera
The view of our virtual world is obtained through two vectors, view and up. The first originates at the point where the camera is located and points in the direction the observer's eye is looking; the second also originates at the camera position but points upwards in the vertical plane. Normally the up vector has the direction of the y axis, but it can be rotated, for example to simulate the tilting of the observer's head.
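A minimal sketch of how the two vectors can be turned into a camera reference system, i.e. three mutually perpendicular axes; the cross-product order below follows a left-handed convention (swap the arguments for a right-handed one), and all names are illustrative:

#include <cmath>

struct Vec3 { float x, y, z; };   // as in the earlier sketches

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct CameraBasis { Vec3 right, up, forward; };

// Builds an orthonormal basis from the view and up vectors: the forward
// axis is the viewing direction, right is perpendicular to both, and up
// is recomputed so the three axes are exactly perpendicular.
CameraBasis buildCamera(Vec3 view, Vec3 up)
{
    CameraBasis c;
    c.forward = normalize(view);
    c.right   = normalize(cross(up, c.forward));
    c.up      = cross(c.forward, c.right);
    return c;
}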
Drawing the polygons
Our virtual world is three-dimensional, but the monitor is two-dimensional. We must therefore perform a perspective projection of the vertices of the objects in the scene.
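In its simplest form the projection divides x and y by the depth z. A minimal sketch, where the vertex is assumed to be already expressed in camera space, and d (the distance of the projection plane from the camera) and the screen size are hypothetical parameters:

struct Vec3    { float x, y, z; };   // as in the earlier sketches
struct Point2D { float x, y; };

// Projects a camera-space vertex onto the screen: the perspective divide
// makes distant points shrink towards the center, then the result is
// mapped to pixel coordinates with the origin in the middle of the screen.
Point2D project(const Vec3& v, float d, float screenW, float screenH)
{
    float px = (v.x * d) / v.z;
    float py = (v.y * d) / v.z;
    return { px + screenW * 0.5f,
             screenH * 0.5f - py };   // on screen, y grows downwards
}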
In this article I have covered the theoretical side of what we will create in source code. I invite you to study these topics, which I have only introduced here, by doing research on the internet (Googling or Binging), and to let me know in the comments if I have not explained something well. Depending on the interest of the readers of this article, I will decide whether and how to continue the series.