
Real-time rendering


Real-time rendering is the interactive branch of computer graphics: it is concerned with producing synthetic images on a computer fast enough that the viewer can interact with a virtual environment. The most common place to find real-time rendering is in video games. The rate at which images are displayed is measured in frames per second (frame/s) or hertz (Hz); the frame rate measures how quickly an imaging device produces unique consecutive images.
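The relationship between frame rate and per-frame time budget can be sketched in a few lines (the function name is illustrative, not from the source):

```python
# Frame rate vs. frame time: a renderer targeting a given rate in Hz
# must finish all work for each frame within the matching time budget.
def frame_budget_ms(frames_per_second: float) -> float:
    """Return the per-frame time budget in milliseconds."""
    return 1000.0 / frames_per_second

print(frame_budget_ms(30))   # ~33.33 ms available per frame
print(frame_budget_ms(60))   # ~16.67 ms available per frame
```

So doubling the target frame rate halves the time available for application logic, geometry processing, and rasterization combined.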

The graphics rendering pipeline, often called simply the rendering pipeline or the pipeline, is the foundation of real-time graphics. Its main function is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures, and more.

The architecture of the real-time rendering pipeline can be divided into three conceptual stages: application, geometry, and rasterizer. This structure is the core used in real-time computer graphics applications.
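A minimal sketch of the three conceptual stages, assuming a toy scene of triangles; the function names follow the text, but the data passed between stages is a deliberate simplification:

```python
# Application -> geometry -> rasterizer: each stage consumes the
# previous stage's output. Real pipelines pass far richer data.

def application_stage(scene):
    # Runs game logic (collision detection, animation, etc.) and
    # emits rendering primitives (here: triangles) for the next stage.
    return scene["triangles"]

def geometry_stage(triangles, camera):
    # Per-vertex work: offset each vertex by the camera position,
    # a stand-in for the full model-view-projection transform.
    cx, cy, cz = camera
    return [[(x - cx, y - cy, z - cz) for (x, y, z) in tri]
            for tri in triangles]

def rasterizer_stage(transformed):
    # Converts primitives into pixels; here we only report the count.
    return f"rasterized {len(transformed)} triangle(s)"

scene = {"triangles": [[(0, 0, 5), (1, 0, 5), (0, 1, 5)]]}
print(rasterizer_stage(geometry_stage(application_stage(scene), (0, 0, 0))))
```

The point of the staged structure is that each stage can be optimized, parallelized, or moved to dedicated hardware independently of the others.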

The application stage is driven by the application: it begins the image-generation process that results in the final scene or frame of animation. Because this stage is implemented in software, developers have full control over the implementation and can change it to improve performance. The stage may, for example, contain collision detection, speed-up techniques, animation, force feedback, and so on. Collision detection is one process usually implemented in this stage; it comprises algorithms that detect whether two objects collide. When a collision is detected between two objects, a response may be generated and sent back to the colliding objects as well as to a force feedback device. Other processes implemented in this stage include texture animation, animation via transforms, geometry morphing, or any kind of calculation not performed in another stage. At the end of the application stage, and this is its most important output, the geometry to be rendered is fed to the next stage in the rendering pipeline: the rendering primitives, such as points, lines, and triangles, that might eventually end up on the output device.
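One common collision-detection primitive is the bounding-sphere test: two objects modeled as spheres collide when the distance between their centers is less than the sum of their radii. A minimal sketch, with an illustrative function name not taken from the source:

```python
import math

# Sphere-sphere collision test, a typical application-stage check.
# Centers are (x, y, z) tuples; radii are in the same world units.
def spheres_collide(center_a, radius_a, center_b, radius_b) -> bool:
    distance = math.dist(center_a, center_b)
    return distance < radius_a + radius_b

print(spheres_collide((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True: overlap
print(spheres_collide((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # False: apart
```

Real engines use such cheap tests as a first pass, falling back to exact per-triangle tests only for pairs whose bounding volumes overlap.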

The geometry stage is responsible for the majority of the per-polygon and per-vertex operations: it computes what is to be drawn, how it should be drawn, and where it should be drawn. Depending on the implementation, it may be defined as a single pipeline stage or as several distinct stages; here it is further divided into functional groups.
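A typical per-vertex operation is multiplying each vertex, expressed in homogeneous coordinates, by a 4x4 transform matrix. The sketch below uses a single translation matrix for clarity; a real geometry stage composes model, view, and projection matrices:

```python
# Multiply a 4x4 matrix by a 4-component homogeneous vertex.
def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# Build a 4x4 translation matrix moving points by (tx, ty, tz).
def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

vertex = (1.0, 2.0, 3.0, 1.0)            # homogeneous position, w = 1
moved = mat_vec(translation(0, 0, -5), vertex)
print(moved)                             # (1.0, 2.0, -2.0, 1.0)
```

Because every vertex undergoes the same matrix multiply, this workload parallelizes well, which is why it maps naturally onto GPU vertex shaders.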


Wikipedia