Introduction to the Graphics Pipeline

Published August 30, 2013 by Steven De Bock, posted by Bokke

Introduction

This article is mainly intended to give some introductory background information on the graphics pipeline in a triangle-based rendering scheme and how it maps to the different system components. We'll only cover the parts of the pipeline that are relevant to understanding the rendering of a single triangle with OpenGL.

Graphics Pipeline

The basic functionality of the graphics pipeline is to transform your 3D scene, given a certain camera position and camera orientation, into a 2D image that represents the 3D scene from this camera's viewpoint. We'll start by giving an overview of this graphics pipeline for a triangle-based rendering scheme in the following paragraph. Subsequent paragraphs will then elaborate on the identified components.

High-level Graphics Pipeline Overview

We'll discuss the graphics pipeline as shown in figure 1. The figure shows the application running on the CPU as the starting point of the graphics pipeline. The application is responsible for creating the vertices, and it uses a 3D API to instruct the CPU/GPU to draw these vertices to the screen.

Figure 1: Functional Graphics Pipeline

We'll typically want to transfer our vertices to the memory of the GPU. As soon as the vertices have arrived on the GPU, they can be used as input to the shader stages of the GPU. The first shader stage is the vertex shader, followed by the fragment shader. The input of the fragment shader is provided by the rasterizer, and the output of the fragment shader is captured in a color buffer which resides in the backbuffer of our double-buffered framebuffer. The contents of the frontbuffer of the double-buffered framebuffer are displayed on the screen. In order to create animation, the front- and backbuffer need to swap roles as soon as a new image has been rendered to the backbuffer.

Geometry and Primitives

Typically, our application is the place where we define the geometry that we want to render to the screen. This geometry can be described by points, lines, triangles, quads, triangle strips... These are so-called geometric primitives, since they can be used to build up the desired geometry: a square, for example, can be composed of 2 triangles, and a triangle can be composed of 3 points. Let's assume we want to render a triangle; we can then define 3 points in our application, which is exactly what we'll do here. These points will initially reside in system memory. The GPU needs access to these points, and this is where a 3D API, such as Direct3D or OpenGL, comes into play: your application uses the 3D API to transfer the defined vertices from system memory into GPU memory. Also note that the order of the points cannot be random. This will be discussed when we consider primitive assembly.
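As a concrete illustration, here is a minimal sketch (assuming modern OpenGL with an already-created context; the names vertices and vbo are our own) of three points being defined in system memory and copied into a GPU-side vertex buffer object:

    /* Three vertices of a triangle in system memory (x, y pairs),
       listed in counter-clockwise order. */
    static const GLfloat vertices[] = {
        -0.5f, -0.5f,   /* bottom-left  */
         0.5f, -0.5f,   /* bottom-right */
         0.0f,  0.5f    /* top          */
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);                  /* ask the driver for a buffer object */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);     /* make it the active vertex buffer   */
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
                 vertices, GL_STATIC_DRAW); /* copy the data into GPU memory      */

The counter-clockwise ordering is deliberate; its significance is explained in the primitive assembly section below.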

Vertices

In graphics programming, we tend to add more meaning to a vertex than its mathematical definition. In mathematics you could say that a vertex defines the location of a point in space. In graphics programming, however, we generally attach additional information. Suppose we already know that we would like to render a green point; this color information can then be added. So we'll have a vertex that contains location as well as color information. Figure 2 clarifies this aspect: you can see a more classical "mathematical" point definition on the left and a "graphics programming" definition on the right.

Figure 2: Pure "mathematics" view on the left versus a "graphics programming" view on the right
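To make the distinction concrete, a vertex in code often ends up as a small struct that groups the position with the extra attributes; the layout below is only one possible choice (the struct and field names are ours):

    /* A "graphics programming" vertex: location plus extra attributes. */
    struct Vertex {
        float x, y, z;    /* location of the point in space  */
        float r, g, b;    /* color associated with the point */
    };

    /* A single green vertex at the origin. */
    struct Vertex v = { 0.0f, 0.0f, 0.0f,  0.0f, 1.0f, 0.0f };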

Shaders - Vertex Shaders

Shaders can be seen as programs, taking inputs and transforming them into outputs. It is interesting to understand that a given shader is executed multiple times in parallel for independent input values: since the input values are independent and need to be processed in exactly the same way, the processing can be done in parallel. We can consider the vertices of a triangle as independent inputs to the vertex shader. Figure 3 tries to clarify this with a "pass-through" vertex shader, one that takes the shader inputs and passes them to its output without modifying them: the vertices P1, P2 and P3 of the triangle are fetched from memory, and each individual vertex is fed to a vertex shader instance, with the instances running in parallel. The outputs of the vertex shaders are fed into the primitive assembly stage.

Figure 3: Clarification of shaders
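A pass-through vertex shader of the kind shown in figure 3 could look like the GLSL below, written here as a C string so it can be handed to glShaderSource (the GLSL version and attribute name are assumptions on our part):

    /* Minimal pass-through vertex shader: forwards each vertex position unchanged. */
    static const char *vertex_shader_src =
        "#version 330 core\n"
        "layout(location = 0) in vec2 position;\n"
        "void main()\n"
        "{\n"
        "    gl_Position = vec4(position, 0.0, 1.0);\n"
        "}\n";

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertex_shader_src, NULL); /* hand the source to the driver */
    glCompileShader(vs);                             /* compile it for the GPU        */

The same shader runs once for every vertex we submitted, which is what allows the three invocations in figure 3 to execute in parallel.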

Primitive Assembly

The primitive assembly stage breaks our geometry down into the most elementary primitives, such as points, lines and triangles. For triangles, it also determines whether each triangle is visible or not, based on the "winding" of the triangle. In OpenGL, a counter-clockwise-wound triangle is considered front-facing by default and will thus be visible. Clockwise-wound triangles are considered back-facing and will thus be culled (removed from rendering) when face culling is enabled.
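In OpenGL this behaviour is configurable. The calls below make the defaults explicit (counter-clockwise winding is front-facing, back faces are the ones discarded) and enable face culling, which is off by default:

    glEnable(GL_CULL_FACE);   /* face culling is disabled by default; turn it on        */
    glFrontFace(GL_CCW);      /* counter-clockwise triangles are front-facing (default) */
    glCullFace(GL_BACK);      /* discard back-facing triangles (default)                */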

Rasterization

After the visible primitives have been determined by the primitive assembly stage, it is up to the rasterization stage to determine which pixels of the viewport need to be lit: the primitive is broken down into its composing fragments. This can be seen in figure 4: the cells represent the individual pixels, and the pixels marked in grey are the ones covered by the primitive; they indicate the fragments of the triangle.

Figure 4: Rasterization of a primitive into 58 fragments

We see how rasterization has divided the primitive into 58 fragments. These fragments are passed on to the fragment shader stage.
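Rasterization itself is a fixed-function stage, so there is no shader to write for it; from the API side the application mainly controls the viewport transform, which decides which window pixels a primitive can map to (the 800x600 window size below is just an example):

    /* Map normalized device coordinates onto an 800x600 window before rasterization. */
    glViewport(0, 0, 800, 600);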

Fragment Shaders

Each of these 58 fragments generated by the rasterization stage will be processed by fragment shaders. The general role of the fragment shader is to calculate the shading function, which is a function that indicates how light will interact with the fragment, resulting in a desired color for the given fragment. A big advantage of these fragments is that they can be treated independently from each other, meaning that the shader programs can run in parallel. After the color has been determined, this color is passed on to the framebuffer.
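For our single triangle the shading function can be as simple as returning a constant color. The GLSL below (again written as a C string; the output variable name is our choice) makes every fragment green:

    /* Minimal fragment shader: every fragment of the triangle becomes green. */
    static const char *fragment_shader_src =
        "#version 330 core\n"
        "out vec4 frag_color;\n"
        "void main()\n"
        "{\n"
        "    frag_color = vec4(0.0, 1.0, 0.0, 1.0);\n"
        "}\n";

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragment_shader_src, NULL); /* constant "shading function" */
    glCompileShader(fs);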

Framebuffer

From figure 1, we already learned that we are using a double-buffered framebuffer, which means that we have 2 buffers: a frontbuffer and a backbuffer. Each of these buffers contains a color buffer. The big difference between the frontbuffer and the backbuffer is that the frontbuffer's contents are actually being shown on the screen, whereas the backbuffer's contents are being written by the fragment shaders (I'm neglecting the blend stage at this point). As soon as all our geometry has been rendered into the backbuffer, the front- and backbuffer can be swapped: the frontbuffer becomes the backbuffer and the backbuffer becomes the frontbuffer. Figure 1 and figure 5 represent these buffer swaps with the red arrows. In figure 1, you can see how color buffer 1 is used as the color buffer for the backbuffer, whereas color buffer 2 is used for the frontbuffer. The situation is reversed in figure 5.

Figure 5: Functional Graphics Pipeline with swapped front- and backbuffer

This last paragraph concludes our tour through the graphics pipeline. We now have a basic understanding of how vertices and triangles end up on our screen.
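To tie this back to code one last time: the swap is typically a single call at the end of each frame. The loop below sketches this with GLFW (the window pointer, the use of GLFW itself, and the draw call are assumptions on our part; GLUT or SDL expose an equivalent swap call):

    /* Per-frame loop: render into the back buffer, then swap it to the front. */
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);      /* start the new back buffer from a clean color      */
        glDrawArrays(GL_TRIANGLES, 0, 3);  /* fragment shader output lands in the back buffer   */
        glfwSwapBuffers(window);           /* back buffer becomes front buffer and is displayed */
        glfwPollEvents();                  /* keep the window responsive                        */
    }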

Further reading

If you are interested in exploring the graphics pipeline in more detail and reading up on, e.g., other shader stages or the blending stage, then by all means feel free to have a look at this. If you want to get an impression of the OpenGL pipeline map, click on the link. This article was based on an article I originally wrote for my blog.

Comments

NightCreature83

Why did you leave the blend stage out as it is of impact on even rendering a simple triangle, I can see why you left out the newer shader stages, but the blend one I can't.

August 30, 2013 08:40 AM
molehill mountaineer

Hi,

I believe it would be helpful for newbies to elaborate a little on the use of two colorbuffers for rendering.

something along the lines of:

If one was to use a single colorbuffer the scene would be composed directly on the screen (the user would see objects "pop up" as the 3D scene is being rendered). This is why two buffers are used: The current frame (that which is currently being shown on the display) is held in the frontbuffer. The frame that is being rendered is drawn to the backbuffer.

when the task of rendering the scene to the backbuffer is completed the front- and backbuffer are switched so that the backbuffer is now the front (that is, the scene you rendered is shown on the display) and the buffer which *was* the frontbuffer is now a render target for the next frame. This process of "swapping" the buffers is the reason for the name "swapchain".

I know you mentioned it in the article, just my 0.02 - Good work though

August 30, 2013 11:59 AM
Bokke

Why did you leave the blend stage out as it is of impact on even rendering a simple triangle, I can see why you left out the newer shader stages, but the blend one I can't.

I do agree that the blending stage is of high importance in the current graphics and I was in doubt of writing something or not. In the end, I decided not to.

The blending stage can be disabled in OpenGL, it isn't mandatory and I felt that it would therefore add an additional layer of needless "complexity" in how pixels "come to be" (which was the main goal of this article)

I don't think we should compare the shader stages to the blending stage, as for example the geometry shader certainly has its merits!

August 30, 2013 05:42 PM
Bokke

Hi,

I believe it would be helpful for newbies to elaborate a little on the use of two colorbuffers for rendering.

something along the lines of:

If one was to use a single colorbuffer the scene would be composed directly on the screen (the user would see objects "pop up" as the 3D scene is being rendered). This is why two buffers are used: The current frame (that which is currently being shown on the display) is held in the frontbuffer. The frame that is being rendered is drawn to the backbuffer.

when the task of rendering the scene to the backbuffer is completed the front- and backbuffer are switched so that the backbuffer is now the front (that is, the scene you rendered is shown on the display) and the buffer which *was* the frontbuffer is now a render target for the next frame. This process of "swapping" the buffers is the reason for the name "swapchain".

I know you mentioned it in the article, just my 0.02 - Good work though

You are absolutely right, the subject has been treated rather dodgy. I remember how the OpenGL Red Book, 7th edition had the example of a rotating square, where a "ghosting"-effect was visible when you rendered with only one colorbuffer. A double-buffered framebuffer resolved this quite nicely.

I'll have a look if I can improve this article with some pictures/figures to make it somewhat clearer. In the meantime, I'll leave it as an exercise to the reader :)

August 30, 2013 05:52 PM
ray_intellect

Hmm, but if you consider two colour buffers and alpha blending then you need stencil states and gamma ramps, fog, slope scale, near and far planes and suddenly the permutations have become so large that you need to write a 10 page article ... I would just add the GS and tessellation shaders then mention compute shaders as a non cooperative part, for completeness. However I suppose that as fixed function is basically obsolete you might be right to just mention the minimal vertex and fragment shaders.

August 31, 2013 01:43 AM
Liuqahs15

I was able to follow along vaguely as a beginner, but I have to admit that if I didn't already understand a buffer from 2D game programming I'd have been utterly lost. A little explanation of terms like that can be a huge difference maker.

Also, be careful of repeating yourself needlessly.

In graphics programming, we tend add some more meaning to a vertex then its mathematical definition. In mathematics you could say that a vertex defines the location of a point in space. In graphics programming however, we generally add some additional information.

Otherwise, it was beautifully concise and clear. No dragging on and on. I was actually surprised when I realized I'd finished the article. I was ready for 2,000 words. But you were able to have the article end before my attention span split in two and started beating the hell out of itself.

Thanks.

September 03, 2013 01:24 PM
Bokke

Hi,

I believe it would be helpful for newbies to elaborate a little on the use of two colorbuffers for rendering.

something along the lines of:

If one was to use a single colorbuffer the scene would be composed directly on the screen (the user would see objects "pop up" as the 3D scene is being rendered). This is why two buffers are used: The current frame (that which is currently being shown on the display) is held in the frontbuffer. The frame that is being rendered is drawn to the backbuffer.

when the task of rendering the scene to the backbuffer is completed the front- and backbuffer are switched so that the backbuffer is now the front (that is, the scene you rendered is shown on the display) and the buffer which *was* the frontbuffer is now a render target for the next frame. This process of "swapping" the buffers is the reason for the name "swapchain".

I know you mentioned it in the article, just my 0.02 - Good work though

You are absolutely right, the subject has been treated rather dodgy. I remember how the OpenGL Red Book, 7th edition had the example of a rotating square, where a "ghosting"-effect was visible when you rendered with only one colorbuffer. A double-buffered framebuffer resolved this quite nicely.

I'll have a look if I can improve this article with some pictures/figures to make it somewhat clearer. In the meantime, I'll leave it as an exercise to the reader :)

I decided not to modify the article, nevertheless, you can see the effect of double-buffering (No Ghosting) versus single-buffering (Ghosting) in the following 2 youtube videos I just uploaded:

No ghosting

Ghosting

September 26, 2013 09:22 PM