Fixed Pipeline vs. Programmable Pipeline

I frequently come across "fixed pipeline" and "programmable pipeline" in the literature. What is "fixed" in the fixed pipeline, and what is "programmable" in the programmable one? What are the other major differences between them? Why is the fixed pipeline completely ignored today? I am a beginner in games programming and a computer science student learning the basics of graphics. Thanks in advance.
The "fixed function" pipeline refers to the older generation pipeline used in GPUs that was not really controllable -- the exact method in which the geometry was transformed, and how fragments (pixels) acquired depth and color values were built-in to the hardware and couldn't change.

Modern GPUs have a programmable pipeline; those previously rigid, possibly-burned-into-the-chip stages of the pipeline (transformation, shading) have been replaced with stages that can be controlled by bits of user-supplied code called "shaders." This is a far more flexible approach that has enabled a wide variety of graphics effects that previously were not possible without, for example, dedicated hardware support and extensions specifically for enabling them. The older, fixed-function pipeline has no real advantages over this one, and so it's being phased out.
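
For a concrete picture, here is a minimal sketch (HLSL/Cg-style; the uniform and function names are illustrative, not from any particular engine) of the two small user-supplied programs that take over the fixed transformation and shading stages:

  // Vertex shader: decides how the geometry is transformed.
  float4x4 worldViewProj;   // replaces the fixed pipeline's transform matrices

  float4 BasicVS( float3 position : POSITION ) : POSITION
  {
      return mul( float4( position, 1.0 ), worldViewProj );
  }

  // Pixel (fragment) shader: decides how each pixel gets its color.
  float4 BasicPS() : COLOR
  {
      return float4( 1.0, 0.5, 0.0, 1.0 ); // a constant orange, just to show the idea
  }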
Quote:Original post by jpetrie

Modern GPUs have a programmable pipeline; those previously rigid, possibly-burned-into-the-chip stages of the pipeline (transformation, shading) have been replaced with stages that can be controlled by bits of user-supplied code called "shaders."


Does it mean that you can define your own techniques
for transformation and shading?

So my understanding of Programmable Pipeline is like this:

Modern GPUs have built-in algorithms for transformation, projection, and shading, which can be replaced with user-defined ones using shading languages. Is that right? If not, correct me.

>> What is "fixed" in fixed Pipeline ?
In the "old days" (4/5 years ago or maybe even more), OpenGL/DirectX could be setup in such a way that a few special techniques such as multitexturing or dot3 bumpMapping were activated. IF the card supported that. The effects were still limited though, either the API didn't have the instructions, or the video card simply couldn't do it.

>> and what is "programmable" in the programmable one?
Since there are programmable GPUs, the programmer can write a "little" program (vertex/pixel shaders) and put his own instructions in it. This allows a much more flexible way to create advanced effects. You are not limited anymore to the special features of a card or the API. Well, in some ways you still are (cards have limited memory / texture channels / instruction counts, etc.), but compared to the old style, it's a huge relief. Let me give you a little example. I could make a "negative rendering" shader:
  color = tex2D( texture, uvcoords ); // get a pixel from a texture
  color = 1 - color;                  // invert it

As you can see, I can do crazy math with my colors in a pixel shader. This simply was not possible earlier, unless OpenGL/DirectX had a command for it and the card supported it.
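
Wrapped up as a complete pixel shader, that same idea might look roughly like this (HLSL/Cg style; the sampler and function names are just placeholders):

  sampler2D baseTexture;

  float4 NegativePS( float2 uvcoords : TEXCOORD0 ) : COLOR
  {
      float4 color = tex2D( baseTexture, uvcoords ); // get a pixel from a texture
      color.rgb = 1.0 - color.rgb;                   // invert the color channels
      return color;
  }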

>> What are the other major differences between them?
Maybe the fixed pipeline was a little bit faster in the beginning; I doubt it still matters now. One important change is that you create most of the graphic effects (lighting/texturing/...) in a shader now, instead of through the API. This makes the role of the API somewhat less important (possibly easier to switch between the two, although I never tried that). On the other hand, you now need tools/editors to create your shaders. 8 years ago most games just had a texture and maybe a lightMap for the surfaces / models. Now you have to define many more parameters (if you want to do more advanced effects), which requires another way of programming / designing the graphics engine.

>> Why is the fixed pipeline completely ignored today?
Flexibility, and much more is possible. It's that simple.

[edit]
>> Does it mean that you can define your own techniques
>> for transformation and shading?
Yes. In a vertex shader you can play around with the vertex coordinates/normals, and in a pixel/fragment shader you tweak the colors that appear on the screen just the way you like.
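
As a rough sketch of the vertex side (HLSL/Cg style; the uniform names are made up for illustration), here is a vertex shader that pushes each vertex outward along its normal before projecting it:

  float4x4 worldViewProj;
  float    bulgeAmount;

  float4 BulgeVS( float3 position : POSITION,
                  float3 normal   : NORMAL ) : POSITION
  {
      // Displace the vertex along its own normal, then project it as usual.
      float3 displaced = position + normal * bulgeAmount;
      return mul( float4( displaced, 1.0 ), worldViewProj );
  }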


Oh, I forgot one important note about the differences.
In the fixed pipeline, a lot of work had to be done on the CPU. With programmable GPUs, you can move that work to the video card's processors (which are specially designed for that stuff, and thus VERY quick at it). For example, animating a character with a skeleton: first we needed to do all the transformations on the CPU; now we can do it in a vertex shader. That saves time on the CPU, which you can use for other (non-graphical) stuff, physics or AI for example.
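
A hedged sketch of what that skeletal animation (matrix-palette skinning) can look like in an HLSL/Cg-style vertex shader; the bone array size, semantics, and names are illustrative, not from the original post:

  float4x4 boneMatrices[32]; // bone transforms uploaded by the CPU each frame
  float4x4 viewProj;

  float4 SkinVS( float3 position : POSITION,
                 float4 weights  : BLENDWEIGHT,
                 float4 indices  : BLENDINDICES ) : POSITION
  {
      // Blend the vertex by up to four bones on the GPU instead of the CPU.
      float4 pos = float4( position, 1.0 );
      float4 skinned =
            mul( pos, boneMatrices[ (int)indices.x ] ) * weights.x
          + mul( pos, boneMatrices[ (int)indices.y ] ) * weights.y
          + mul( pos, boneMatrices[ (int)indices.z ] ) * weights.z
          + mul( pos, boneMatrices[ (int)indices.w ] ) * weights.w;
      return mul( skinned, viewProj );
  }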

Hope that helps,
Rick


Thank you Rick. I learned a few things from your post.

I have one more clarification.

I have learned that a GPU is a data-parallel processor, whereas a CPU is a
serial processor.

So what difference does that make for rendering?

Is it necessary to learn parallel programming to program a GPU?



No, it's not...
I don't know if it's officially called that, but yes, a video card has vertex and pixel/fragment units that work at the same time: one does the geometry (the vertex shader), the other fills the screen (the pixel/fragment shader). They still depend on each other (you can't draw pixels from a polygon as long as there is no polygon), but the fragment unit can draw while the other calculates the next thing. Well, I don't know in detail how it exactly works; you should ask an expert about that.

A normal CPU can work in parallel as well, but then you need to specify where each process/thread should run. Don't worry about "parallel programming" in a shader, though. The fragment unit just does a different task than the vertex unit, so you can't mix them or ask for formula X to be calculated on the other one. Basically, you write two programs. The only relation they have is the data you pass from the vertex shader to the fragment shader. Cards have a number of registers you can put data in (the register count depends on the card / Shader Model (1.0, 2.0, etc.)).
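
A minimal sketch of that vertex-to-fragment hand-off (HLSL/Cg style; the structure and names are illustrative) could look like this:

  float4x4 worldViewProj;

  struct VertexOutput
  {
      float4 position : POSITION;  // required clip-space position
      float2 uv       : TEXCOORD0; // interpolated, then handed to the pixel shader
      float3 normal   : TEXCOORD1; // extra data also travels in TEXCOORD registers
  };

  VertexOutput PassThroughVS( float3 position : POSITION,
                              float3 normal   : NORMAL,
                              float2 uv       : TEXCOORD0 )
  {
      VertexOutput output;
      output.position = mul( float4( position, 1.0 ), worldViewProj );
      output.uv       = uv;
      output.normal   = normal;
      return output;
  }

  float4 ShowNormalsPS( float2 uv     : TEXCOORD0,
                        float3 normal : TEXCOORD1 ) : COLOR
  {
      // The normal arrives here already interpolated across the triangle.
      return float4( normalize( normal ) * 0.5 + 0.5, 1.0 );
  }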

In general, the vertex shader does less work than the fragment shader, so you can afford to make the vertex programs bigger. If the fillrate (drawing the screen pixels) is slowing down your program, it's recommended that you remove instructions from your pixel shader. If possible, move instructions from the fragment shader to the vertex shader, since that unit has more spare time (see the sketch after this list). Some examples:
- Let the vertex shader calculate camera/vertex direction vectors (often needed in a fragment shader for lighting); the fragment shader receives them already interpolated.
- The distance between a vertex and the light/camera (needed in fragment shaders for fog, attenuation, ...).
- Calculate / adjust texture coordinates for the fragment shader inside the vertex shader (animated water, projective texturing, etc.).
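
For instance, a hedged sketch of moving per-pixel lighting math up to the vertex shader (HLSL/Cg style; names are illustrative, and it assumes lightPosition is given in the same space as the vertex positions):

  float4x4 worldViewProj;
  float3   lightPosition;

  struct LightingOutput
  {
      float4 position : POSITION;
      float3 lightDir : TEXCOORD0; // interpolated across the triangle for free
      float3 normal   : TEXCOORD1;
  };

  LightingOutput LightingVS( float3 position : POSITION,
                             float3 normal   : NORMAL )
  {
      LightingOutput output;
      output.position = mul( float4( position, 1.0 ), worldViewProj );
      output.lightDir = lightPosition - position; // computed per vertex, not per pixel
      output.normal   = normal;
      return output;
  }

  float4 LightingPS( float3 lightDir : TEXCOORD0,
                     float3 normal   : TEXCOORD1 ) : COLOR
  {
      // Only the cheap part remains per pixel: normalize and take a dot product.
      float diffuse = saturate( dot( normalize( normal ), normalize( lightDir ) ) );
      return float4( diffuse, diffuse, diffuse, 1.0 );
  }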

You can't do everything inside a vertex shader, though. You will discover that textures are required for many operations (texturing, normalMapping, reflections with a cubeMap, mirrors, heightMaps, and so on), and a vertex shader can't pick a pixel from a texture. Well, with Shader Model 3.0 it can, but it's still limited. You only know one vertex and its texture coordinates inside the vertex shader; in a fragment shader, all the coordinates between the 3 vertices are interpolated, but that is not possible in the vertex shader. You could, for example, use a texture for heightMapping (a terrain) in the vertex shader.
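
A hedged sketch of that Shader Model 3.0 vertex texture fetch for terrain heightmapping (HLSL style; the sampler and uniform names are illustrative); tex2Dlod is used because a vertex shader has no automatic mipmap level selection:

  sampler2D heightMap;
  float4x4  worldViewProj;
  float     heightScale;

  float4 TerrainVS( float3 position : POSITION,
                    float2 uv       : TEXCOORD0 ) : POSITION
  {
      // Explicitly sample mip level 0 of the heightmap inside the vertex shader.
      float height = tex2Dlod( heightMap, float4( uv, 0.0, 0.0 ) ).r;
      float3 displaced = position + float3( 0.0, height * heightScale, 0.0 );
      return mul( float4( displaced, 1.0 ), worldViewProj );
  }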

If you really want to know more, there are plenty of examples, although it might be a little bit difficult to find simple ones now. I learned it years ago via an example kit from the nVidia website that showed relatively simple (Cg) shaders. It also has papers about shading languages, what they do, how they work, etc.

Greetings,
Rick
Quote:Original post by serious_learner07

Does it mean that you can define your own techniques
for transformation and shading?


It is quite easy to see for yourself: just look at the fancy effects of modern games. If you played HL2, you may have noticed a 'refraction' effect on some windows. That is one example. What about Fear? Do you recall those 'extrasensory' experiences, with the 'tunneling' effect? Or the 'slow motion' effect?
In many games you have bloom effects (where highly luminous points appear blurred). Another example of the pixel (fragment) shader...

Quote:
So my understanding of Programmable Pipeline is like this:

Modern GPUs have built-in algorithms for transformation, projection, and shading, which can be replaced with user-defined ones using shading languages. Is that right? If not, correct me.


AFAIK modern GPUs have no built-in shading at all; the driver simply uses a standard shader for the fixed pipeline...

