
Regarding the theory behind shaders


Hello Everyone,

I have started doing a little bit of shader programming with GLSL. Most books and tutorials show me how to program shaders, but I would like to know why shaders are used in the first place, what is wrong with the fixed-function pipeline architecture, and how exactly shaders came into being. Are there any resources that would help me learn more about the theory behind shaders?

Thanks in advance.

I suggest Real-Time Rendering, which should answer all of your questions.
Graphics Shaders: Theory and Practice is also a good book.
The official GLSL book is as well.

I would also suggest starting with OpenGL 4.0 or 4.1 and its shaders, since that is the most recent version.
It is actually more convenient for the programmer as well.


The problem with the fixed pipeline is that it is fixed and thus not very flexible: there is a set of functions available that you can use to render things, and that's it. With shaders you have much more control.

The vertex shader basically outputs the vertex position in clip space plus whatever other vertex attributes you need, and it can take any data you want as input. Normally you take the vertex's object-space position and other attributes, plus the model, view and projection matrices, and use them to transform the vertex, but you can send in other data and transform it in other ways as well. This means you can do, for example, hardware skinning (pass in the weights for each bone and their transformation matrices, and use those to transform the vertex before projecting it) or anything else you can think of. The pixel (fragment) shader works similarly, but gets its input from the vertex shader's output (plus any uniform data you send in yourself) and outputs the final color, depth, etc. of a pixel. How you calculate the final pixel color in your shader is up to you, so again it is far more flexible than the fixed pipeline, where your options are restricted to the methods provided by the API/driver.
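
To make that concrete, here is a minimal sketch of such a shader pair in GLSL 3.3 syntax. All the names (inPosition, uModel, uDiffuse and so on) are just illustrative choices, not anything the API dictates:

// Vertex shader: transform the vertex and pass attributes on to the fragment shader.
#version 330 core
in vec3 inPosition;                      // per-vertex data you choose to supply
in vec2 inTexCoord;
uniform mat4 uModel;                     // per-draw data you choose to supply
uniform mat4 uView;
uniform mat4 uProjection;
out vec2 vTexCoord;                      // interpolated and handed to the fragment shader

void main()
{
    vTexCoord = inTexCoord;
    gl_Position = uProjection * uView * uModel * vec4(inPosition, 1.0);
}

// Fragment shader: you decide how the final color is computed.
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uDiffuse;
out vec4 fragColor;

void main()
{
    fragColor = texture(uDiffuse, vTexCoord);
}

The fixed pipeline essentially hard-wires this transform-then-texture path; once it is written as a shader, every line of it is yours to replace.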

Anything you can think of doing with shaders is theoretically possible to do with a fixed pipeline as well, but hardware or driver developers would have to explicitly add support to the fixed pipeline for pretty much everything. Hardware skinning, for example, would require the fixed pipeline to support multiple weights (one per bone) for each vertex and additional transformation matrices (also one per bone); the support for that wouldn't be usable for much else, and more complex effects would be even less flexible.
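
As a contrast, here is roughly what hardware skinning looks like once you can write the vertex shader yourself. This is only a sketch under assumed conventions: four influencing bones per vertex, an arbitrary MAX_BONES limit, and made-up attribute and uniform names.

#version 330 core
const int MAX_BONES = 64;                // assumed upper limit on bones per mesh
in vec3 inPosition;
in ivec4 inBoneIndices;                  // up to four bones influencing this vertex
in vec4 inBoneWeights;                   // matching weights, expected to sum to 1.0
uniform mat4 uBones[MAX_BONES];          // one transformation matrix per bone
uniform mat4 uModelViewProjection;

void main()
{
    // Blend the bone transforms by their weights, then project as usual.
    vec4 p = vec4(inPosition, 1.0);
    vec4 skinned = inBoneWeights.x * (uBones[inBoneIndices.x] * p)
                 + inBoneWeights.y * (uBones[inBoneIndices.y] * p)
                 + inBoneWeights.z * (uBones[inBoneIndices.z] * p)
                 + inBoneWeights.w * (uBones[inBoneIndices.w] * p);
    gl_Position = uModelViewProjection * skinned;
}

The fixed pipeline would need an extension designed specifically for this; in a shader it is just ordinary arithmetic that you happen to want.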

Share this post


Link to post
Share on other sites
It wouldn't be manageable. Imagine you have a terrain shader that blends all kinds of grass and dirt. Then you want to add a light map to that. Then you want to be able to normal map it. Then you want it to be driven by a height map that "pushes" the terrain up. Those are four independent options, which already multiply into a pile of on/off combinations. Imagine having to write one program that takes care of all of those things.

Say you have terrain and you just want grass. Or grass and dirt. Or grass, dirt and sand. There are just so many combinations. If you write a shader, you know you only need to blend two or three or four textures, and you can blend them exactly how you want to. Height maps, for example, have a scale factor applied: bright white gets multiplied by your maximum terrain height. In a shader you just multiply, but if it were fixed function, doing all of the stuff I just described would look something like:

StartTerrainRendering();
Enable2BlendTextures();
UseTerrainLightMap();
SetTerrainMaxHeightMapScale();
.............
...............
.............
And then for every new cool feature that Crysis 3, 4 and 5 come out with, you would have to turn on and off more and more combinations of junk. Fixed function is itself one giant shader: it has to check how to render each pixel, what color (glColor) is set, whether lighting is enabled, and so on. So it is easier to say "I want these five specific terrain features" than to turn off a hundred of them and enable the five that you want. A sketch of what the shader version of the terrain example looks like follows below.
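
For comparison, the shader version of that terrain example boils down to a couple of small programs. This is only a rough sketch; the texture names, the single-channel blend map, the tiling factor and uMaxHeight are all assumptions, not part of any fixed API.

// Vertex shader: push the flat grid up by the height map, scaled to the maximum height.
#version 330 core
in vec2 inGridPosition;                  // terrain grid in the XZ plane, 0..1
uniform sampler2D uHeightMap;
uniform float uMaxHeight;                // bright white in the height map maps to this
uniform mat4 uViewProjection;
out vec2 vTexCoord;

void main()
{
    vTexCoord = inGridPosition;
    float h = textureLod(uHeightMap, inGridPosition, 0.0).r * uMaxHeight;  // just multiply
    gl_Position = uViewProjection * vec4(inGridPosition.x, h, inGridPosition.y, 1.0);
}

// Fragment shader: blend exactly the textures you care about, lit by the light map.
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uGrass;
uniform sampler2D uDirt;
uniform sampler2D uBlendMap;             // one channel choosing grass vs. dirt
uniform sampler2D uLightMap;
out vec4 fragColor;

void main()
{
    vec4 grass = texture(uGrass, vTexCoord * 16.0);   // tile the detail textures
    vec4 dirt  = texture(uDirt,  vTexCoord * 16.0);
    float blend = texture(uBlendMap, vTexCoord).r;
    fragColor = mix(grass, dirt, blend) * texture(uLightMap, vTexCoord).r;
}

If you later want normal mapping or a third texture, you edit these few lines instead of waiting for the driver to grow a new glEnable flag.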
