
Regarding the theory behind shaders


4 replies to this topic

#1 muk45   Members   -  Reputation: 100


Posted 14 June 2011 - 02:24 PM

Hello Everyone,

I have started a little bit of shader programming with GLSL. In most books and tutorials I could find how to program shaders, but I would like to know why exactly shaders are used, what is wrong with the fixed-pipeline architecture, and how shaders came into being. Are there any resources I can find that would help me learn more about the theory behind shaders?

Thanks in advance.


#2 wildboar   Members   -  Reputation: 281


Posted 14 June 2011 - 02:34 PM

I suggest Real-Time Rendering, which would answer all your questions.
Graphics Shaders: Theory and Practice is also a good book.
The official GLSL book (OpenGL Shading Language) as well.

I would also suggest starting with OpenGL/GLSL 4.0 or 4.1, since it is the most recent, and it is actually more convenient for the programmer as well.
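
For example, a minimal 4.1-style vertex + fragment shader pair looks something like this (just an untested sketch to show the modern in/out syntax; names like mvp and fragColor are my own):

// vertex shader
#version 410 core

layout(location = 0) in vec3 position;  // per-vertex position
uniform mat4 mvp;                       // combined model-view-projection matrix

void main()
{
    gl_Position = mvp * vec4(position, 1.0);
}

// fragment shader
#version 410 core

out vec4 fragColor;  // explicit output instead of the old gl_FragColor

void main()
{
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);  // constant color, no lighting
}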

#3 muk45   Members   -  Reputation: 100


Posted 14 June 2011 - 02:58 PM

Thanks a lot for the suggestion.

#4 SimonForsman   Crossbones+   -  Reputation: 6041


Posted 14 June 2011 - 03:56 PM

Hello Everyone,

I have started a little bit of shader programming with GLSL. In most books and tutorials I could find how to program shaders, but I would like to know why exactly shaders are used, what is wrong with the fixed-pipeline architecture, and how shaders came into being. Are there any resources I can find that would help me learn more about the theory behind shaders?

Thanks in advance.


The problem with the fixed pipeline is that it's fixed and thus not very flexible: there is a set of functions available that you can use to render things, and that's it. With shaders you have much more control.

The vertex shader basically outputs the vertex position in clip space plus whatever other vertex attributes you need, and it can take any data you want as input. Normally you take the vertex's model-space position and other attributes, plus the model, view and projection matrices, and transform the vertex to clip space, but you can send in other data and transform it in other ways as well. This means you can do, for example, hardware skinning (pass in the weights for each bone and their transformation matrices, and use those to transform the vertex before projecting it), or anything else you can think of. The pixel shader works similarly: it gets its input from the vertex shader's output (plus any uniform data you send in yourself) and outputs the final depth, color, etc. of a pixel. How you calculate the final pixel color is up to you, so again it is far more flexible than the fixed pipeline, where your options are restricted to the methods provided by the API/driver.
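
For example, a hardware-skinning vertex shader could look roughly like this (an untested sketch; the attribute layout, MAX_BONES and all the names are made up for illustration):

#version 410 core

const int MAX_BONES = 64;                   // arbitrary limit for this sketch

layout(location = 0) in vec3 position;      // vertex position in model space
layout(location = 1) in ivec4 boneIndices;  // up to 4 bones influencing this vertex
layout(location = 2) in vec4 boneWeights;   // matching weights, should sum to 1

uniform mat4 boneMatrices[MAX_BONES];       // one transformation matrix per bone
uniform mat4 modelViewProjection;

void main()
{
    // blend the bone transforms by their weights, then project as usual
    vec4 p = vec4(position, 1.0);
    vec4 skinned = boneWeights.x * (boneMatrices[boneIndices.x] * p)
                 + boneWeights.y * (boneMatrices[boneIndices.y] * p)
                 + boneWeights.z * (boneMatrices[boneIndices.z] * p)
                 + boneWeights.w * (boneMatrices[boneIndices.w] * p);
    gl_Position = modelViewProjection * skinned;
}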

Anything you can do with shaders is theoretically possible with a fixed pipeline as well, but hardware or driver developers would have to explicitly add support to the fixed pipeline for pretty much everything. Hardware skinning, for example, would require the fixed pipeline to support multiple weights (one per bone) for each vertex and additional transformation matrices (also one per bone), and that support wouldn't be usable for much else; more complex effects would be even less flexible.

#5 dpadam450   Members   -  Reputation: 885


Posted 15 June 2011 - 01:58 PM

It wouldn't be manageable. Imagine you have a terrain shader that blends all kinds of grass and dirt. Then you want to add a light map to that. Then you want to be able to normal map it. Then you want it to run off a height map that "pushes" the terrain up. Those are 4 independent on/off options, which makes 2^4 = 16 combinations. Imagine having to write a program that takes care of all of those combinations.

Say you have terrain and you just want grass. Or grass and dirt. Or grass, dirt and sand. There are just so many combinations. If you write a shader, you know you only need to blend 2 or 3 or 4 textures, and you can blend them exactly how you want to. Assuming you understand height maps: they have a scale factor applied, so bright white gets multiplied by your maximum terrain height. In a shader you can just multiply, but if it were fixed function, doing all the stuff I just said would look something like:

StartTerrainRendering();
Enable2BlendTextures();
UseTerrainLightMap();
SetTerrainMaxHeightMapScale();
...
And then for every new cool feature Crysis 3, 4, 5 come out with, you would have to turn on/off more and more combinations of junk. Fixed function itself is a giant shader: it has to check how to render a pixel, what color (glColor) is set, whether lighting is enabled, and so on. So it is easier to say "I want these 5 specific terrain features" than to turn off 100 of them and enable the 5 that you want.
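
Roughly what the shader version could look like (an untested sketch; every texture and uniform name here is invented):

// vertex shader: "push" the terrain up by the height map
#version 410 core

layout(location = 0) in vec3 position;  // flat grid position
layout(location = 1) in vec2 uv;

uniform sampler2D heightMap;
uniform float maxTerrainHeight;  // bright white ends up at this height
uniform mat4 mvp;

out vec2 texCoord;

void main()
{
    float h = texture(heightMap, uv).r;  // 0.0 (black) .. 1.0 (white)
    gl_Position = mvp * vec4(position + vec3(0.0, h * maxTerrainHeight, 0.0), 1.0);
    texCoord = uv;
}

// fragment shader: blend grass and dirt by a mask, then apply the light map
#version 410 core

in vec2 texCoord;

uniform sampler2D grassTex;
uniform sampler2D dirtTex;
uniform sampler2D blendMask;  // r channel: 0 = all grass, 1 = all dirt
uniform sampler2D lightMap;

out vec4 fragColor;

void main()
{
    float t = texture(blendMask, texCoord).r;
    vec4 base = mix(texture(grassTex, texCoord), texture(dirtTex, texCoord), t);
    fragColor = base * texture(lightMap, texCoord);
}

Want normal mapping too? You edit the fragment shader, instead of waiting for the driver to grow yet another switch.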



