GLSL just-started questions

5 comments, last by V-man 16 years, 6 months ago
Hi, I've just started learning GLSL and have yet to implement my very first shader program. I only know the concepts introduced in chapter 15 of the red book, so please bear with my ignorance. :)

As far as I understand it, there can be only one shader program running at a time, which may have many shader objects attached. Now, suppose you have a scene that consists of many models and, say, a couple of non-ambient lights that are always on, and that each model has a shader effect of its own. How on earth do you run the light shaders as well as a shader for each of the models? The only way I can think of is attaching/detaching shader objects to the running shader program as the renderer walks through the scene graph, though I want to believe I am wrong, as that doesn't sound all that efficient.

Also, how can a shader for a point light and another for a spot light, for instance, run at the same time? Must each shader loop through all enabled lights and inspect their properties? I.e. if light_position.w == 0, the light is directional.
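For what it's worth, the "inspect light_position.w" idea does work in GLSL 1.x, where the fixed-function light state is visible to shaders through the built-in gl_LightSource array. A minimal sketch of a fragment shader that branches on the light type (diffuse term only, not a complete lighting implementation):

```glsl
// Hypothetical sketch: loop over a fixed number of lights and branch
// on the w component of each light's eye-space position.
varying vec3 normal;   // set by the vertex shader
varying vec3 ecPos;    // eye-space position, set by the vertex shader

void main()
{
    vec3 N = normalize(normal);
    vec3 diffuse = vec3(0.0);

    for (int i = 0; i < 2; ++i)   // however many lights the app enables
    {
        vec3 L;
        if (gl_LightSource[i].position.w == 0.0)
            L = normalize(gl_LightSource[i].position.xyz);         // directional
        else
            L = normalize(gl_LightSource[i].position.xyz - ecPos); // point/spot
        diffuse += max(dot(N, L), 0.0) * gl_LightSource[i].diffuse.rgb;
    }
    gl_FragColor = vec4(diffuse, 1.0);
}
```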
How do you draw a unique texture for each object? You bind one texture, draw its object, then bind a new one and repeat. It's similar for shaders. You use one shader, draw the object that uses it, then switch to a new shader (or no shader at all) and repeat. OpenGL is a state machine. This is how you do pretty much everything in it. At least until Longs Peak comes out.
I'd also like to add that you can use different shaders for each rendering pass. This would be how you get all the different lighting effects - point lights, directional lights and so on...
First off, thanks to everybody for their replies.

Quote:You run one shader, draw the object that uses it, then switch to a new shader (or no shader at all) and repeat.

So basically, I will have to render each object n times, with n equal to the number of lights plus the number of shader effects attached to the object? For instance, n would be 3 if the scene had two lights and a given model had a parallax bump mapping effect attached to it.

What is the purpose then of having the ability to attach several shader objects to one shader program?

I'm sorry if I'm being annoying but I want to understand this right before I go about writing generic code to support this beautiful feature! :)

Thanks again.

It's a flexibility offered by the API.
You can have pieces of your vertex shader in different shaders, compile them, attach them. Attach fragment shaders too.
Then link the entire thing to make a valid program object.
It might give a speed boost when you have many shaders to compile.
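For example (the file names here are hypothetical), you could keep shared lighting code in one shader object with no main(), put the per-material entry point in another, attach both to the same program, and link:

```glsl
// lighting.vert -- shared shader object, no main()
vec3 diffuseTerm(vec3 N, vec3 L, vec3 lightColor)
{
    return max(dot(N, L), 0.0) * lightColor;
}

// material.vert -- per-material shader object; the linker resolves
// diffuseTerm() from the other attached shader object.
vec3 diffuseTerm(vec3 N, vec3 L, vec3 lightColor); // defined elsewhere

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    vec3 L = normalize(gl_LightSource[0].position.xyz);
    gl_FrontColor = vec4(diffuseTerm(N, L, gl_LightSource[0].diffuse.rgb), 1.0);
}
```

On the application side you would glAttachShader() both objects to one program and glLinkProgram() once; it does not change how many times an object is rendered.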
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
Quote:V-Man wrote:
It's a flexibility offered by the API.
You can have pieces of your vertex shader in different shaders, compile them, attach them. Attach fragment shaders too.
Then link the entire thing to make a valid program object.
It might give a speed boost when you have many shaders to compile.

So you're saying that attaching multiple shader objects to one single shader program removes the need to render a given object multiple times?

I have another question. I have recently looked into an open source project (forgot its name) that essentially wraps the GLSL functionality, in particular shader object and program creation, and noticed that it always creates shader objects in pairs, requiring both a vertex and a fragment shader source.

My question is, are shaders always developed in pairs, i.e. a vertex and a fragment shader, or are there situations where a vertex or a fragment shader alone might prove enough? In the latter case, could someone provide me with an example so I understand it better?



PS: If I had a better book than just the red book I wouldn't trouble you guys with this sort of question. Googling this subject doesn't dig up many useful resources either. Thanks for all your patience and comments.
No, what I'm saying is that compiling shaders, attaching them, and linking them costs CPU time. The driver has to do this job. Most people do this at program startup, and it can slow startup down. A few shaders don't cost much; you can measure it in milliseconds. I hear that some people have 20,000 shaders, and compiling them can take up to 10 minutes.

For those guys, this kind of thing might be a benefit.

What you are talking about is the need for multipass, and that depends on your GPU's capabilities and whatever you want to achieve. Then you need to ask questions like how many vertex shader and fragment shader instructions your GPU supports, etc. etc.

Quote:My question is, are shaders always developed in pairs, ie a vertex and a fragment shader, or are there situations where a vertex or a fragment shader alone might prove enough? In the latter case, could someone provide me with an example so I understand it better?


Yes, it's possible to have just a vertex shader or just a fragment shader, but I suggest that you do both and just get used to the idea that shaders are the future. Actually, they are the present already and have been for some years :)
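As an example of a fragment shader used alone: in GLSL 1.x, if no vertex shader is attached, vertices go through the fixed-function pipeline, so a program containing only a fragment shader can still work. A minimal sketch that converts a texture to grayscale (the grayscale weights are the usual luminance coefficients):

```glsl
// Fragment shader only: vertex processing stays fixed-function,
// which supplies gl_TexCoord[0]. No vertex shader is attached.
uniform sampler2D tex;

void main()
{
    vec4 c = texture2D(tex, gl_TexCoord[0].st);
    float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(gray), c.a);
}
```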

This topic is closed to new replies.
