GLSL: how to define input and output properly

Quote:in my shader, foo1 is assigned to texture unit 0 (GL_TEXTURE0) and foo2 is assigned to texture unit 1 (GL_TEXTURE1). Is that correct or is the assignment decided differently?

No. You have to set the sampler uniform using glUniform1i to the texture unit number. It's not automatically assigned.
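For the foo1/foo2 case in the quote, a minimal sketch (assuming a linked program object shader_program and two texture objects tex1/tex2):

glUseProgram(shader_program);

glActiveTexture(GL_TEXTURE0);                  // unit 0
glBindTexture(GL_TEXTURE_2D, tex1);
glActiveTexture(GL_TEXTURE1);                  // unit 1
glBindTexture(GL_TEXTURE_2D, tex2);

// tell each sampler uniform which unit to sample from
glUniform1i(glGetUniformLocation(shader_program, "foo1"), 0);
glUniform1i(glGetUniformLocation(shader_program, "foo2"), 1);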
Quote:Original post by Erik Rufelt
You specify the parameter ID with TEXCOORD0. Cg turns that into the correct number under the hood. Which OpenGL version do you use?
You can use those gl_TexCoord[0] etc. if you don't want to bother with it, unless you compile for GL3 without the compatibility bit set.

To use GLSL efficiently and cleanly you would implement your own "mini-Cg" that handles all of that the way you like, and if you want, you can write it so that you get away without extra per-shader code. It will obviously be more work than just going with the old version until you get it set up, but if you're doing this to learn GLSL I would recommend going with GL3 anyway. Once you have it set up, I think GL3 is better.
In GLSL the shader program contains both the vertex and the fragment shader, and it's regarded as one shader, so the variable name is the semantic. If you have a variable called 'lightDir' in both the vertex shader and the fragment shader, it will be the same variable: it's bound to the name of the variable instead of to a semantic like 'TEXCOORD0'.
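For example, a sketch of that name matching ('lightDir' is the name from the post; the shader bodies are placeholders):

// --- vertex shader ---
out vec3 lightDir;               // 'varying vec3 lightDir;' in pre-1.30 GLSL
void main() {
	lightDir = vec3(0.0);        // ...compute the real value here...
	gl_Position = vec4(0.0);
}

// --- fragment shader ---
in vec3 lightDir;                // same name and type, so the linker connects them
void main() {
	// ...use lightDir...
}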

Ok then :). Another quick question: if I want to define the input and output as structs, just as in Cg, how can I access them from the application in order to set them?


Quote:Original post by Momoko_Fan
Quote:in my shader, foo1 is assigned to texture unit 0 (GL_TEXTURE0) and foo2 is assigned to texture unit 1 (GL_TEXTURE1). Is that correct or is the assignment decided differently?

No. You have to set the sampler uniform using glUniform1i to the texture unit number. It's not automatically assigned.
Well, in my test application I only used
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, tex);
and nothing else, and this texture was available in the shader.
I'll start explaining from scratch.
In GLSL, the frag and vert shaders are linked together. It's the linker's task to assign which HW slots are used for inputs, uniform-locations and varyings.
Now, with glBindAttribLocation you can force the linker to use a specific attribute slot for vertex attribs. (You must do this _before_ glLinkProgram.)
With glUniform1i you specify which texture-unit slot a given sampler uses. This is done post-link, but it's also kind of a linking step.
So far, your code doesn't have to keep track of which input ended up where - you can enforce those things yourself.
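A sketch of those two "linking" steps (the program object prog and the attribute/sampler names are assumptions):

// pre-link: force which attribute slot each vertex input uses
glBindAttribLocation(prog, 0, "position");
glBindAttribLocation(prog, 1, "normal");
glLinkProgram(prog);             // the bindings only take effect at link time

// post-link: "link" each sampler uniform to a texture unit
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "diffuseMap"), 0);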

But uniform constants... you must fetch their location. And the returned value is not a hardware location like 0 for "C0" or 17 for "C17", but an ID - the ID of an active uniform. The same uniform can exist in both the fragment shader and the vertex shader; in that case it's glUniformXXX's job to know about it and upload to both places.

Then there's the hellish thing that you can't use glUniform4fv for a mat4. You must always use the specific glUniformXXX call matching the type of the constant data (or an error is generated). This is really nasty: not only do you have to upload each and every uniform with a different call, you'll also have a lot of uniforms :) .
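So the straightforward approach becomes one location fetch plus one type-specific upload per uniform, something like this sketch (prog, the names and the data pointers are assumptions):

GLint locMVP     = glGetUniformLocation(prog, "MVP");
GLint locAmbient = glGetUniformLocation(prog, "ambientColor");
glUniformMatrix4fv(locMVP, 1, GL_FALSE, mvp);  // a mat4 needs the Matrix call
glUniform4fv(locAmbient, 1, ambient);          // a vec4 needs glUniform4fv
// ...repeated for every uniform, every time it changes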

There's a way to work around it:

	//----[ VERTEX UNIFORMS ]------------------------[
	struct VVV_FORMAT{ // use this struct in your C++ part, too
		mat4 MVP;
		mat4 MV;
		mat4 P;
		vec4 ambientColor;
		vec4 diffuseColor;
		vec3 norm1;
		float padding[5]; // gotta align to 64 bytes (a full mat4)
	};
	uniform mat4 vvv[4]; // 3 matrices + one mat4 worth of vectors
	#define u_MVP vvv[0]
	#define u_MV  vvv[1]
	#define u_P   vvv[2]
	#define u_ambientColor vvv[3][0]
	#define u_diffuseColor vvv[3][1]
	#define u_norm1  vvv[3][2].xyz
	//-----------------------------------------------/



	//----[ FRAGMENT UNIFORMS ]------------------------[
	struct FFF_FORMAT{
		vec4 color0;
		vec3 EyeVector; float pad0;
	};
	uniform vec4 fff[2];
	#define u_color0 fff[0]
	#define u_EyeVector fff[1].xyz
	//-------------------------------------------------/


On your C++ side, you simply keep track of two base uniform locations: vvv and fff. And of course you keep the VVV_FORMAT and FFF_FORMAT structs known and coherent on both sides. You upload to vvv via glUniformMatrix4fv() and to fff via glUniform4fv(). (Of course, you can make fff a mat4 array too - it'll just make some loops with manual trick-unpacking in your lighting harder later.)
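A sketch of that batched upload (assuming a program object prog; the struct mirrors the GLSL layout above, 64 floats = 4 mat4s):

struct VVV_FORMAT {
	float MVP[16], MV[16], P[16];
	float ambientColor[4], diffuseColor[4];
	float norm1[3];
	float padding[5];            // pad to a full mat4 (64 bytes)
};

VVV_FORMAT vvv_data;             // fill the fields, then upload in one call:
GLint locVVV = glGetUniformLocation(prog, "vvv");
glUniformMatrix4fv(locVVV, 4, GL_FALSE, (const GLfloat*)&vvv_data);

// the fragment side goes the same way, as 2 vec4s:
// glUniform4fv(glGetUniformLocation(prog, "fff"), 2, (const GLfloat*)&fff_data);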


The only way around this is to use UBOs and the like, with structs packed in layout(std140) uniform { ...... } myUni1;
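A sketch of what that UBO route looks like in the shader (the block name PerDraw and its fields are assumptions):

// C++ side: glGetUniformBlockIndex + glUniformBlockBinding, then
// glBindBufferBase(GL_UNIFORM_BUFFER, binding, ubo) and glBufferSubData
layout(std140) uniform PerDraw {
	mat4 MVP;
	mat4 MV;
	vec4 ambientColor;
	vec4 diffuseColor;
} myUni1;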
Ah.. ok, thx :). But uniforms aside, back to my original question: how do I properly define my inputs and outputs? Assuming I use vertex attrib arrays (glVertexAttribPointer) etc.
I was thinking of doing something like
struct a2v{
	vec4 position;
	vec3 normal;
	vec3 tangent;
	vec3 bitangent;
	vec2 texcoords;
};
struct v2f{
	vec4 vpos;
	vec4 color;
	vec3 normal;
	vec3 tangent;
	vec3 bitangent;
	vec2 texcoords;
	vec3 view;
};
in  a2v IN;
out v2f OUT;
In the vertex shader for example. But this way I can't access the position, normal, etc. variables and get a location for them for glVertexAttribPointer...
Those symbols are now known individually to glBindAttribLocation and co. as "IN.position", "IN.normal", ..., "IN.texcoords". You can't get access to the whole "IN" object.

Only UBOs and co. treat structs as structs.
Quote:Original post by idinev
Those symbols are now known individually to glBindAttribLocation and co. as "IN.position", "IN.normal", ..., "IN.texcoords". You can't get access to the whole "IN" object.
Ah, yes, I actually already tried that, though with glGetAttribLocation:
	loc_position = glGetAttribLocation(shader_program, "IN.position");
	loc_normal = glGetAttribLocation(shader_program, "IN.normal");
etc., but I got an invalid location back.
You have several options:
1. add metadata that describes your vertex structure format. This metadata can either be stored directly in your code or stored as an external resource (typically xml).
2. use introspection to examine the vertex structure and bind the correct locations automatically at runtime.
3. use a fixed attribute assignment, where e.g. "position" is always bound to slot 0, "normal" to slot 1 and so on.

#1 is very similar to the D3D approach, where you describe the vertex format manually.

#2 is the same concept taken one step further: you do not even need to describe the vertex format, all necessary information is available at runtime. This is trivial to implement in modern languages like C#, but much less so in C++.

#3 is the equivalent of using the built-in attributes (like gl_Position, gl_Normal), only you get to choose the names. This is the simplest, and least flexible, option.

For #1 and #2 you link your shader and query the names(*), locations and datatypes of the vertex attributes. You then examine your vertex format (either using your metadata or through introspection) and bind the correct fields to the correct locations.

For #3 you simply define your fixed locations prior to linking the program. As long as your GLSL programs and your vertex structures are consistent with this definition, everything will work smoothly.

(*) the safest approach is to query the attribute names and then use those names to query their locations and datatypes. Don't forget you have to link the program first!
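For example, #3 could look like this sketch (the slot numbers, names, stride and offsets are just whatever convention you standardize on):

// applied to every program, before linking:
glBindAttribLocation(prog, 0, "position");
glBindAttribLocation(prog, 1, "normal");
glBindAttribLocation(prog, 2, "texcoords");
glLinkProgram(prog);

// vertex setup then always uses the same slots for every mesh:
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, stride, posOffset);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, normOffset);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, uvOffset);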

Edit: "IN.position" is only valid if your GLSL program defines a structure named "IN" that contains a "position" field. If you use a plain attribute, you'll need to drop the "IN." part. If your structure uses a different name, you'll need to replace "IN" by its name.


I guess I will go with option #3 :). But the question is, can I use the structs as I described above
struct a2v{
	vec4 position;
	vec3 normal;
	vec3 tangent;
	vec3 bitangent;
	vec2 texcoords;
};
struct v2f{
	vec4 vpos;
	vec4 color;
	vec3 normal;
	vec3 tangent;
	vec3 bitangent;
	vec2 texcoords;
	vec3 view;
};
in  a2v IN;
out v2f OUT;
or do I have to rewrite it to
in vec4 a2v_position;
in vec3 a2v_normal;
in vec3 a2v_tangent;
in vec3 a2v_bitangent;
in vec2 a2v_texcoords;

out vec4 v2f_vpos;
out vec4 v2f_color;
out vec3 v2f_normal;
out vec3 v2f_tangent;
out vec3 v2f_bitangent;
out vec2 v2f_texcoords;
out vec3 v2f_view;
for the vertex shader and in the fragment shader I define
in vec4 v2f_vpos;
in vec4 v2f_color;
in vec3 v2f_normal;
in vec3 v2f_tangent;
in vec3 v2f_bitangent;
in vec2 v2f_texcoords;
in vec3 v2f_view;
?
Quote:Original post by Fiddler
Edit: "IN.position" is only valid if your GLSL program defines a structure named "IN" that contains a "position" field. If you use a plain attribute, you'll need to drop the "IN." part. If your structure uses a different name, you'll need to replace "IN" by its name.
That's what I had. But I still got an invalid location back.
Truth is, I've never used structs for attributes - only for uniforms - so I don't know if they are supported in this case. I think they are, but you should verify this against the GLSL specs.

Edit: did your shaders compile and link correctly? If so, then these structs are supported. Use glGetActiveAttrib to retrieve the names of your attributes.
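A sketch of that enumeration (assuming a linked program object prog):

GLint count = 0;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i) {
	GLchar name[256];
	GLint size; GLenum type;
	glGetActiveAttrib(prog, i, sizeof(name), NULL, &size, &type, name);
	GLint loc = glGetAttribLocation(prog, name);  // e.g. "IN.position"
	// match name/type against your vertex format and bind loc
}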


