GLSL: how to define input and output properly

Well, specifying structs with 'in' is legal at least. But I guess I will try without structs first.

There are still a few things I don't understand. How does the shader even know what to rasterize when using attribute arrays? I mean, how does the shader/the GPU know which attribute holds the vertices of the triangles that have to be rasterized when using attribute arrays? It's also very unclear to me how to use attribute arrays with vertex buffer objects; it seems there are almost no examples for that anywhere :/ (but I will open up another thread for that).
Quote:Original post by Spikx
There are still a few things I don't understand. How does the shader even know what to rasterize when using attribute arrays? I mean, how does the shader/the GPU know which attribute holds the vertices of the triangles that have to be rasterized when using attribute arrays? It's also very unclear to me how to use attribute arrays with vertex buffer objects; it seems there are almost no examples for that anywhere :/ (but I will open up another thread for that).


It's up to you to decide; it can be whichever attribute you want. The rasterized vertices are those passed to gl_Position in the vertex shader. Whether those even correspond to an input array is entirely up to you.
You make a VBO into an attribute source with glVertexAttribPointer, whose first argument is the attribute index that the currently bound array buffer should correspond to. When you then bind a shader input to that index with glBindAttribLocation, that attribute is fetched from the VBO. You can bind the same VBO to several attributes by calling glVertexAttribPointer several times; its 'stride' and 'pointer' (offset) arguments let you create interleaved arrays containing several attributes.
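For example, a rough sketch of that for a single interleaved VBO holding a position and a normal per vertex (the names shader_program, vbo, a2v_position and a2v_normal, the attribute indices 0 and 1, and the vertex layout are all just placeholders for illustration):

/* Assumed interleaved layout: 3 position floats + 3 normal floats per vertex. */
GLsizei stride = 6 * sizeof(GLfloat);

/* Pick which attribute index each shader input uses
   (this has to happen before the program is linked to take effect). */
glBindAttribLocation(shader_program, 0, "a2v_position");
glBindAttribLocation(shader_program, 1, "a2v_normal");
glLinkProgram(shader_program);

/* Point both attributes at the same currently bound VBO, using
   stride and offset to pick out the interleaved data. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (const GLvoid*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (const GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);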
Quote:Original post by Spikx
There are still a few things I don't understand. How does the shader even know what to rasterize when using attribute arrays? I mean, how does the shader/the GPU know which attribute holds the vertices of the triangles that have to be rasterized when using attribute arrays? It's also very unclear to me how to use attribute arrays with vertex buffer objects; it seems there are almost no examples for that anywhere :/ (but I will open up another thread for that).


Shaders don't rasterize. Let me explain the GL pipeline a bit:

Vertex attributes are the input of the vertex shader, which does the coordinate transformation, outputs any number of varying variables for interpolation, and writes the transformed screen-space vertex position to a special variable (gl_Position).

That position is used by the rasterization stage to produce fragments, together with their associated interpolated varying values. The fragment shader runs once for each of those fragments, receives the interpolated varying variables and some predefined variables as input, and outputs the fragment color through gl_FragColor.
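In shader terms, a minimal vertex/fragment pair illustrating that flow could look roughly like this (a sketch only; it assumes the in/out style of at least #version 130, and the u_modelViewProjection uniform and the a2v_*/v2f_* names are placeholders):

// --- vertex shader ---
#version 130

in vec4 a2v_position;   // vertex attribute fetched from an attribute array
in vec4 a2v_color;
out vec4 v2f_color;     // varying: interpolated by the rasterizer

uniform mat4 u_modelViewProjection;

void main()
{
    v2f_color   = a2v_color;
    gl_Position = u_modelViewProjection * a2v_position;  // feeds rasterization
}

// --- fragment shader ---
#version 130

in vec4 v2f_color;      // the interpolated value for this fragment

void main()
{
    gl_FragColor = v2f_color;
}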
Quote:Original post by Erik Rufelt
It's up to you to decide; it can be whichever attribute you want. The rasterized vertices are those passed to gl_Position in the vertex shader. Whether those even correspond to an input array is entirely up to you.
Oh ok, so writing to gl_Position is still necessary. I forgot that this is the equivalent of writing to a variable bound to the POSITION semantic in Cg.
I still have trouble getting the location indices for my attributes. For example, I have defined these in my vertex shader:

in  vec4 a2v_position;
in  vec3 a2v_normal;
in  vec3 a2v_tangent;
in  vec3 a2v_bitangent;
in  vec2 a2v_texcoords;

And resolved their location indices with:

	loc_position = glGetAttribLocation(shader_program, "a2v_position");
	loc_normal = glGetAttribLocation(shader_program, "a2v_normal");
	loc_tangent = glGetAttribLocation(shader_program, "a2v_tangent");
	loc_bitangent = glGetAttribLocation(shader_program, "a2v_bitangent");
	loc_texcoords = glGetAttribLocation(shader_program, "a2v_texcoords");

However, glGetAttribLocation does not return a valid index for any of them, and no error shows up in the shader or program logs, or via GLintercept.
Are you really using all those attributes in your shader? If not, the compiler will optimize them out.
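One way to catch that is to check the return value: glGetAttribLocation returns -1 for any attribute that is not active in the linked program. A quick sketch (assuming shader_program has already been compiled and linked):

GLint loc_normal = glGetAttribLocation(shader_program, "a2v_normal");
if (loc_normal == -1) {
    /* "a2v_normal" is not an active attribute: it is either misspelled,
       or it is unused in the shader and was optimized out. */
}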
Quote:Original post by HuntsMan
Are you really using all those attributes in your shader? If not, the compiler will optimize them out.
*d'Oh* :D I was testing with an almost empty shader.
Sorry for the double post, but it seems I have done something wrong again, at least according to NVidia. While my shader now works fine on ATi, on NVidia I get these errors:

0(1) : error C5060: out can't be used with non-varying v2f_normal
0(1) : error C5060: out can't be used with non-varying v2f_ambient
0(1) : error C5060: out can't be used with non-varying v2f_diffuse
0(1) : error C5060: out can't be used with non-varying v2f_halfVector
0(1) : error C5060: out can't be used with non-varying v2f_lightDir

Those variables are the output variables I defined at the beginning of my vertex shader:

out vec3 v2f_normal;
out vec4 v2f_ambient;
out vec4 v2f_diffuse;
out vec3 v2f_halfVector;
out vec3 v2f_lightDir;

And they correspond to these input variables in the fragment shader:

in vec3 v2f_normal;
in vec4 v2f_ambient;
in vec4 v2f_diffuse;
in vec3 v2f_halfVector;
in vec3 v2f_lightDir;

And they (supposedly) form an interface, according to the GLSL specification:

Quote:If a geometry shader is not present in a program, but a vertex and fragment shader are present, then the output of the vertex shader and the input of the fragment shader form an interface. For this interface, vertex shader output variables and fragment shader input variables of the same name must match in type and qualification (other than out matching to in).
You should really settle on a specific OpenGL version and use that. Do you want to use GL3 without deprecated features, or do you want to use an older version?
Or GL3 in compatibility mode, so you can mix?

You can use #version 150 core at the top of your shaders to force the new version only. This should also make the compilation fail if you run it on a computer where the drivers don't support that version, so you know if that's the problem.
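For example, putting the directive at the very top of a vertex shader (a sketch only; the u_modelViewProjection uniform is a placeholder):

#version 150 core
// #version must appear before anything else in the shader (except comments and whitespace)

in  vec4 a2v_position;
in  vec3 a2v_normal;
out vec3 v2f_normal;

uniform mat4 u_modelViewProjection;

void main()
{
    v2f_normal  = a2v_normal;
    gl_Position = u_modelViewProjection * a2v_position;
}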
Quote:Original post by Erik Rufelt
You should really decide on a set OpenGL version and use that. Do you want to use GL3 without deprecated features, or do you want to use an older version?
Or GL3 in compatibility mode, so you can mix?
In the end, the goal is to have a proper GL3 context, since we also want to use the matrix inverse function in our shaders, which is only available since GLSL 1.40, which in turn requires a GL3 context. But generally we want to mix.
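For reference, the kind of use we have in mind is roughly this (a sketch only; it assumes #version 140 or later for inverse(), and the u_modelView / u_projection uniform names are placeholders):

#version 140

uniform mat4 u_modelView;
uniform mat4 u_projection;

in  vec4 a2v_position;
in  vec3 a2v_normal;
out vec3 v2f_normal;

void main()
{
    // Normal matrix computed in the shader; inverse() only exists from GLSL 1.40 on.
    mat3 normalMatrix = mat3(transpose(inverse(u_modelView)));
    v2f_normal  = normalMatrix * a2v_normal;
    gl_Position = u_projection * u_modelView * a2v_position;
}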


Quote:Original post by Erik Rufelt
You can use #version 150 core at the top of your shaders to force the new version only. This should also make the compilation fail if you run it on a computer where the drivers don't support that version, so you know if that's the problem.
Well, I need a GL3 context before I can use anything above #version 130. However, apart from that, I have another weird problem: whenever I use any preprocessor statements, I get weird syntax errors. For instance, if I put #version 130 at the beginning of my shader file, I get this error on ATi:

Vertex shader failed to compile with the following errors:
ERROR: 0:1: error(#76) Syntax error unexpected tokens following #version
ERROR: error(#273) 1 compilation errors.  No code generated

And these errors on NVidia:

0(1) : error C0129: invalid char 'i' in integer constant suffix
0(1) : error C0129: invalid char 'n' in integer constant suffix
(0) : error C0000: syntax error, unexpected $end at token "<EOF>"
(0) : error C0501: type name expected at token "<invalid atom -1>"
