Vertex shader's failing to link...

5 comments, last by BitMaster 11 years, 8 months ago
I added the sampler1D colorTexture, the color_offset and tmp calculations, and changed the color assignment from color = vColor. I simply don't see why this wouldn't link. If I change color = tmp.rgba; back to color = vColor;, it links fine.

[source lang="cpp"]GLbyte TerrainVertexShaderStr[] =
"uniform sampler1D colorTexture; \n"
"in vec4 vPosition; \n" // Vertex Position
"in vec3 vNormal; \n" // Normal
"in vec4 vColor; \n" // The RGBA color
"in vec2 vTexCoord; \n" // Texture Coordinate
"out vec2 texCrd; \n"
"out vec4 color; \n"
"void main () { \n"
" texCrd = vTexCoord; \n"
" float color_offset; \n"
" vec4 tmp; \n"
" color_offset = (gl_Vertex.y + 400.0f) / 5120.0f; \n"
" tmp = texture1D(colorTexture, color_offset); \n"
" color = tmp.rgba; \n"
" gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; \n"
"}";

GLbyte TerrainFragmentShaderStr[] =
"uniform sampler2D grndTexture; \n"
"in vec4 color; \n"
"in vec2 texCrd; \n"
"void main () { \n"
" vec4 tcolor = texture(grndTexture, texCrd); \n"
" gl_FragColor = color * tcolor; \n"
"}";[/source]
Erm, you forgot to tell us what the reported link error actually is!

Code looks fine to me - bad drivers? The no-op "rgba" swizzle/selector might be confusing the driver.
There's no need for tmp at all here - in fact the compiler should optimize it out (if it's doing its job right). Maybe try "color = texture1D(colorTexture, color_offset);" instead?
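
For illustration, a sketch of what that simplification might look like, keeping the constants and built-ins exactly as in the original post (the 'f' suffix is discussed further down); only the body of main() changes:

[source lang="cpp"]// Suggested simplification: drop tmp and assign the texture lookup directly.
"void main () { \n"
" texCrd = vTexCoord; \n"
" float color_offset = (gl_Vertex.y + 400.0f) / 5120.0f; \n"
" color = texture1D(colorTexture, color_offset); \n"
" gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; \n"
"}";[/source]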


GLSL does not use an 'f' suffix after floating-point constants; any floating-point value is considered 32 bits unless specified as a half. The suffix causes a compilation error.
The reason you are not getting the error when using color = vColor; is that the compiler optimizes out your entire color_offset and texture1D statements.
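
A minimal sketch of that suggested change, assuming the driver really does reject the suffix; only this one line of the vertex shader string differs:

[source lang="cpp"]// Same line as in the original vertex shader string, with the 'f' suffixes removed.
" color_offset = (gl_Vertex.y + 400.0) / 5120.0; \n"[/source]
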
Actually "f" as a suffix is allowed, it's just not needed in most cases.

That aside, you should be calling glGetShaderInfoLog to retrieve the actual error. It will most likely be something extremely simple once you see the error messages from the driver.
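
For example, a minimal sketch of such a check (the helper name and the assumption that a GL loader such as GLEW is already included are illustrative, not from the original code):

[source lang="cpp"]#include <cstdio>
#include <vector>

// Prints the compile log for a single shader object if compilation failed.
void PrintShaderCompileLog(GLuint shader)
{
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE)
    {
        GLint length = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length);
        std::vector<char> log(length > 0 ? length : 1);
        glGetShaderInfoLog(shader, (GLsizei)log.size(), NULL, &log[0]);
        std::fprintf(stderr, "Shader compile error:\n%s\n", &log[0]);
    }
}[/source]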

Actually "f" as a suffix is allowed, it's just not needed in most cases.


Depending on your GL_VERSION and/or driver it may or may not be. The original GLSL specs explicitly didn't allow it and it should cause a compile to fail; NVIDIA have always accepted it.


Interesting. I'm pretty sure I sometimes mistyped float values with "f" in my hobby project and it compiled on an AMD card (that would have been #version 330, though). I tried to find which version started allowing it but could not find anything useful in a hurry. The Wiki mentions it explicitly for turning integer constants into floats but unfortunately does not make a clear distinction regarding the version.

While I certainly appreciate the correction, the core issue still stands: use glGetShaderInfoLog; it tells you everything you need to know about your errors without all the guesswork.
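
Since the thread title describes a link failure specifically, the analogous calls on the program object are glGetProgramiv and glGetProgramInfoLog; a rough sketch along the same lines as above (helper name again made up, same includes as the previous snippet):

[source lang="cpp"]// Prints the link log for a program object if linking failed.
void PrintProgramLinkLog(GLuint program)
{
    GLint status = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status == GL_FALSE)
    {
        GLint length = 0;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &length);
        std::vector<char> log(length > 0 ? length : 1);
        glGetProgramInfoLog(program, (GLsizei)log.size(), NULL, &log[0]);
        std::fprintf(stderr, "Program link error:\n%s\n", &log[0]);
    }
}[/source]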

