GLSL not behaving the same between graphics cards

8 comments, last by TheChubu 8 years, 5 months ago

I have the following GLSL shader that works on a Mac with an NVIDIA GeForce GT 330M, on a different Mac with an ATI Radeon HD 5750, and in an Ubuntu VM inside that second Mac, but not in an Ubuntu VM inside a Windows machine with a GeForce GTX 780 (all drivers up to date). The shader is pretty basic, so I'm looking for some help figuring out what might be wrong. The vertex shader looks like this (I'm using the cocos2d-x game engine, which is where all of the CC_{x} variables are defined):


varying vec4 v_fragmentColor;
void main() {
    gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
    gl_PointSize = CC_MVMatrix[0][0] * u_size * 1.5; // no 'f' suffix: float suffixes aren't valid until GLSL 1.30
    v_fragmentColor = vec4(1, 1, 1, 1);
}

And the fragment shader:


varying vec4 v_fragmentColor;

void main() {
   gl_FragColor = texture2D(CC_Texture0, gl_PointCoord) * v_fragmentColor; // Can't see anything
   // gl_FragColor = texture2D(CC_Texture0, gl_PointCoord); // Produces the texture as expected, no problems!
   // gl_FragColor = v_fragmentColor; // Produces a white box as expected, no problems!
}

As you can see, I'm getting very strange behavior where both the sampler, CC_Texture0, and the varying vec4, v_fragmentColor, seem to work properly on their own, but multiplying them together causes problems. I'm reasonably confident everything else is set up correctly because it works on the other systems, so it seems to be related to the graphics card, or to some undefined behavior that I'm not aware of. Also, I'm using #version 120 (which was needed for gl_PointCoord). Thanks for any help!
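For reference, here is a minimal diagnostic fragment shader I could swap in to narrow this down (just a sketch, using the same uniform and varying names as above):

#version 120
uniform sampler2D CC_Texture0;
varying vec4 v_fragmentColor;

void main() {
    // Visualize the point-sprite coordinates directly: a red/green gradient
    // means gl_PointCoord itself is sane before the multiply happens.
    gl_FragColor = vec4(gl_PointCoord, 0.0, 1.0);

    // Alternative check: clamp both operands before multiplying, in case
    // one of them carries NaN/garbage on the failing driver.
    // gl_FragColor = clamp(texture2D(CC_Texture0, gl_PointCoord), 0.0, 1.0)
    //              * clamp(v_fragmentColor, 0.0, 1.0);
}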


Try compiling your shaders with the reference compiler (glslang's glslangValidator) to see if they comply with the spec.

I see that "gl_FragColor = v_fragmentColor; // Produces a white box as expected, no problems!"

I wouldn't be so sure about that. Try a better colour, like (0.6, 0.4, 0.2, 1.0) (orange).

If something goes wrong the driver may fallback to an all white or an all black pixel shader.

Also, needless to say, running OpenGL behind a VM is far from optimal (rather... extremely suboptimal).

Another problem is that this isn't the code you're actually executing, because from what you're saying, cocos2d-x is adding more code. It's possible these additions are making the shader non-compliant with the standard.

For example, some ancient code we had in our OGRE engine added code before the user-declared #extension directive; but per the spec, #extension must be declared before anything else. We never noticed until the Mesa software implementation (and also the Mesa Intel drivers) started complaining, because it was the only OpenGL implementation that actually enforced this rule; the others silently accepted the faulty shader.
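For illustration, a compliant ordering looks like this (GL_EXT_gpu_shader4 is just a placeholder extension here):

#version 120
#extension GL_EXT_gpu_shader4 : enable
// Everything else, including any engine-injected uniforms,
// must come after the #version and #extension directives.
uniform sampler2D CC_Texture0;
varying vec4 v_fragmentColor;

void main() {
    gl_FragColor = texture2D(CC_Texture0, gl_PointCoord) * v_fragmentColor;
}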

Thanks for the replies, I really appreciate it. This problem has been frustrating me for a while now. Just to make things clearer, here is the entire vertex shader after preprocessing by cocos2d-x:


#version 120
uniform mat4 CC_PMatrix;
uniform mat4 CC_MVMatrix;
uniform mat4 CC_MVPMatrix;
uniform mat3 CC_NormalMatrix;
uniform vec4 CC_Time;
uniform vec4 CC_SinTime;
uniform vec4 CC_CosTime;
uniform vec4 CC_Random01;
uniform sampler2D CC_Texture0;
uniform sampler2D CC_Texture1;
uniform sampler2D CC_Texture2;
uniform sampler2D CC_Texture3;
//CC INCLUDES END
 
 
    attribute vec4 a_position;
    uniform float u_size;
 
#ifdef GL_ES
    varying lowp vec4 v_fragmentColor;
#else
    varying vec4 v_fragmentColor;
#endif
 
    void main() {
        gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
        gl_PointSize = CC_MVMatrix[0][0] * u_size * 1.5;
        v_fragmentColor = vec4(1, 1, 1, 1);
    }

And the fragment shader:


#version 120
uniform mat4 CC_PMatrix;
uniform mat4 CC_MVMatrix;
uniform mat4 CC_MVPMatrix;
uniform mat3 CC_NormalMatrix;
uniform vec4 CC_Time;
uniform vec4 CC_SinTime;
uniform vec4 CC_CosTime;
uniform vec4 CC_Random01;
uniform sampler2D CC_Texture0;
uniform sampler2D CC_Texture1;
uniform sampler2D CC_Texture2;
uniform sampler2D CC_Texture3;
//CC INCLUDES END
 
 
    #ifdef GL_ES
    precision lowp float;
    #endif
 
    varying vec4 v_fragmentColor;
 
    void main() {
        gl_FragColor = v_fragmentColor * texture2D(CC_Texture0, gl_PointCoord);
        // gl_FragColor = texture2D(CC_Texture0, gl_PointCoord); // Produces the texture as expected, no problems!
        // gl_FragColor = v_fragmentColor; // Produces a white box as expected, no problems!
    }

Now to your responses:

Try compiling your shaders with the reference compiler to see if they comply with the spec.

Thanks for showing me this, I had never seen it before. I ran it on both of the shaders above and it didn't report any problems.

I see that "gl_FragColor = v_fragmentColor; // Produces a white box as expected, no problems!"

I wouldn't be so sure about that. Try a better colour, like (0.6, 0.4, 0.2, 1.0) (orange).

If something goes wrong the driver may fallback to an all white or an all black pixel shader.

Also, needless to say, running OpenGL behind a VM is far from optimal (rather... extremely suboptimal).

Another problem is that this isn't the code you're actually executing, because from what you're saying, cocos2d-x is adding more code. It's possible these additions are making the shader non-compliant with the standard.

For example, some ancient code we had in our OGRE engine added code before the user-declared #extension directive; but per the spec, #extension must be declared before anything else. We never noticed until the Mesa software implementation (and also the Mesa Intel drivers) started complaining, because it was the only OpenGL implementation that actually enforced this rule; the others silently accepted the faulty shader.

I tried (0.6, 0.4, 0.2, 1.0) instead of white and it indeed produces an orange box. So if my shader is compliant and I've gotten it working inside one VM (albeit with a different graphics card), do I have any other options for getting it to work on this one specific platform?

Thanks again for responses.

Do you check for errors when compiling and linking the shaders? Is it failing to link, but you just ignore the failure and try and render with it anyway?

Do you check for errors when compiling and linking the shaders? Is it failing to link, but you just ignore the failure and try and render with it anyway?

I should have mentioned this in the original post, sorry. I'm not getting any errors while compiling or linking.

but not on an Ubuntu VM inside a Windows machine with a GeForce GTX 780 (all drivers up to date).

How do you boot Ubuntu inside the Windows machine? What program do you use, and what are the desired and actual results?

With a multiplication error, it's likely that you're getting NaN (Not a Number) values.

But I think it's more likely a virtual machine issue, where GLSL isn't fully supported, than a shader problem. You could also try declaring the varying color as mediump, as sketched below.
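That is, in both shaders, something like:

#ifdef GL_ES
varying mediump vec4 v_fragmentColor; // mediump instead of lowp
#else
varying vec4 v_fragmentColor; // desktop GLSL 1.20 takes no precision qualifier
#endif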

gl_PointCoord is a fragment language input variable that contains the two-dimensional coordinates indicating where within a point primitive the current fragment is located. If the current primitive is not a point, then values read from gl_PointCoord are undefined.

I think you should stop using gl_PointCoord and pass your own vec2 texture coordinate instead.
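A sketch of that approach, assuming you render textured quads instead of point sprites (a point primitive only has one vertex to supply attributes from); a_texCoord is a hypothetical attribute name:

#version 120
uniform mat4 CC_PMatrix;
uniform mat4 CC_MVMatrix;
attribute vec4 a_position;
attribute vec2 a_texCoord; // hypothetical per-vertex texture coordinate
varying vec2 v_texCoord;

void main() {
    gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
    v_texCoord = a_texCoord; // interpolated across the quad, replaces gl_PointCoord
}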

Do you check for errors when compiling and linking the shaders? Is it failing to link, but you just ignore the failure and try and render with it anyway?

I should have mentioned this in the original post, sorry. I'm not getting any errors while compiling or linking.

Do you use glGetError, glGetProgramInfoLog, and glGetShaderInfoLog? Shader errors do not always show up in glGetError. You can also try glValidateProgram. If none of these show anything, then we can consider it a driver bug.
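For what it's worth, a minimal sketch of that checking on the C side (the GL function names are the standard ones; everything else here is illustrative, and it assumes the GL 2.0 entry points are already loaded through whatever loader the project uses):

#include <stdio.h>
#include <GL/gl.h> /* plus your usual extension loader on Windows */

static void check_shader(GLuint shader)
{
    GLint status = 0;
    char log[4096] = "";
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    glGetShaderInfoLog(shader, sizeof log, NULL, log);
    printf("compile %s\n%s", status ? "OK" : "FAILED", log);
}

static void check_program(GLuint program)
{
    GLint status = 0;
    char log[4096] = "";
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    glGetProgramInfoLog(program, sizeof log, NULL, log);
    printf("link %s\n%s", status ? "OK" : "FAILED", log);

    /* glValidateProgram asks the driver whether the program could run
       against the *current* GL state; the verdict lands in the same log. */
    glValidateProgram(program);
    glGetProgramiv(program, GL_VALIDATE_STATUS, &status);
    glGetProgramInfoLog(program, sizeof log, NULL, log);
    printf("validate %s\n%s", status ? "OK" : "FAILED", log);
}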

Try frag = texture*.5 + vertexColor*.5

What result do you get there?
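In terms of the shaders above, that suggestion would read (a sketch using the thread's names):

#version 120
uniform sampler2D CC_Texture0;
varying vec4 v_fragmentColor;

void main() {
    // If the multiply produces NaN but addition works, both the texture
    // and the colour should still be visible here.
    gl_FragColor = texture2D(CC_Texture0, gl_PointCoord) * 0.5 + v_fragmentColor * 0.5;
}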

I doubt this is the case, but since you're interpolating a constant for all vertices, there's really no interpolation happening. Maybe if you changed it from a varying to a uniform it might behave better, since at this point it's basically just a uniform, or a constant.
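Something like this, with u_color as a hypothetical uniform replacing the constant varying:

#version 120
uniform sampler2D CC_Texture0;
uniform vec4 u_color; // hypothetical: set to (1, 1, 1, 1) from the host code

void main() {
    gl_FragColor = texture2D(CC_Texture0, gl_PointCoord) * u_color;
}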


Since that one time I had issues with an AMD card, I always use swizzle operators to access textures, i.e.:

vec3 lalala = texture2D(tex, coord).xyz;
vec4 lelele = texture2D(tex, coord).xyzw;

And so on... (although the AMD GLSL compiler warned me about it).

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

