FlexIron

GLSL Working on Nvidia & not on AMD


I have some simple shaders I made that are working on my Nvidia card but not on my friend's AMD card. They are compiling on his card but not rendering anything. I have heard AMD is stricter than Nvidia on sticking to the specifications but I have followed them as far as I can tell. Any suggestions?

Vertex shader:

#version 150

in vec4 in_Position;
in vec4 in_Color;
in vec2 in_TextureCoord;

uniform int width;
uniform int height;
uniform int xOffset;
uniform int yOffset;
uniform int zOffset;

out vec4 pass_Color;
out vec2 pass_TextureCoord;

void main(void) {
    gl_Position.z = in_Position.z + (float(zOffset) / 100.0f);
    gl_Position.w = in_Position.w;
    gl_Position.x = ((0.5f / float(width)) + ((in_Position.x + float(xOffset)) / float(width))) * 2.0f - 1.0f;
    gl_Position.y = ((0.5f / float(height)) + ((in_Position.y + float(yOffset)) / float(height))) * 2.0f - 1.0f;
    pass_Color = in_Color;
    pass_TextureCoord = in_TextureCoord;
}

Fragment shader:

#version 150

uniform sampler2D texture_diffuse;
uniform int final;

in vec4 pass_Color;
in vec2 pass_TextureCoord;

out vec4 out_Color;

void main(void) {
    out_Color = pass_Color;
    out_Color = texture(texture_diffuse, pass_TextureCoord);
    if(out_Color.w != 0.0f && final == 1){
        out_Color.w = 1.0f;
    }
}

If you have verified the shader compiles and links without errors or warnings, consider using glValidateProgram to check the program state at the exact point where you would normally render.
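
In case it helps, here is a minimal sketch in C of what that check could look like, assuming a linked program object and a GLEW-style loader (none of the host code is posted, so the names are just for illustration):

#include <stdio.h>
#include <GL/glew.h> /* assumed loader; use whatever your project already uses */

/* Call this right before the draw call that produces no output, with the
   same textures, VAO, etc. bound as for the real draw. */
static void validate_program(GLuint program)
{
    GLint status = GL_FALSE;

    glValidateProgram(program);
    glGetProgramiv(program, GL_VALIDATE_STATUS, &status);

    if (status != GL_TRUE) {
        char log[4096];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        fprintf(stderr, "Program validation failed:\n%s\n", log);
    }
}

glValidateProgram checks the program against the state that is currently bound, so running it anywhere other than the draw call itself can give a different answer.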

Check if the shaders compile properly using the GLSL Reference Compiler. This is the GLSL compiler written by Khronos and is the "gold standard" for all GLSL compilers so it should tell you which vendor is handling the shaders incorrectly.

Edited by Xycaleth


And try the obvious problems as well.

 

For example, in the pixel shader you have:

void main(void) {
    out_Color = pass_Color;
    out_Color = texture(texture_diffuse, pass_TextureCoord);

Now you would expect any decent compiler to handle that, but I have had cases where things like that have broken code.

 

I would change it to:

void main(void) {
    out_Color = pass_Color;
    out_Color *= texture(texture_diffuse, pass_TextureCoord);

And see what happens.

 

Then I would change the pixel shader to write to gl_FragColor and see what happens.

Edited by Stainless

Check that your API calls are correct, for instance not forgetting glEnableVertexAttribArray, or using the wrong glUniform variant (e.g. glUniform1f where the uniform is an int and needs glUniform1i).
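
To make that concrete, here is a hedged sketch in C of the calls that have to line up with the vertex shader above. The interleaved layout (4 position + 4 color + 2 texcoord floats), the handles and the values are assumptions for illustration, since the host code isn't posted:

#include <GL/glew.h> /* assumed loader */

/* Assumed handles created by setup code not shown in the thread. */
extern GLuint program, vao, vbo;

void setup_attribs_and_uniforms(void)
{
    GLsizei stride = 10 * sizeof(GLfloat); /* assumed: 4 pos + 4 color + 2 texcoord */

    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Every 'in' the vertex shader uses needs its array enabled and described;
       glEnableVertexAttribArray is the call that is easy to forget. */
    GLint pos = glGetAttribLocation(program, "in_Position");
    GLint col = glGetAttribLocation(program, "in_Color");
    GLint tex = glGetAttribLocation(program, "in_TextureCoord");
    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 4, GL_FLOAT, GL_FALSE, stride, (void *)0);
    glEnableVertexAttribArray(col);
    glVertexAttribPointer(col, 4, GL_FLOAT, GL_FALSE, stride, (void *)(4 * sizeof(GLfloat)));
    glEnableVertexAttribArray(tex);
    glVertexAttribPointer(tex, 2, GL_FLOAT, GL_FALSE, stride, (void *)(8 * sizeof(GLfloat)));

    /* The shader declares the offsets as int, so the *i variants are required;
       glUniform1f on an int uniform raises GL_INVALID_OPERATION and the value stays unset. */
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "width"),   800);
    glUniform1i(glGetUniformLocation(program, "height"),  600);
    glUniform1i(glGetUniformLocation(program, "xOffset"), 0);
    glUniform1i(glGetUniformLocation(program, "yOffset"), 0);
    glUniform1i(glGetUniformLocation(program, "zOffset"), 0);
    glUniform1i(glGetUniformLocation(program, "texture_diffuse"), 0); /* texture unit 0 */
}

In real code the attribute locations should also be checked for -1 (the compiler may optimize an unused 'in' away) before being passed to the calls above.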


Pretty sure AMD's GLSL compiler will complain if you put 'f' at the end of float literals (because it's not valid GLSL).

 

But yeah, when each shader is compiled, check whether it compiled correctly and print the info log if it didn't. Then link the program and check whether it linked correctly, again printing the log on failure. You won't get anywhere if you don't start doing that.
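
Something like this, in C, with made-up helper names (assumes a GLEW-style loader):

#include <stdio.h>
#include <GL/glew.h> /* assumed loader */

/* Returns 1 if the shader compiled, otherwise prints the info log. */
static int check_shader_compiled(GLuint shader)
{
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[4096];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile failed:\n%s\n", log);
    }
    return ok == GL_TRUE;
}

/* Returns 1 if the program linked, otherwise prints the info log. */
static int check_program_linked(GLuint program)
{
    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[4096];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        fprintf(stderr, "program link failed:\n%s\n", log);
    }
    return ok == GL_TRUE;
}

Call the first after each glCompileShader and the second after glLinkProgram, and bail out (or at least log loudly) if either returns 0.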

 

Also, use ARB_debug_output/KHR_debug.
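
For the debug-output route, a rough sketch in C, assuming the context was created with the debug flag and supports GL 4.3 or KHR_debug:

#include <stdio.h>
#include <GL/glew.h> /* assumed loader */

static void APIENTRY debug_callback(GLenum source, GLenum type, GLuint id,
                                    GLenum severity, GLsizei length,
                                    const GLchar *message, const void *user)
{
    (void)source; (void)type; (void)id; (void)severity; (void)length; (void)user;
    fprintf(stderr, "GL debug: %s\n", message);
}

void enable_gl_debug_output(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); /* report at the offending call, not later */
    glDebugMessageCallback(debug_callback, NULL);
}

The driver then reports errors and warnings as they happen, which is far more informative than polling glGetError.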


Good spot on the 'f' suffixes :) That certainly isn't valid GLSL, even though some vendors accept it.

I read in a few places that removing the 'f' suffix fixed some people's issues with AMD, and that the suffix differentiates between a float and a double? Although I see no references to a double in the GLSL reference sheet.

Just ran them both through the GLSL Reference Compiler and it reported no issues. I will try the rest of the suggestions! Thanks for being so helpful!

Edited by FlexIron

Using 'f' for floats should be legal from GLSL 1.2 onwards. If that were the problem, you would see a clear compile error for the shader. You do remember to call glGetShaderiv with GL_COMPILE_STATUS, though? Failure to compile a shader will not set a normal GL error. The same goes for linking.

Have you verified your code catches actual shader compile or linker errors? If you did, it's most likely an issue with the current program state which glValidateProgram should be able to catch. NVidia unfortunately allows a lot of corner cases to just 'work and do something' even if the specification says otherwise...

As far as I can tell, the 'f' suffix isn't supported at all on Mac or iPhone, so I'd advise avoiding it.
