GLSL Working on Nvidia & not on AMD

I have some simple shaders I made that are working on my Nvidia card but not on my friend's AMD card. They compile on his card, but nothing renders. I have heard AMD is stricter than Nvidia on sticking to the specifications, but I have followed them as far as I can tell. Any suggestions?

Vertex shader:

#version 150

in vec4 in_Position;
in vec4 in_Color;
in vec2 in_TextureCoord;

uniform int width;
uniform int height;
uniform int xOffset;
uniform int yOffset;
uniform int zOffset;

out vec4 pass_Color;
out vec2 pass_TextureCoord;

void main(void) {
    gl_Position.z = in_Position.z + (float(zOffset) / 100.0f);
    gl_Position.w = in_Position.w;
    gl_Position.x = ((0.5f / float(width)) + ((in_Position.x + float(xOffset)) / float(width))) * 2.0f - 1.0f;
    gl_Position.y = ((0.5f / float(height)) + ((in_Position.y + float(yOffset)) / float(height))) * 2.0f - 1.0f;
    pass_Color = in_Color;
    pass_TextureCoord = in_TextureCoord;
}

Fragment shader:

#version 150

uniform sampler2D texture_diffuse;
uniform int final;

in vec4 pass_Color;
in vec2 pass_TextureCoord;

out vec4 out_Color;

void main(void) {
    out_Color = pass_Color;
    out_Color = texture(texture_diffuse, pass_TextureCoord);
    if(out_Color.w != 0.0f && final == 1){
        out_Color.w = 1.0f;
    }
}

If you have verified the shader compiles and links without errors or warnings, consider using glValidateProgram to check the program state at the exact point where you would normally render.
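A minimal sketch of that check, assuming 'program' is your linked program handle and that the call is made right before the draw call, with the same GL state bound that you would normally render with:

GLint ok = GL_FALSE;
char log[1024];

glValidateProgram(program);
glGetProgramiv(program, GL_VALIDATE_STATUS, &ok);
if (ok != GL_TRUE) {
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "validate failed: %s\n", log);  /* needs <stdio.h> */
}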

I have some simple shaders I made that are working on my Nvidia card but not on my friend's AMD card. They compile on his card, but nothing renders. I have heard AMD is stricter than Nvidia on sticking to the specifications, but I have followed them as far as I can tell. Any suggestions?

Check if the shaders compile properly using the GLSL Reference Compiler. This is the GLSL compiler written by Khronos and is the "gold standard" for all GLSL compilers so it should tell you which vendor is handling the shaders incorrectly.

Edited by Xycaleth


Check for obvious problems as well.

 

For example, in the pixel shader you have:

void main(void) {
    out_Color = pass_Color;
    out_Color = texture(texture_diffuse, pass_TextureCoord);

Now, you would expect any decent compiler to handle that, but I have had cases where things like that have broken the code.

 

I would change it to

void main(void) {
    out_Color = pass_Color;
    out_Color *= texture(texture_diffuse, pass_TextureCoord);

and see what happens.

 

Then I would change the pixel shader to write to gl_FragColor and see what happens.

Edited by Stainless

Check that your API calls are correct, for instance not forgetting glEnableVertexAttribArray, or using the wrong glUniform variant (float instead of int); see the sketch below.
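A sketch only; the 'program' handle, the attribute layout, and the uniform value are assumptions, with the names taken from the shaders above:

// Enable and describe each attribute array the vertex shader actually reads.
GLint posLoc = glGetAttribLocation(program, "in_Position");
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);

// 'width' is declared as int in the vertex shader, so it must be set with the
// integer variant (program must be bound with glUseProgram first).
glUniform1i(glGetUniformLocation(program, "width"), 800);      // correct
// glUniform1f(glGetUniformLocation(program, "width"), 800.0f); // wrong variant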


Pretty sure AMD's GLSL compiler will complain if you put 'f' at the end of float literals (because it's not valid GLSL).

 

But yeah, when each shader is compiled, check whether it compiled correctly and print the info log if it didn't. Then link the program and check whether it linked correctly, again printing the log on failure. You won't get anywhere until you start doing that.
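A minimal sketch of those checks, assuming 'shader' and 'program' are the handles returned by glCreateShader and glCreateProgram:

GLint ok = GL_FALSE;
char log[1024];

glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    fprintf(stderr, "compile failed: %s\n", log);  /* needs <stdio.h> */
}

glLinkProgram(program);
glGetProgramiv(program, GL_LINK_STATUS, &ok);
if (ok != GL_TRUE) {
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "link failed: %s\n", log);
}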

 

Also, use ARB_debug_output/KHR_debug.
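A sketch of setting that up, assuming a GL 4.3+ (or KHR_debug-capable) debug context and a loader that defines APIENTRY and the GLDEBUGPROC signature:

static void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar *message, const void *userParam)
{
    // Print every message the driver emits (errors, warnings, performance hints).
    fprintf(stderr, "GL debug: %s\n", message);  /* needs <stdio.h> */
}

// After context creation:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  // report on the thread of the offending call
glDebugMessageCallback(debugCallback, NULL);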


Pretty sure AMD's GLSL compiler will complain if you put 'f' at the end of float literals (because it's not valid GLSL).

Good spot on the 'f' suffixes :) That certainly isn't valid GLSL, even though some vendors accept it.


 

Pretty sure AMD's GLSL compiler will complain if you put 'f' at the end of float literals (because it's not valid GLSL).

Good spot on the 'f' suffixes :) That certainly isn't valid GLSL, even though some vendors accept it.

 

I read in a few places that removing the suffix fixed some people's issues with AMD, and that it differentiates between a float and a double, although I see no reference to doubles in the GLSL reference sheet.

I just ran both shaders through the GLSL Reference Compiler and there were no issues. I will try the rest of the suggestions. Thanks for being so helpful!

Edited by FlexIron

Using 'f' for floats should be legal from GLSL 1.2 onwards. If that were the problem, you would see a clear compile error for the shader. You do remember to call glGetShaderiv with GL_COMPILE_STATUS though? Failure to compile a shader will not set a normal GL error. The same goes for linking.

Have you verified your code catches actual shader compile or linker errors? If you have, it's most likely an issue with the current program state, which glValidateProgram should be able to catch. NVidia unfortunately allows a lot of corner cases to just 'work and do something' even if the specification says otherwise...


Pretty sure AMD's GLSL compiler will complain if you put 'f' at the end of float literals (because it's not valid GLSL).

GLSL version 1.3 supports the 'f' postfix.

 

Nevertheless, I would first check the following:

1. Driver! Has your friend installed the newest driver? Many bugs and glitches I have encountered on other PCs were solved by installing a new driver.

2. Hybrid GPU: double-check that your friend is not using the integrated GPU. Automatic selection of the dedicated GPU does not work very well with OpenGL.

3. Maybe it is not the shader that breaks the application; it could be any OpenGL-related feature. Check the OpenGL error state frequently in your code (use defines or similar to compile the checks out of release builds, otherwise they can kill your performance); see the sketch below.
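A sketch of such a check, using a hypothetical helper macro that is compiled out of release builds as suggested:

// Hypothetical helper; requires <stdio.h>.
#ifndef NDEBUG
#define GL_CHECK() \
    do { \
        GLenum err; \
        while ((err = glGetError()) != GL_NO_ERROR) \
            fprintf(stderr, "GL error 0x%04X at %s:%d\n", err, __FILE__, __LINE__); \
    } while (0)
#else
#define GL_CHECK() ((void)0)
#endif

// Usage: sprinkle GL_CHECK(); after suspicious GL calls.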

Edited by Ashaman73


Update on the situation: I have made a small sample program with a small basic shader to test the problem and am running into the same issue.

Here is what the project renders on my Nvidia GPU: [screenshot]

And here is what it renders on AMD GPUs: [screenshot]

Here is the test project so you can all run it yourselves and look through the code to see what is going on... I am beyond confused.

Edited by FlexIron


 


And here is what it renders on AMD GPUs

I'm not sure that has anything to do with your shader code.

 

It looks more like you are using GL_TRIANGLE_STRIP with the vertices in the wrong order for the second one.

 

The problem is that those two screenshots are taken from the exact same compiled program, just running on two different brands of GPU.

 

Are you trying to use GL_QUADS?

No, I am using GL_TRIANGLE_STRIP.

 

 

I was skimming through these forums and noticed this thread: http://www.gamedev.net/topic/666609-nvidia-glsl-and-vertex-attribute-locations/

It turns out I was linking the program before binding the attribute locations. I am so happy to have solved this issue. Thanks for your suggestions, guys!
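For anyone finding this later, a sketch of the fix with hypothetical handles (the attribute names are the ones from the shaders above). glBindAttribLocation only takes effect at link time, so it must be called before glLinkProgram:

glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);

// Bind attribute locations BEFORE linking...
glBindAttribLocation(program, 0, "in_Position");
glBindAttribLocation(program, 1, "in_Color");
glBindAttribLocation(program, 2, "in_TextureCoord");

glLinkProgram(program);

// ...or skip the binding and query the driver-assigned locations afterwards:
// GLint posLoc = glGetAttribLocation(program, "in_Position");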

Edited by FlexIron


I have heard AMD is stricter than Nvidia on sticking to the specifications

By phrasing it this way, you are doing exactly what NVIDIA wants: customers giving AMD a bad rap.
You should pick your words more carefully and accurately: AMD isn't more strict, NVIDIA is more lax. For people to shine a negative light on AMD with phrasing such as "They are more strict" is exactly what NVIDIA wants, never mind how disrespectful it is towards us, the consumers, who are being played as NVIDIA's puppets in this manner.

Most vendors, including AMD, stick closely to the standard because it helps customers and developers. Being as lax as NVIDIA is hurts customers and provides no benefit. Everyone has to make something that can run on AMD anyway, so the bells and whistles NVIDIA adds never get used, and meanwhile they just confuse developers, who end up wasting time going back late in development (rather than catching bugs early, when they are easier to solve) to fix these and other kinds of issues.

So when comparing AMD’s and NVIDIA’s OpenGL support, it should always be stated in such a way as to cast NVIDIA in the more negative light, when applicable.


L. Spiro



Customers giving AMD a bad rap.

I've never blamed ATI/AMD or Intel for their strict handling of GLSL syntax. For me, the negative reputation of ATI was a result of years of bad driver support. As a hobby dev (i.e. a developer without hope of any direct support), I just hated not being able to implement what I wanted because the driver was not handling it correctly, with no indication of when the bug would be fixed. Once burnt, twice shy: after several years I went over to just buying NVidia cards as development GPUs, even if their driver quality is better now and might be better than other vendors'.

In my mind, AMD's reputation is slowly improving, Mantle being a strong argument, but ATI itself was responsible for a large portion of its bad reputation, and a bad reputation lingers for quite some time.
