[GLSL] NVIDIA vs ATI shader problem

16 comments, last by TheChubu 10 years ago

It gives a compiler error without a specified #version directive. I decided on version 130 to support as much hardware as possible.
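In case it helps anyone searching later: the #version directive has to be the very first statement in the shader source (only comments and whitespace may come before it), e.g.:

#version 130

in vec3 inNormal;    // declarations come after the directive
out vec3 vNormal;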

thank god!



thank god!
Praise the lord!

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

So does anyone have a clue why the line below would cause a problem on ATI but not on NV?

I tracked at least one of the issues down to this line in the vertex shader:


vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0f)).xyz);

If I comment it out, the vertex positions are correct. As soon as I uncomment this line, the plane is not rendered at all and the sphere's texcoords are wrong.

Commenting out that line will also cause the vNormal, gNormalMatrix and inNormal variables to be optimized away.

Try replacing it with lines that use them in some other way, such as vNormal = inNormal; etc., and see if you can find other variations which are OK/broken.
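For example, some illustrative variations that keep the variables live (same names as in the snippet above):

vNormal = inNormal;                                    // skip the matrix entirely
vNormal = normalize(inNormal);                         // normalize only
vNormal = (gNormalMatrix * vec4(inNormal, 0.0)).xyz;   // matrix but no normalize

Each of these keeps inNormal and vNormal (and, in the last case, gNormalMatrix) referenced, so the compiler can't optimize them away.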

Also, one other thing that might be relevant is how your vertex attributes are being supplied to the vertex shader from the CPU side.

Ok, I finally solved it with the help of you guys!

The problem was in the attribute locations in the vertex shader. NV automatically generates those according to the order in which the attributes were defined, while ATI needs to have an explicit layout(location = index) in front of each attribute. I also switched to #version 330, since otherwise those layout() qualifiers are not available. The version change is no problem, since I want to implement a deferred renderer in the next step anyway.
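For anyone who finds this thread later, a minimal sketch of what the fixed declarations look like (inNormal, gNormalMatrix and vNormal are from my shader above; the other names are just for illustration):

#version 330

layout(location = 0) in vec3 inPosition;   // illustrative name
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexCoord;   // illustrative name

uniform mat4 gNormalMatrix;

out vec3 vNormal;

The location indices then have to match whatever you pass to glVertexAttribPointer on the CPU side.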

Again, thank you for your help!

This is a very annoying thing when it comes to shaders :\ I remember in my early days, when I tested my application on an Nvidia card everything was OK, but I had problems with AMD. What solved my problems was changing doubles to floats (i.e. changing 0.0 to 0.0f).

Depending on the version, GLSL treats the f suffix there as either valid or a compile error. nVidia's GLSL compiler deliberately accepts invalid code, whereas AMD's strictly follows the specification. This is a good way for nVidia to use us developers as a marketing/FUD weapon, spreading rumours that AMD has bugs...
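A minimal illustration (if I recall correctly, the suffix only became legal in GLSL 1.20, so treat the exact version as an assumption):

float a = 0.0;    // valid float literal in every GLSL version
float b = 0.0f;   // strict compilers check the suffix against the declared #version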

Seeing that GLSL code is compiled by your graphics driver, which may or may not be different from your users' drivers, it's a good idea to run your code through an external tool to ensure it's valid, such as ShaderAnalyzer, glsl-optimizer, the reference compiler (glslangValidator), etc...

It was during the time I used shaders for the very first time and 'I had no idea what I was doing' :). Back then I was using Shader Designer, btw.

NV automatically generates those according to the order in which the attributes were defined, while ATI needs to have an explicit layout(location = index) in front of each attribute.


If you don't specify the order (so no layout, or pre-330), then you should query the data from the compiled/linked program, as the driver is free to do as it likes.


If you don't specify the order (so no layout, or pre-330), then you should query the data from the compiled/linked program, as the driver is free to do as it likes.
This. No explicit layout == query all the positions from the application side after you make the program. It's the standard way to do that in OpenGL, regardless of what the NV driver does.
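A minimal sketch of that query in C, using the attribute names from this thread (the interleaved vertex layout is a made-up assumption, error handling omitted):

#include <GL/glew.h>   /* or whichever GL loader you use */

/* Call after glLinkProgram(program) has succeeded. */
static void setupAttribs(GLuint program)
{
    /* -1 means the attribute is not active, e.g. it was optimized away
       like inNormal was earlier in this thread. */
    GLint posLoc  = glGetAttribLocation(program, "inPosition");  /* name assumed */
    GLint normLoc = glGetAttribLocation(program, "inNormal");

    const GLsizei stride = 6 * sizeof(float);   /* assumed: 3 floats pos + 3 floats normal */

    if (posLoc != -1) {
        glEnableVertexAttribArray((GLuint)posLoc);
        glVertexAttribPointer((GLuint)posLoc, 3, GL_FLOAT, GL_FALSE, stride, (const void*)0);
    }
    if (normLoc != -1) {
        glEnableVertexAttribArray((GLuint)normLoc);
        glVertexAttribPointer((GLuint)normLoc, 3, GL_FLOAT, GL_FALSE, stride,
                              (const void*)(3 * sizeof(float)));
    }
}

Alternatively, calling glBindAttribLocation(program, 0, "inPosition") etc. before glLinkProgram lets you force the locations yourself, which works pre-330 without the layout qualifiers.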

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

