[GLSL] NVIDIA vs ATI shader problem


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
17 replies to this topic

#1 IceBreaker23   Members   -  Reputation: 618


Posted 28 March 2014 - 01:33 PM

Hello!

 

I am trying to get my engine running on ATI/AMD (previously I only ran it on NVIDIA).

 

My shader uses diffuse lighting to render a sphere and a plane (both with the same material).

This works totally fine on NVIDIA graphics cards.

On ATI it compiles, but shows really weird results when executed.

The sphere appears to have some issue fetching its texture, and the plane does not appear at all.

I also tried compiling it with AMD's Shader Analyzer and it compiled without errors or warnings.

 

Here is the vertex shader:

#version 130

uniform mat4 gWVP;
uniform mat4 gNormalMatrix;
uniform mat4 gModelViewMatrix;

in vec3 inPosition;
in vec3 inNormal;
in vec2 inTexcoord;
in vec3 inTangent;

out vec2 vTexcoord;

//all of these are in eyespace
out vec3 vPosition;
out vec3 vNormal;
out vec3 vTangent;

void main()
{
	gl_Position = gWVP * vec4(inPosition,1.0);
	
	vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0)).xyz);
	vTangent = normalize((gNormalMatrix * vec4(inTangent,0.0)).xyz);

	vPosition = (gModelViewMatrix * vec4(inPosition,1.0)).xyz;

	vTexcoord = inTexcoord;
}

and here is the fragment shader:

#version 130

const int LIGHT_TYPE_NONE = 0;
const int LIGHT_TYPE_DIRECTIONAL = 1;
const int LIGHT_TYPE_POINT = 2;

struct Light{
	int lightType;
	vec3 position;
	vec4 diffuse;
	float intensity;
	float constantAttenuation;
	float linearAttenuation;
	float quadraticAttenuation;
};

const int NUM_LIGHTS = 4;
uniform Light gLights[NUM_LIGHTS];
uniform vec4 gGlobalAmbient;

vec4 calculateDiffuse(Light light, vec4 surfaceColor, vec3 normal, vec3 lightDir)
{
	vec4 outColor = vec4(0.0);

	vec3 normalizedLightDir = normalize(lightDir);
	float NdotL = max(dot(normal,normalizedLightDir),0.0);

	if(light.lightType == LIGHT_TYPE_DIRECTIONAL)
	{
		if (NdotL > 0.0) {
			outColor += surfaceColor * light.diffuse * light.intensity * NdotL;
		}
	}else if(light.lightType == LIGHT_TYPE_POINT)
	{
		float dist = length(lightDir);
		if (NdotL > 0.0) {
			float attenuation = 1.0 / (light.constantAttenuation +
						light.linearAttenuation * dist +
						light.quadraticAttenuation * dist * dist);

			outColor += surfaceColor * light.diffuse * light.intensity * attenuation * NdotL;
		}
	}

	return outColor;
}

uniform sampler2D gMainTexture;

in vec2 vTexcoord;

in vec3 vPosition;
in vec3 vNormal;
in vec3 vTangent;

out vec4 outColor;

void main (void)  
{
	vec4 texDiffuse = texture(gMainTexture,vTexcoord);

	vec3 normal = normalize(vNormal);
	vec3 tangent = normalize(vTangent);
	vec3 bitangent = cross(normal, tangent);

	mat3 tangentSpaceMatrix = mat3(
		tangent.x, bitangent.x, normal.x,
		tangent.y, bitangent.y, normal.y,
		tangent.z, bitangent.z, normal.z
	);

	//ambient
	outColor = texDiffuse * gGlobalAmbient;

	for(int i = 0;i<NUM_LIGHTS;i++)
	{
		vec3 lightDir = vec3(0.0);
		if(gLights[i].lightType == LIGHT_TYPE_DIRECTIONAL)
		{
			lightDir = normalize(tangentSpaceMatrix * gLights[i].position);
		}else if(gLights[i].lightType == LIGHT_TYPE_POINT)
		{
			lightDir = tangentSpaceMatrix * (gLights[i].position - vPosition);
		}

		if(gLights[i].lightType != LIGHT_TYPE_NONE)
		{
			outColor += calculateDiffuse(gLights[i],texDiffuse,vec3(0.0,0.0,1.0),lightDir);
		}
	}
	
}    
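As an aside, the point-light falloff in calculateDiffuse is the standard constant/linear/quadratic attenuation; reproduced in Python as a quick sanity check (the coefficient values below are illustrative, not from my engine):

```python
def attenuation(dist, constant, linear, quadratic):
    """Standard point-light falloff: 1 / (Kc + Kl*d + Kq*d^2)."""
    return 1.0 / (constant + linear * dist + quadratic * dist * dist)

# With Kc=1, Kl=0, Kq=0 the light never attenuates:
print(attenuation(10.0, 1.0, 0.0, 0.0))  # 1.0
# Purely quadratic falloff fades with the square of the distance:
print(attenuation(2.0, 0.0, 0.0, 1.0))   # 0.25
```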

I hope someone can point out the issue to me or help me find a way to debug it (since it's not a compiler error).

 

Thanks in advance!




#2 NumberXaero   Prime Members   -  Reputation: 1532


Posted 28 March 2014 - 03:13 PM

Might try using the deprecated gl_FragColor if this is a 130 shader, or try explicitly declaring it as "out gl_FragColor;". If that doesn't work, try varying for your shader in's; I've never had a problem on AMD or NV with the varying keyword in 130 shaders.



#3 Tasaq   Members   -  Reputation: 1257


Posted 28 March 2014 - 03:24 PM

This is a very annoying thing when it comes to shaders :\ I remember in my early days, when I tested my application on an Nvidia card everything was OK, but I had problems with AMD. What solved my problems was changing doubles to floats (i.e. changing 0.0 to 0.0f).



#4 Lodeman   Members   -  Reputation: 885


Posted 28 March 2014 - 05:23 PM

I see you're using a for loop to dynamically index gLights[i].
With GLSL I've had problems where a shader worked fine on my desktop PC but wasn't giving proper results on my laptop.

 

My fix was to manually unroll the loop... very tedious, but it solved the problem.
So instead of doing gLights[i] you'd do gLights[0], gLights[1] etc. explicitly.

 

A general debugging tip, by the way: comment out complex calculations and replace them with constant values you know should work. Do this until some form of logical graphical output appears, then work your way down from there until you pinpoint the issue.

 

Hope this helps.
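Applied to the fragment shader in this thread, that bisection might start like this (magenta is just an arbitrary known-good constant; the commented steps are examples, not a fixed recipe):

```glsl
void main(void)
{
	// Step 1: prove the draw call and output binding work with a constant colour.
	outColor = vec4(1.0, 0.0, 1.0, 1.0);

	// Step 2: re-enable one input at a time, e.g.:
	// outColor = texture(gMainTexture, vTexcoord);           // texcoords + sampler
	// outColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);  // visualize normals
}
```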



#5 phantom   Moderators   -  Reputation: 7596


Posted 28 March 2014 - 05:51 PM

Fix the errors the AMD compiler reports; their GLSL implementation tends to be more correct/strict than NV's, which lets more Cg-like syntax through.

#6 Hodgman   Moderators   -  Reputation: 32016


Posted 28 March 2014 - 07:19 PM

This is a very annoying thing when it comes to shaders :\ I remember in my early days, when I tested my application on an Nvidia card everything was OK, but I had problems with AMD. What solved my problems was changing doubles to floats (i.e. changing 0.0 to 0.0f).

Depending on the version, GLSL requires you to put the f suffix on there. nVidia's GLSL compiler deliberately accepts invalid code, whereas AMD's strictly follows the specification. This is a good way for nVidia to use us developers as a marketing/FUD weapon, spreading rumours that AMD has bugs...

 

Seeing that GLSL code is compiled by your graphics driver, which may or may not be the same as your user's driver, it's a good idea to run your code through an external tool to ensure it's valid, such as ShaderAnalyzer, glsl-optimizer, the reference compiler, etc.



#7 Matias Goldberg   Crossbones+   -  Reputation: 3724


Posted 28 March 2014 - 09:02 PM

spreading rumours that AMD has bugs...

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Nowadays it's probably true that NV makes AMD look bad.
But honestly, I despise what NV does on that front, because it means you can't rely on them (running invalid code not only breaks on AMD; Intel's Mesa drivers on Linux are also quite good).

#8 IceBreaker23   Members   -  Reputation: 618


Posted 29 March 2014 - 12:02 PM

I tracked at least one of the issues down to this line in the vertex shader:

vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0f)).xyz);

If I comment it out, the vertex positions are correct. As soon as I leave this line in, the plane is not rendered at all and the sphere's texcoords are wrong.

 

Any idea what might cause this?

 

EDIT: I changed all constant float values to have an f after them.


Edited by IceBreaker23, 29 March 2014 - 12:10 PM.


#9 JohnnyCode   Members   -  Reputation: 339


Posted 29 March 2014 - 01:00 PM

 

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Nowadays it's probably true that NV makes AMD look bad.
But honestly, I despise what NV does on that front, because it means you can't rely on them (running invalid code not only breaks on AMD; Intel's Mesa drivers on Linux are also quite good).

Is there a need to specify the GLSL version to compile shaders with? If there is a newer OpenGL distribution on a system, will deprecated lower versions of GLSL code fail to compile?



#10 IceBreaker23   Members   -  Reputation: 618


Posted 29 March 2014 - 01:23 PM

It gives a compiler error without a #version directive. I decided on version 130 to support as much hardware as possible.



#11 JohnnyCode   Members   -  Reputation: 339


Posted 29 March 2014 - 07:15 PM

It gives a compiler error without a #version directive. I decided on version 130 to support as much hardware as possible.

thank god!



#12 TheChubu   Crossbones+   -  Reputation: 4839


Posted 29 March 2014 - 08:03 PM


thank god!
Praise the lord!

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#13 IceBreaker23   Members   -  Reputation: 618


Posted 30 March 2014 - 01:51 AM

So, does anyone have a clue why the line posted above would cause a problem on ATI but not on NV?

#14 Hodgman   Moderators   -  Reputation: 32016


Posted 30 March 2014 - 02:16 AM

I tracked at least one of the issues down to this line in the vertex shader:

vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0f)).xyz);

If I comment it out, the vertex positions are correct. As soon as I leave this line in, the plane is not rendered at all and the sphere's texcoords are wrong.

Commenting out that line will also cause the vNormal, gNormalMatrix and inNormal variables to be optimized away.

Try replacing it with lines that use them in some other way, such as vNormal = inNormal; etc., and see if you can find other variations which are OK/broken.

 

Also, one other thing that might be relevant is how your vertex attributes are being supplied to the vertex shader from the CPU side.



#15 IceBreaker23   Members   -  Reputation: 618


Posted 30 March 2014 - 03:47 AM

Ok I finally solved it with the help of you guys!

 

The problem was the attribute locations in the vertex shader. NV automatically generates them according to the order in which the attributes were defined, while ATI needs the explicit layout(location = index) in front of each attribute. I also switched to #version 330, since those layout() qualifiers are not available otherwise. The version change is no problem since I want to implement a deferred renderer in the next step anyway.

 

Again, thank you for your help!
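For anyone finding this later, a minimal sketch of the fix described above (attribute names taken from the shaders in this thread; the index values are illustrative and must match whatever the glVertexAttribPointer calls use on the CPU side):

```glsl
#version 330

// Pin each attribute to a fixed location so all drivers agree,
// instead of relying on the order of declaration.
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexcoord;
layout(location = 3) in vec3 inTangent;
```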



#16 Tasaq   Members   -  Reputation: 1257


Posted 30 March 2014 - 04:18 AM

 

This is very annoying thing when it comes to shaders :\ I remeber in my early days when I tested my application on Nvidia card everything was ok, and I had problems with AMD. What solved my problems was changing doubles to floats (i.e. changing 0.0 to 0.0f).

Depending on the version, GLSL requires you to put the f suffix on there. nVidia's GLSL compiler deliberately accepts invalid code, whereas AMD's strictly follows the specification. This is a good way for nVidia to use us developers as a marketing/FUD weapon, spreading rumours that AMD has bugs...

 

Seeing that GLSL code is compiled by your graphics driver, which may or may not be different to your user's driver, it's a good idea to run your code through an external tool to ensure it's valid, such as ShaderAnalyzer, glsl-optimizer, the reference compiler, etc...

 

 

It was back when I used shaders for the very first time and 'I had no idea what I was doing' :). I was using Shader Designer back then, btw.



#17 phantom   Moderators   -  Reputation: 7596


Posted 30 March 2014 - 02:11 PM

NV automatically generates them according to the order in which the attributes were defined, while ATI needs the explicit layout(location = index) in front of each attribute.


If you don't specify the locations (no layout, or pre-330) then you should query them from the compiled/linked program, as the driver is free to assign them as it likes.

#18 TheChubu   Crossbones+   -  Reputation: 4839


Posted 30 March 2014 - 03:33 PM


If you don't specify the order (so no layout or pre-330) then you should query the data from the compiled/linked program as the driver is free to do as it likes.
This. No explicit layout == query all the locations from the application side after you link the program. That's the standard way to do it in OpenGL, regardless of what the NV driver does.
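A sketch of that query on the application side, assuming a valid GL context and a successfully linked program object prog (prog, stride and offset are illustrative names, not from the thread):

```c
/* After glLinkProgram(prog) succeeds, ask the driver where it actually
 * placed each attribute; -1 means it was optimized away or misspelled. */
GLint posLoc = glGetAttribLocation(prog, "inPosition");
GLint nrmLoc = glGetAttribLocation(prog, "inNormal");
GLint uvLoc  = glGetAttribLocation(prog, "inTexcoord");
GLint tanLoc = glGetAttribLocation(prog, "inTangent");

/* Use the queried locations when describing the vertex layout. */
if (posLoc != -1) {
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, stride, offset);
}
```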





