
[GLSL] NVIDIA vs ATI shader problem


Hello!

 

I am trying to get my engine running on ATI/AMD (previously I only ran it on NVIDIA).

 

My shader uses diffuse lighting to render a sphere and a plane (both with the same material).

This works totally fine on NVIDIA graphics cards.

On ATI it compiles, but shows really weird results when executed.

The sphere appears to have some issue with fetching the texture, and the plane does not appear at all.

I also tried compiling it with AMD's Shader Analyzer and it compiled without errors or warnings.

 

Here is the vertex shader:

#version 130

uniform mat4 gWVP;
uniform mat4 gNormalMatrix;
uniform mat4 gModelViewMatrix;

in vec3 inPosition;
in vec3 inNormal;
in vec2 inTexcoord;
in vec3 inTangent;

out vec2 vTexcoord;

//all of these are in eyespace
out vec3 vPosition;
out vec3 vNormal;
out vec3 vTangent;

void main()
{
	gl_Position = gWVP * vec4(inPosition,1.0);
	
	vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0)).xyz);
	vTangent = normalize((gNormalMatrix * vec4(inTangent,0.0)).xyz);

	vPosition = (gModelViewMatrix * vec4(inPosition,1.0)).xyz;

	vTexcoord = inTexcoord;
}

and the fragment shader:

#version 130

const int LIGHT_TYPE_NONE = 0;
const int LIGHT_TYPE_DIRECTIONAL = 1;
const int LIGHT_TYPE_POINT = 2;

struct Light{
	int lightType;
	vec3 position;
	vec4 diffuse;
	float intensity;
	float constantAttenuation;
	float linearAttenuation;
	float quadraticAttenuation;
};

const int NUM_LIGHTS = 4;
uniform Light gLights[NUM_LIGHTS];
uniform vec4 gGlobalAmbient;

vec4 calculateDiffuse(Light light, vec4 surfaceColor, vec3 normal, vec3 lightDir)
{
	vec4 outColor = vec4(0.0);

	vec3 normalizedLightDir = normalize(lightDir);
	float NdotL = max(dot(normal,normalizedLightDir),0.0);

	if(light.lightType == LIGHT_TYPE_DIRECTIONAL)
	{
		if (NdotL > 0.0) {
			outColor += surfaceColor * light.diffuse * light.intensity * NdotL;
		}
	}else if(light.lightType == LIGHT_TYPE_POINT)
	{
		float dist = length(lightDir);
		if (NdotL > 0.0) {
			float attenuation = 1.0 / (light.constantAttenuation +
						light.linearAttenuation * dist +
						light.quadraticAttenuation * dist * dist);

			outColor += surfaceColor * light.diffuse * light.intensity * attenuation * NdotL;
		}
	}

	return outColor;
}

uniform sampler2D gMainTexture;

in vec2 vTexcoord;

in vec3 vPosition;
in vec3 vNormal;
in vec3 vTangent;

out vec4 outColor;

void main (void)  
{
	vec4 texDiffuse = texture(gMainTexture,vTexcoord);

	vec3 normal = normalize(vNormal);
	vec3 tangent = normalize(vTangent);
	vec3 bitangent = cross(normal, tangent);

	//rows are T, B, N (the transpose of a TBN column matrix), so this transforms from eye space into tangent space
	mat3 tangentSpaceMatrix = mat3(
		tangent.x, bitangent.x, normal.x,
		tangent.y, bitangent.y, normal.y,
		tangent.z, bitangent.z, normal.z
	);

	//ambient
	outColor = texDiffuse * gGlobalAmbient;

	for(int i = 0;i<NUM_LIGHTS;i++)
	{
		vec3 lightDir = vec3(0.0);
		if(gLights[i].lightType == LIGHT_TYPE_DIRECTIONAL)
		{
			lightDir = normalize(tangentSpaceMatrix * gLights[i].position);
		}else if(gLights[i].lightType == LIGHT_TYPE_POINT)
		{
			lightDir = tangentSpaceMatrix * (gLights[i].position - vPosition);
		}

		if(gLights[i].lightType != LIGHT_TYPE_NONE)
		{
			//vec3(0.0,0.0,1.0) is the surface normal expressed in tangent space
			outColor += calculateDiffuse(gLights[i],texDiffuse,vec3(0.0,0.0,1.0),lightDir);
		}
	}
	
}    

I hope someone can point out the issue to me or help me find a way to debug it (since it's not a compiler error).

 

Thanks in advance!


You might try using the deprecated gl_FragColor if this is a 130 shader, or try explicitly declaring it as "out gl_FragColor;". If that doesn't work, try varying for your shader in variables; I've never had a problem on AMD or NV with the varying keyword in 130 shaders.
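For what it's worth, here is a minimal sketch of that deprecated-style interface (assuming a #version 130 context where the deprecated built-ins are still accepted; the names are taken from the shaders above):

#version 130

uniform sampler2D gMainTexture;

//deprecated-style input: 'varying' instead of 'in'
varying vec2 vTexcoord;

void main()
{
	//write to the deprecated built-in instead of a user-declared 'out vec4'
	gl_FragColor = texture(gMainTexture, vTexcoord);
}

The matching vertex shader would declare varying vec2 vTexcoord; as its output instead of out vec2 vTexcoord;.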


This is a very annoying thing when it comes to shaders :\ I remember in my early days, when I tested my application on an Nvidia card, everything was OK, but I had problems on AMD. What solved my problems was changing doubles to floats (i.e. changing 0.0 to 0.0f).


I see you're using a for loop to dynamically index gLights.
With GLSL I've had problems where a shader worked fine on my desktop PC but didn't give proper results on my laptop.

 

My fix was to manually unroll the loop... very tedious, but it solved the problem.
So instead of indexing gLights[i] in a loop, you'd write gLights[0], gLights[1], etc. explicitly.
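As a rough sketch against the fragment shader posted above (assuming NUM_LIGHTS stays at 4; only the for loop inside main() changes, everything else stays as it is):

	//light 0: the old loop body with the index written out as a literal
	vec3 lightDir0 = vec3(0.0);
	if(gLights[0].lightType == LIGHT_TYPE_DIRECTIONAL)
	{
		lightDir0 = normalize(tangentSpaceMatrix * gLights[0].position);
	}
	else if(gLights[0].lightType == LIGHT_TYPE_POINT)
	{
		lightDir0 = tangentSpaceMatrix * (gLights[0].position - vPosition);
	}
	if(gLights[0].lightType != LIGHT_TYPE_NONE)
	{
		outColor += calculateDiffuse(gLights[0], texDiffuse, vec3(0.0, 0.0, 1.0), lightDir0);
	}

	//...then the same block repeated with the literal indices 1, 2 and 3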

 

A general debugging tip, by the way: comment out complex calculations and replace them with constant values you know should work. Do this until some form of logical graphical output appears, then work your way back from there until you pinpoint the issue.
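In this shader that could mean temporarily swapping main() for something like the following and re-enabling one line at a time (just a debugging sketch; it only uses the globals already declared in the posted fragment shader):

void main (void)
{
	//step 1: prove the fragment shader runs and the geometry is where you expect it
	outColor = vec4(1.0, 0.0, 1.0, 1.0);

	//step 2: prove texturing and the texcoords work
	//outColor = texture(gMainTexture, vTexcoord);

	//step 3: visualize the interpolated normal to check the normal matrix
	//outColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}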

 

Hope this helps.


spreading rumours that AMD has bugs...

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Nowadays it's probably true that NV makes AMD look bad.
But honestly, I despise what NV does on that front, because it means you can't rely on them (running invalid code doesn't only break on AMD; the Intel Mesa drivers on Linux are also quite good, for example).


I tracked at least one of the issues down to this line in the vertex shader:

vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0f)).xyz);

If I uncomment it, the vertex positions are correct. As soon as I comment this line out, the plane is not rendered at all and the sphere's texcoords are wrong.

 

Any idea what might cause this?

 

EDIT: I changed all constant float values to have an f after them.



 

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Nowadays it's probably true that NV makes AMD look bad.
But honestly, I despise what NV does on that front, because it means you can't rely on them (running invalid code doesn't only break on AMD; the Intel Mesa drivers on Linux are also quite good, for example).

Is there a need to specify a GLSL version to compile shaders with? If there is a more advanced OpenGL implementation on a system, will deprecated, lower-version GLSL code fail to compile?


It gives a compiler error without a #version directive. I decided on version 130 to support as much hardware as possible.


It gives a compiler error without a #version directive. I decided on version 130 to support as much hardware as possible.

thank god!


I tracked at least one of the issues down to this line in the vertex shader:

vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0f)).xyz);

If I uncomment it, the vertex positions are correct. As soon as I comment this line out, the plane is not rendered at all and the sphere's texcoords are wrong.

Commenting out that line will also cause the vNormal, gNormalMatrix and inNormal variables to be optimized away.

Try replacing it with lines that use them in some other way, such as vNormal = inNormal; etc., and see which variations are OK and which are broken.
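For example, sketched against the posted vertex shader (try one variant at a time; each one keeps inNormal and gNormalMatrix alive to a different degree):

	//variant A: pass the attribute through untouched (keeps inNormal active, drops gNormalMatrix)
	vNormal = inNormal;

	//variant B: use the normal matrix but skip the normalize
	vNormal = (gNormalMatrix * vec4(inNormal, 0.0)).xyz;

	//variant C: the original line, for comparison
	vNormal = normalize((gNormalMatrix * vec4(inNormal, 0.0)).xyz);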

 

Also, one other thing that might be relevant is how your vertex attributes are being supplied to the vertex shader from the CPU side.


Ok, I finally solved it with the help of you guys!

The problem was in the attribute locations in the vertex shader. NV automatically assigns them according to the order in which the attributes were defined, while ATI needs an explicit layout(location = index) in front of each attribute. I also switched to #version 330, since those layout() qualifiers are not available otherwise. The version change is no problem, since I want to implement a deferred renderer in the next step anyway.
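For reference, the attribute declarations now look roughly like this (a sketch of the fix described above; the location indices are placeholders and have to match whatever the engine binds on the CPU side):

#version 330

//explicit locations; the indices must match the engine's glVertexAttribPointer setup
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexcoord;
layout(location = 3) in vec3 inTangent;

//the rest of the vertex shader is unchanged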

 

Again, thank you for your help!


 

This is a very annoying thing when it comes to shaders :\ I remember in my early days, when I tested my application on an Nvidia card, everything was OK, but I had problems on AMD. What solved my problems was changing doubles to floats (i.e. changing 0.0 to 0.0f).

Depending on the version, GLSL requires you to put the f suffix on there. nVidia's GLSL compiler deliberately accepts invalid code, whereas AMD's strictly follows the specification. This is a good way for nVidia to use us developers as a marketing/FUD weapon, spreading rumours that AMD has bugs...

 

Seeing that GLSL code is compiled by your graphics driver, which may or may not be different to your user's driver, it's a good idea to run your code through an external tool to ensure it's valid, such as ShaderAnalyzer, glsl-optimizer, the reference compiler, etc...

 

 

That was back when I used shaders for the very first time and 'I had no idea what I was doing' :). I was using Shader Designer back then, btw.


NV automatically assigns them according to the order in which the attributes were defined, while ATI needs an explicit layout(location = index) in front of each attribute.


If you don't specify the order (so no layout qualifier, or pre-330), then you should query the locations from the compiled/linked program, as the driver is free to assign them however it likes.



If you don't specify the order (so no layout qualifier, or pre-330), then you should query the locations from the compiled/linked program, as the driver is free to assign them however it likes.
This. No explicit layout == query all the locations from the application side after you link the program. It's the standard way to do that in OpenGL, regardless of what the NV driver does.
